A Quantum Computing-Based Accelerated Model for Image Classification Using a Parallel Pipeline Encoded Inception Module

Abstract: Image classification is a research area that trains an algorithm to accurately identify subjects in images it has never seen before. Training a model to recognize the images within a dataset is significant, as image classification has several applications in medicine, face detection, image reconstruction, etc. Despite these applications, the main difficulty in this area is the vast computation involved in the classification process, which slows classification down. Moreover, as conventional image classification approaches have fallen short of attaining high accuracy, an optimal model is needed. Quantum computing offers a way to resolve this: owing to their parallel computing ability, quantum-based algorithms can classify vast amounts of image data, which has theoretically confirmed the feasibility and advantages of incorporating a quantum computing-based system into traditional image classification methodologies. Considering this, the present study quantizes the layers of the proposed parallel encoded Inception module to improve the network performance. The study demonstrates the flexibility of deep learning (DL)-based quantum state computational methodologies for missing computations by creating a pipeline for denoising, state estimation, and imputation. Furthermore, controlled parameterized rotations are employed for entanglement, a vital component of the quantum perceptron structure. The proposed approach not only possesses the unique features of quantum mechanics but also maintains the weight sharing of the kernel. Finally, the MNIST (Modified National Institute of Standards and Technology) and Fashion MNIST image classification outcomes are obtained by measuring the quantum state, and overall performance is assessed to prove the approach's effectiveness in image classification.


Introduction
Quantum computing is a fast-growing technology that utilizes the laws of quantum mechanics to solve problems that are complicated for classical computers. Quantum computers can solve particular types of problems quickly in comparison with classical computers by exploiting quantum mechanical effects such as quantum interference and superposition. Applications in which quantum computers offer a speed boost include machine learning (ML), simulation, and optimization. These improvements create a wide range of opportunities in every aspect of modern life, including binary classification. The idea of classification is to capture the abstraction and extent of the problem space to which a given object belongs: given a set of classes, the task is to decide the category of the given object. In many data science disciplines, such as information retrieval, recommender systems, and data mining, classification is a core ML process. The efficiency of classification methods depends on the theory of sets, vector spaces, and probability, and improving classifier efficiency has been an important research topic in ML in recent decades [1]. As data grow constantly and exponentially, the core challenge lies in identifying innovative methods within ML. The structure of quantum mechanics (QM) has acted as a way to resolve these challenges, and many physicists have established the power of QM for information processing. While classical computers use two states, namely 0 and 1, quantum computers use superpositions of the states |0⟩ and |1⟩ to examine various measurement paths. Likewise, QM shifts the computational model from bits to quantum bits, referred to as qubits. ML algorithms can be inspired by the quantum mechanical framework; hence, it is natural to use QM in quantum computers to perform ML processes.
In recent decades, different algorithms that use quantum information theory have been widely utilized to address problems such as clustering and classification. The sub-field of ML that utilizes quantum computers to execute machine learning tasks is called quantum machine learning, and it has attracted considerable attention from investigators. Recently, various studies have revealed that quantum neural networks can attain excellent classification for certain datasets. Quantum neural networks are a broad research topic that introduces neural network models based on the mechanics of quantum computing. A quantum neural network has several advantages over traditional classification models, such as higher testing accuracy, excellent convergence, and a higher learning rate. There are various quantum classification algorithms, including quantum support vector machines, quantum nearest neighbor classification, and quantum discriminant analysis. The idea of the quantum k-nearest neighbor algorithm is similar to the concept of traditional k-nearest neighbor. The quantum machine learning model used to categorize classical data by using quantum resources is called a quantum support vector machine. The discriminant analysis created based on the techniques of quantum computing is called quantum discriminant analysis. However, QNNs yield better accuracy in image classification tasks compared to the other classification algorithms. Additionally, QNNs work well with the MNIST dataset due to the increasing capability of quantum computers.
Concurrently, neural networks are considered optimal for learning on conventional computers, yet they possess some drawbacks [2], such as overfitting and NP-complete problems. These limitations have paved the way for quantum-based deep learning (DL) approaches. It is not correct to say that quantum-based DL techniques can always resolve the issue of overfitting. Overfitting is the most frequent issue in traditional DL models and arises when the model becomes too complicated, which decreases its performance; it can also occur in quantum-based DL models. However, significant aspects of quantum computing, such as the utilization of quantum entanglement and the ability to execute computations in parallel, can lead to advanced techniques for reducing overfitting in DL models. Therefore, rather than stating that quantum-based DL approaches entirely resolve overfitting, it is more accurate to say that they can lead to advanced techniques for enhancing the efficiency and performance of DL models. However, many proposals cannot be applied to modern hardware, and some of them have no correspondence with traditional artificial neural networks. Therefore, deep learning approaches that perform well on near-term quantum computers have attracted more interest [3,4]. Though these models have the potential to overtake traditional models, a severe bottleneck occurs in training QNNs: QNNs with random structures exhibit low trainability, with gradients vanishing exponentially in the number of input qubits. The application of larger QNNs has been seriously affected by this vanishing gradient. A previous study [5] provided a feasible solution with strong theoretical guarantees.
In particular, a QNN with a step-controlled architecture and tree tensor structure has a gradient that vanishes at most polynomially with the number of qubits. QNNs with step-controlled structures and tree tensors have been established for binary classification; simulations showed faster convergence and higher accuracy compared with random-structure QNNs. An existing study [6] used a strong binary encoding strategy that the user can utilize without domain knowledge of CNNs, and a new quantum-based budding strategy was utilized to ensure the efficiency of the grown CNN. Finally, the performance of the suggested algorithm was measured using the classification accuracy on the different standard datasets generally utilized in DL. The experimental outcomes showed that the suggested model achieved optimal performance in comparison with conventional methods.
A previous paper [7] used a spiking feed-forward neural network (FFNN), referred to as a Spiking QNN (SQNN), to address robust image classification in the presence of adversarial attacks and noise. The SQNN has an inbuilt ability to process unanticipated noise in test images owing to its use of temporal and spatial information. A normal backpropagation algorithm was used in the variational quantum circuit to avoid SpikeProp, and spike-timing-dependent plasticity (STDP) was found to be inefficient for training the spiking feed-forward NN (SFNN). The SQNN was broadly tested on the PennyLane quantum simulator. The results show that the SQNN outperformed the SFNN, AlexNet, ResNet-18, and a random QNN on unseen, noisier test images from the MNIST, CIFAR10, ImageNet, KMNIST, and Fashion MNIST datasets. The main aim of another study [8] was to create a pre-convolutional neural network for evaluating the struggles of recent CNN architectures, which attained success in terms of classification error. The CNN model is based on two key factors of the Fashion MNIST dataset, which is augmented with three types of images that differ from the original in three ways. The suggested pre-convolutional structure has fewer parameters, lower computational cost, and higher accuracy in comparison with the existing strategy.
Regarding the existing method, in the CNN, the convolutional layer has been improved by increasing the number of layers to three with max pooling. By using the existing network, the new dataset has been classified, which results in an enhancement in classification performance of 0.6% over the accuracy of VGG16. In the future, a new dataset has to be applied to evaluate the classification methods and various convolutional structures. The proposed technique is distinct from other hybrid quantum-classical techniques, which usually utilize both quantum and classical computing resources to resolve the considered problem. The main benefit of the proposed technique is its ability to execute parallel computations, which results in faster and more effective classification compared to other classical models. Though existing studies have performed binary classification based on QNNs, by considering the recommended future works and to improve accuracy to a higher rate, the present study aims to replace and leverage traditional probability theory with general quantum probability theory by using an effective QNN-based algorithm, in accordance with the main contributions below:
•	Pre-process the image by resizing it to enhance the data for flexible processing.
•	Quantize the layers of the proposed parallel encoded Inception module to enhance the network performance for effective classification.
•	Reveal the flexibility of DL-based quantum state computing methodologies for missing computations by creating a pipeline for de-noising, imputation, and state estimation.
•	Evaluate the performance of the proposed study with regard to hinge accuracy, hinge loss, accuracy, and loss to confirm the efficacy of this study.

Significance of Study
The proposed technique emphasizes harnessing the distinctive features of quantum computing to enhance deep learning model performance, particularly for image classification. The utilization of entanglement and various other quantum mechanical concepts allows complex computations that are not possible with traditional approaches. In addition, the research suggests a pipeline for denoising, imputation, and state estimation that could resolve various issues related to quantum computing, such as limited qubit connectivity and noise. The research also upholds the weight sharing of the kernel, which is a significant aspect of well-established deep learning models. Hence, the proposed technique delivers the optimal use of quantum computing in image classification, and it is expected to provide enhancements in computational efficiency and accuracy compared to previous hybrid quantum-classical models.

Paper Organization
The paper is organized as follows, with Section 1 discussing the fundamental notions of quantum computing-based neural networks for binary classification. Following this, a review of conventional works is discussed in Section 2. Then, the overall proposed system with the flow, pseudocode, and processes is presented in Section 3. The overall outcomes attained after the implementation and analysis of the proposed system are included in Section 4. Lastly, the study is concluded in Section 5 with suggestions for the future.

Review of Existing Work
Different works undertaken by existing studies for quantum-based binary classification are reviewed in this section, with problem identification.
Quantum parameterized circuits were investigated in a previous study [9] for image recognition by using a quantum deep convolutional neural network (QDCNN). Similar to the classical DCNN, the construction demonstrated the order of the quantum convolutional layer and quantum classification layer. A quantum-classical hybrid training scheme was established to update the parameters in the QDCNN, drawing inspiration from the variational quantum algorithm. The analysis of network complexity showed that the existing model achieves exponential acceleration compared to its traditional counterpart. Additionally, the German Traffic Sign Recognition Benchmark (GTSRB) and MNIST datasets were used for numerical simulation, and the validity and feasibility of the model were verified using the experimental results. Similarly, another study [2] used a quantum discriminator for binary feature extraction of data and assigned the data to the appropriate class. Many binary features were obtained by using a training algorithm, the computational complexity was analyzed, and the generalizability of the suggested model was explained. The results on XOR attained optimal classification accuracy. A quantum algorithm allowed the implementation of a general perceptron model by using a qubit-based quantum register. Continuously valued input data were analyzed by the quantum artificial neuron [10][11][12]; for the classification process, the existing continuously valued quantum neuron worked better. By using phase encoding, color translational invariance and noise resilience were leveraged by the neurons. In particular, the activation function applied with the help of the quantum neuron attained 98% accuracy. The Helstrom measurement-based binary classifier was compared comprehensively against various commonly utilized classifiers [13]. The important statistical quantities were analyzed for each algorithm on 14 different datasets.
Overall, the new algorithm performed better than the considered classifiers. Another previous study [14] used QuantumFlow, a co-design framework that connects the missing link between neural networks and quantum circuits. QuantumFlow comprises a quantum-friendly neural network (QF-Net), an automatic tool (QF-Map) for mapping the network to a quantum circuit, a theoretically grounded execution engine, and the quantum circuit of QF-Net to support QF-Net training on a classical computer. Additionally, rather than utilizing traditional batch normalization, a quantum-aware batch normalization method was utilized for QF-Net in order to attain higher accuracy in DNNs. The results showed that the existing model achieved 97.01% accuracy for differentiating the digits three and six in the extensively used MNIST dataset, which is nearly 14.55% more than the quantum-aware baseline. This case study was conducted for the application of binary classification. Running on the ibm_essex (IBM quantum processor) backend, a neural network designed by QuantumFlow attained an accuracy of 82%.
A previous study [15] used QuClassi for multi-class and high-count classification with a limited number of qubits. Experiments were conducted on quantum simulators, and the performance on the IonQ and IBM-Q quantum platforms was determined by accessing Microsoft's Azure Quantum platform. The results showed that QuClassi performed better than TensorFlow Quantum, other quantum-based solutions, and QuantumFlow for multi-class and binary classification [16]. QuClassi showed better performance in comparison with a conventional DNN and attained 97.37% accuracy with fewer parameters. Likewise, another study [17] aimed to assess and compare the performance of two quantum machine learning (QML) models using publicly available, synthetic, and private datasets. Pre-processing was utilized to map the data into quantum states for conducting quantum-based classification. In particular, the method focused on enhancing the data encoding models, which were outlined by utilizing the IBM Qiskit framework. An amplitude encoding technique assisted in enhancing the performance of the Variational Quantum Classifier (VQC) model. Many experiments were conducted utilizing the same parameters and features with the VQC, including the amplitude-encoding VQC and basis-encoding VQC. An analog quantum computer was used in a previous study [18] for a quantum variational embedding classifier, in which the control signals were varied continuously in time, with a specific focus on implementation using quantum annealers. Thus, in the existing algorithm, the traditional data were changed into time-varying Hamiltonian parameters through a linear transformation on the analog quantum computer.
Numerical simulations were performed, which demonstrated the effectiveness of the suggested algorithm in performing the multiclass and binary classification on linear inseparable datasets such as MNIST digits and concentric circles. The recommended method performed better in comparison with the traditional linear classifiers. The performance of the classifier was increased with a higher number of qubits. The existing algorithm utilized quantum annealers to solve practical ML issues, which has been useful for the exploration of quantum advantages in QML.
The parameterized quantum circuit has acted as a basis for the hybrid quantum-classical CNN (HQCNN) method [19], which comprises classical and quantum components and has been used for image classification. The quantum convolutional layer was designed using the parameterized quantum circuit. A linear unitary transformation was performed on the quantum state to extract hidden information. Moreover, a pooling operation was performed based on the quantum pooling unit. The potential of the existing study was demonstrated on the MNIST dataset.
Compared with the CNN, the outcomes showed that a faster training speed was attained by the HQCNN, along with high testing-set accuracy. Experimental simulation classified every binary subset present in the MNIST dataset and showed better performance. A previous study [20] used a hybrid quantum-classical neural network for multi-class and binary classification. The existing method was used for classifying non-trivial datasets (MNIST and finance data [21]). When compared with the purely classical network, advantages were observed in various performance measures. As in classical ML, overfitting issues were found on the dataset; hence, various possibilities have been explored for regularizing the network. The quantum support vector machine (QSVM) method was utilized in another previous study [22] to solve classification issues using the benchmark MNIST dataset. The kernel matrix and QSVM variational algorithms were applied to analyze the quantum speedup on physical processor backends. A quantum neural network with a CNN was used in a previous study [23], which utilized two-qubit interactions for the complete algorithm. In many instances, the existing QCNN attained better classification accuracy with a lower number of free parameters. The suggested QCNN algorithm utilizes shallow-depth, fully parameterized quantum circuits, which are appropriate for noisy intermediate-scale quantum (NISQ) devices [24]. RBM training has been investigated utilizing quantum sampling on two generations of D-Wave quantum annealers; the new D-Wave Advantage QPU model contains more than 5000 qubits and 35,000 couplers to manage qubit connectivity and noise [25]. Therefore, in a previous study, quantum algorithms were designed for dimensionality reduction and classification and were connected to provide a quantum classifier, which was tested on the MNIST dataset.
The quantum classifier was simulated, including errors in the quantum processes, and 98.5% accuracy was reached. The quantum classifier's running time is polylogarithmic in the number of data points and dimensions [26]. The main issues identified through the analysis of traditional works are listed as follows:
•	The conventional studies have had less accuracy; the activation function applied by quantum neurons [10] attained an accuracy of 98% for discriminating images of 0 and 1 from the MNIST dataset, QuantumFlow [14] obtained an accuracy of 82%, and QuClassi [15] reached 97.37%.
•	In quantum-based learning fields, the low-qubit representation of quantum data and the implication methods necessitate more investigation for better understanding [15].

Proposed Methodology
This research intends to propose a quantum-based neural network for the binary classification of the MNIST and Fashion MNIST datasets. Though existing works have endeavored to perform this, they fell short in terms of classification rate. To attain better accuracy, this study proposes methods that follow the sequence of steps shown in Figure 1. Initially, the MNIST dataset is loaded. Then, pre-processing is undertaken; this process transforms raw data into clean data. If pre-processing is not considered, errors in the data will prevail and reduce the outcome quality. In this phase, the images are resized. Generally, models train more quickly on small images; an image that is twice as large requires the network to learn four times as many image pixels, and this time adds up. Hence, pre-processing is considered. The result is fed into the train and test split: 80% of the data are used for training, while 20% are used for testing.
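As an illustration, the resizing and 80/20 split described above can be sketched in plain Python. This is a minimal sketch: the `resize_half` downsampling, the toy 4 × 4 image, and the fixed seed are hypothetical stand-ins for the actual MNIST pipeline, not the paper's implementation.

```python
import random

def resize_half(image):
    """Downsample a square image by keeping every second row and column."""
    return [row[::2] for row in image[::2]]

def train_test_split(dataset, train_fraction=0.8, seed=0):
    """Shuffle the dataset and cut it into train/test portions."""
    data = dataset[:]
    random.Random(seed).shuffle(data)
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

# A toy 4x4 "image" shrinks to 2x2, analogous to resizing MNIST digits.
img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(resize_half(img))  # [[1, 3], [9, 11]]

# 80% of the samples go to training, 20% to testing.
train, test = train_test_split([(img, 0)] * 10)
print(len(train), len(test))  # 8 2
```

Smaller inputs reduce the pixel count quadratically, which is why the resizing step speeds up training as described above.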
Following this, quantum state mapping is performed, which is utilized to explore an extensive transformational class that a quantum system could undergo.
The standard notation utilized in quantum mechanics to denote quantum states is called the Dirac notation. Three basic forms of Dirac notation are utilized in this paper, as presented below:
1.	Bra-ket notation: In this notation, a quantum state is denoted as a ket vector |ψ⟩, while the bra vector ⟨ψ| denotes its conjugate transpose. For instance, this manuscript utilizes the notations |0⟩ and |1⟩ to denote the qubit basis states; their conjugates are represented by ⟨0| and ⟨1|.
2.	Density matrix notation: A density matrix ρ represents the quantum state in this notation; it is a Hermitian matrix that describes the probability distribution of the quantum system. This manuscript utilizes this notation to denote mixed states that are not entirely in a single basis state. For instance, the notation ρAB denotes the density matrix of a two-qubit system.
3.	Operator notation: Quantum operators are denoted as matrices that act upon quantum states. For instance, this manuscript utilizes the notations X and Z to denote the Pauli-X and Pauli-Z operators, which are among the most frequently utilized in quantum computing. These operators act on qubits and are utilized to execute operations such as rotations and measurements.
In order to use these notations consistently, it is imperative to utilize the necessary symbols and formatting. For instance, bra-ket notation must combine angle brackets and vertical bars (⟨ψ| for bra vectors and |ψ⟩ for ket vectors); meanwhile, density matrix notation must utilize boldface font for matrices.
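The three notations above map directly onto ordinary vectors and matrices. The sketch below uses an assumed example state |ψ⟩ = (|0⟩ + i|1⟩)/√2 to show a ket, its bra (conjugate transpose), and the corresponding pure-state density matrix ρ = |ψ⟩⟨ψ|:

```python
import math

# Ket |psi> as a column of complex amplitudes (example state, assumed).
ket = [complex(1 / math.sqrt(2), 0), complex(0, 1 / math.sqrt(2))]

# Bra <psi| is the element-wise complex conjugate (transpose implied).
bra = [a.conjugate() for a in ket]

# Inner product <psi|psi> equals 1 for a normalized state.
inner = sum(b * k for b, k in zip(bra, ket))
print(inner.real)  # 1.0

# Density matrix rho = |psi><psi| (outer product) for this pure state.
rho = [[k * b for b in bra] for k in ket]

# The trace of any valid density matrix is 1.
trace = sum(rho[i][i] for i in range(2))
print(trace.real)  # 1.0
```

The same containers generalize to operators: Pauli-X and Pauli-Z are simply 2 × 2 matrices applied to kets by matrix-vector multiplication.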
This research quantizes the layers corresponding to the proposed parallel encoded Inception module to improve the network performance. It also demonstrates the flexibility of DL-based quantum state computational methodologies for missing computations through the creation of a pipeline for denoising, state estimation, and imputation. Furthermore, controlled parameterized rotations are taken into account for entanglement, which is a crucial component of the quantum-based perceptron structure. On this basis, classification is performed. The overall study is assessed with regard to performance metrics to confirm its efficiency.
In classical computation, the traditional bit is deterministic: it is either (0) or (1). A quantum bit can be |0⟩ or |1⟩, or it may remain in a superposition, as given by Equation (1):

|ϕ⟩ = α|0⟩ + β|1⟩	(1)

In Equation (1), (α) and (β) indicate complex numbers with |α|² + |β|² = 1. When the quantum state (|ϕ⟩) is measured, it collapses to (|0⟩) with probability |α|² and to (|1⟩) with probability |β|². Here, α and β are termed the probability amplitudes of the quantum state. In general, 2ⁿ states can be represented concurrently by "n" qubits.
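The normalization and measurement rule in Equation (1) can be checked numerically. The amplitudes below are example values (an equal superposition with a relative phase), not taken from the paper:

```python
import math

# Example amplitudes for |phi> = alpha|0> + beta|1> (assumed values).
alpha = complex(1 / math.sqrt(2), 0)
beta = complex(0, 1 / math.sqrt(2))

# Normalization constraint from Equation (1): |alpha|^2 + |beta|^2 = 1.
norm = abs(alpha) ** 2 + abs(beta) ** 2
assert math.isclose(norm, 1.0)

# On measurement the state collapses to |0> with probability |alpha|^2
# and to |1> with probability |beta|^2.
p0 = abs(alpha) ** 2
p1 = abs(beta) ** 2
print(p0, p1)  # each ~0.5 for this equal superposition
```

Note that the complex phase of β does not change the measurement probabilities; only the squared magnitudes matter here.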
In the quantum computing environment, sequences of unitary qubit state transformations correspond to logic gates. Hence, quantum devices that realize these logical transformations within a specific time are termed quantum gates. These gates are the foundation of quantum computation: they express computation in the quantum environment and encompass its quantum features.

Double-Qubit Gate
Double-qubit gates include the CNOT (Controlled-NOT) gate, which is also termed the quantum XOR gate. It has dual inputs (|x⟩ and |y⟩), each a qubit (thus, a two-qubit gate), and the operator that transforms them must be a matrix with dimensions of (4 × 4), as given by Equation (3):

CNOT =
[1 0 0 0]
[0 1 0 0]
[0 0 0 1]
[0 0 1 0]	(3)

This matrix performs the conversions |00⟩ → |00⟩, |01⟩ → |01⟩, |10⟩ → |11⟩, and |11⟩ → |10⟩. In this study, a recently suggested distance-based classifier is extended, and an explicit quantum implementation model is afforded. To accomplish this, a certain state preparation operation is required: a probable path to construct (R_x) involves the use of three rotations (R_y(α_i)), two of which are controlled, wherein the angles (α_i) are determined from the amplitudes (e.g., a₃ = arctan(α₃/α₂)). Classification problems handle datasets encompassing more than two data points. One way to handle this relies on utilizing two qubits, which is valuable for a NISQ device. In this case, the 2nd and 3rd quantum registers possess qubit encoding. With this method, the state synthesis is less complicated, and fewer qubits are needed.
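The CNOT truth table above can be verified by applying the 4 × 4 matrix of Equation (3) to the computational basis states; the small `apply` helper is for illustration only:

```python
# CNOT as a 4x4 unitary, with basis states ordered |00>, |01>, |10>, |11>.
CNOT = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]

def apply(gate, state):
    """Matrix-vector product: act on a state vector of amplitudes."""
    return [sum(gate[r][c] * state[c] for c in range(len(state)))
            for r in range(len(gate))]

# |10>: the control qubit is 1, so the target flips, giving |11>.
print(apply(CNOT, [0, 0, 1, 0]))  # [0, 0, 0, 1]

# |01>: the control qubit is 0, so the state is unchanged.
print(apply(CNOT, [0, 1, 0, 0]))  # [0, 1, 0, 0]
```

Because the control column is preserved and only the target bit flips, CNOT acts exactly like a reversible classical XOR on the basis states.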
It is assumed that for a 2D quantum system, a generalized gate category is adopted, as given by Equation (4). In accordance with varied values of the parameter (k), the controlled impact comprises three cases, as given by Equation (5):

QNN (Quantum Neural Network)
A QNN is a natural advancement of a conventional neural computational system. It utilizes substantial quantum computing power to enhance the information processing ability of the neural network. Hence, QNNs afford beneficial assistance for integrating neural computation and quantum computation.
In accordance with quantum states, a quantum neuron-based model is proposed, as shown in Figure 2. The input is given by |x_i⟩, and the output is presented by the state probability of |1⟩, wherein the rotation gates R(θ_i) and the controlled-NOT gate (U(γ)) process the quantum weights and thresholds. Here, (U(γ) = R(f_γ)), and R(k) has already been defined in Equation (4).
To better comprehend the association between the input and the output of quantum neurons, Equation (6) is given, in which |x_i⟩ indicates the quantized input data, as given by Equation (7). After the CNOT gate function, the above resultants are given as per Equation (8). In this study, the probability amplitude of the state (|1⟩) in the quantum neuron is regarded as the output of the distinct layer. In accordance with quantum principles, the results of the individual network layer are given by Equation (9). In comparison to conventional back-propagation neural networks, the present study uses a 4-layered neural network. The data used in this study are real-numbered. The quantization rules for the input data are given below.
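A toy version of such a quantum neuron can be sketched with a single R_y rotation: a real input is encoded as an angle, the rotation is applied to |0⟩, and the probability of measuring |1⟩ is read out as the neuron's activation. The angle encoding x → πx used here is an assumption for illustration only, not the paper's quantization rule:

```python
import math

def ry(theta):
    """Single-qubit rotation about the y-axis (real-valued matrix)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply(gate, state):
    """Matrix-vector product on a 2-amplitude state."""
    return [sum(g * a for g, a in zip(row, state)) for row in gate]

# Toy neuron: encode a real input x as a rotation angle (assumed mapping),
# rotate |0>, and take P(|1>) as the activation.
x = 0.5
state = apply(ry(math.pi * x), [1.0, 0.0])  # start from |0>
p_one = abs(state[1]) ** 2
print(p_one)  # 0.5 for x = 0.5
```

The activation is sin²(πx/2), a smooth, bounded function of the input, which is why rotation angles are a natural carrier for real-valued data in quantum neurons.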
A sample of real values X = [x₁, x₂, . . ., x_i]ᵀ can be transformed to |X⟩ = [|x₁⟩, |x₂⟩, . . .]ᵀ using Equation (10) below. In accordance with the above description, the mathematical expressions corresponding to the quantum neural network are given by Equation (11), and in Equation (12), i = 1, 2, . . ., n and j = 1, 2, . . ., m. The proposed approach is partitioned into two phases: feature extraction and classification. Initially, feature extraction is performed with a pre-trained Inception-V3 model. The Inception-V3 model is chosen as it utilizes various methodologies to optimize the network and attain better model adaptation. It possesses a deeper network than the Inception-V1 and Inception-V2 models; however, its speed is not compromised, and it is typically less expensive. After feature extraction, the attained score is passed to quantum learning to perform classification. Then, the 4-layered neural network is trained with the score vector in accordance with the selected hyper-parameters to perform classification. X represents the NOT gate. The overall schematic process is illustrated in Figure 3.
Quantum computers, similar to traditional computers, use operational gates to regulate and modify qubit configurations. A unitary matrix can be used to explain the transformation from one quantum state to another through quantum gates. Qubit execution relies on the hardware framework of these quantum gates, and each execution possesses a unique methodology to generate quantum states. Several qubits can be used to run the quantum gates. Quantum circuits, or a range of various quantum gates functioning on more than one qubit, can be used to run a quantum technique. Assume that a Hadamard gate (H) acts on the qubit |0⟩, resulting in a qubit in the superposition state given in Equation (13). The overall structure of the 4-qubit methodology is shown in Figure 4.
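The Hadamard action described in Equation (13) can be checked numerically; the sketch below builds H as a unitary matrix and applies it to |0⟩, yielding the equal superposition in which both basis states are measured with probability 1/2.

```python
import numpy as np

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2),
# as in Equation (13).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
superposition = H @ ket0

# Measurement probabilities of |0> and |1> are both 1/2.
probs = np.abs(superposition) ** 2

# Like every quantum gate, H is unitary: H H^dagger = I.
unitary_check = H @ H.conj().T
```

This mirrors how any gate in the circuit acts: a unitary matrix multiplying the state vector, with squared amplitudes giving measurement probabilities.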
In this case, the control is shown by black dots in the schematic of the CNOT gate, whereas the target is indicated by the "x" symbol inside a circle. When the control qubit is in the |1⟩ state, the CNOT gate inverts the quantum state of the target (|0⟩ to |1⟩), and vice versa. Parametric quantum gates function in accordance with parameters placed on the gates. RX, RY, and RZ are parametric gates possessing the functional matrices given by Equations (14)-(16), where e denotes the base of the natural exponential. The X(φ), Y(φ), and Z(φ) gates rotate the qubit vector about the x-axis, y-axis, and z-axis, respectively. Subsequently, the qubit vector is rotated on the three rotational axes with altering angles through the use of the three parametric gates. Taking into account the rotational gate R(φ) in matrix form, its function relies on the parameters given in Equation (17). In Equation (17), w represents the weights. This gate rotates a qubit about the z-axis, then the y-axis, and lastly back to the z-axis. In the circuit diagram, each vertical line represents a qubit. The gates are represented by labeled boxes, and the connections between the qubits are indicated by lines or wires that link the gates together. The gates are labeled according to the specific operations applied to each qubit, such as rotation gates around the z-axis (Rz) and the y-axis (Ry), as described earlier. Parameter tuning is performed such that the circuit undergoes a unitary evolution, leading to the intended conclusions. The parameters considered to train the model are shown in Table 1. Table 1 shows the learning parameters of the QNN: 4 quantum bits, 6 QNN layers, a learning rate of 0.001, and a batch size of 128, over 8 training epochs. Lastly, classification outcomes are procured by measuring the quantum state. The Softmax layer of Inception-V3 generates a score that is fed into the variational quantum model.
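The parametric rotation gates of Equations (14)-(16) and the Rz-Ry-Rz sequence of Equation (17) can be sketched directly as matrices. The code below is a minimal NumPy rendering of these standard gate definitions; the particular angles are arbitrary illustration values.

```python
import numpy as np

def rx(phi):
    """Rotation about the x-axis, Equation (14)."""
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(phi):
    """Rotation about the y-axis, Equation (15)."""
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    """Rotation about the z-axis, Equation (16); uses e^(+-i*phi/2)."""
    return np.array([[np.exp(-1j * phi / 2), 0], [0, np.exp(1j * phi / 2)]])

# An arbitrary single-qubit rotation can be composed as Rz * Ry * Rz,
# matching the z-axis, then y-axis, then z-axis sequence of Equation (17).
U = rz(0.3) @ ry(0.5) @ rz(0.7)
```

Since each factor is unitary, the composed gate U is unitary as well, so the circuit evolution it produces preserves the norm of the quantum state.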
A 4-qubit framework is used to train the model. The overall visual depiction of this framework is shown in Figure 4.

Results and Discussion

The outcomes attained through the execution of the proposed system are discussed in this section. The dataset description, EDA (exploratory data analysis), and performance and comparative analysis outcomes are presented in this section.

Dataset Description
The present study considered the MNIST and Fashion MNIST datasets. MNIST is a classical image classification dataset, and many researchers take it as the initial choice to assess the classification rate of various algorithms. Nevertheless, the images within this dataset are simple and cannot completely reveal the performance of a classifier. Hence, in addition to MNIST, Fashion MNIST is also used. The MNIST dataset includes 10 categories of hand-written digits as gray-scale images, encompassing 60,000 training samples and 10,000 testing samples, with each image sized 28 × 28. Fashion MNIST is a gray-scale image dataset encompassing 70,000 images in 10 categories, with each image also sized 28 × 28.

EDA (Exploratory Data Analysis)
EDA is a strategy to assess data with the use of visualization methodologies. It is utilized for discovering patterns and trends or to validate assumptions with graphical representations and statistical summaries. First, the original MNIST image before downscaling is shown in Figure 5, while the image after downscaling is shown in Figure 6. Subsequently, the original Fashion MNIST image before normalization is shown in Figure 7, while the image after normalization is shown in Figure 8.
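The paper does not specify its exact downscaling or normalization routines, so the sketch below illustrates two common choices consistent with Figures 5-8: block-average downscaling of a 28 × 28 image and rescaling pixel intensities to [0, 1]. Both functions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def downscale(img, factor):
    """Downscale a grayscale image by averaging factor x factor blocks
    (illustrative; the paper's exact resizing method is not specified)."""
    h, w = img.shape
    img = img[: h - h % factor, : w - w % factor]  # crop to a multiple of factor
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def normalize(img):
    """Scale 8-bit pixel intensities from [0, 255] to [0, 1]."""
    return img.astype(np.float64) / 255.0

img = np.arange(28 * 28, dtype=np.float64).reshape(28, 28)
small = downscale(img, 7)  # 28x28 -> 4x4
```

Aggressive downscaling of this kind is common in quantum image-classification pipelines, since the reduced pixel count must fit a small qubit budget.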


Performance Analysis
The proposed system was assessed on the MNIST and Fashion MNIST datasets in terms of hinge accuracy, hinge loss, accuracy, and loss. The respective outcomes are discussed in this section. First, the results for the MNIST dataset in terms of hinge accuracy and hinge loss are shown in Figures 9 and 10. In Figure 9, the blue line represents the training accuracy, the orange line represents the testing accuracy, and the light blue and light orange lines represent the accuracy previously obtained by the model.
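As a reference for these metrics, the sketch below computes hinge loss and hinge accuracy in their standard form for labels in {-1, +1}; the paper does not print its metric definitions, so this is the conventional formulation assumed here.

```python
import numpy as np

def hinge_loss(y_true, y_pred):
    """Mean hinge loss for labels in {-1, +1} and real-valued predictions."""
    return float(np.mean(np.maximum(0.0, 1.0 - y_true * y_pred)))

def hinge_accuracy(y_true, y_pred):
    """Fraction of predictions whose sign matches the {-1, +1} label."""
    return float(np.mean(np.sign(y_pred) == y_true))

y = np.array([1, -1, 1, -1])
p = np.array([0.8, -0.3, 0.4, 0.9])  # last prediction has the wrong sign
loss = hinge_loss(y, p)              # (0.2 + 0.7 + 0.6 + 1.9) / 4 = 0.85
acc = hinge_accuracy(y, p)           # 3 of 4 signs correct = 0.75
```

Hinge loss penalizes correct-sign predictions that fall inside the margin (|y_pred| < 1) as well as wrong-sign ones, which is why it is a natural fit for the ±1 expectation values measured from a quantum circuit.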


From Figure 9, it can be seen that the hinge accuracy varies across epochs but increases overall. In contrast, Figure 10 shows that the epoch loss remained low across epochs; here, the blue line represents the training loss, the orange line represents the testing loss, and the light blue and light orange lines represent the loss previously obtained by the model. In addition, the overall accuracy and loss of the model for the MNIST dataset are shown in Figures 11 and 12. From Figure 11, the testing accuracy of the model for the MNIST dataset correlates with the training accuracy, and from Figure 12, the testing loss likewise tracks the training loss. From these analytical outcomes, the model accuracy is high and the model loss is low for the MNIST dataset. Additionally, the results for the Fashion MNIST dataset in terms of hinge accuracy and hinge loss are shown in Figures 13 and 14.
In Figure 13, the blue line represents the training accuracy, the orange line represents the testing accuracy, and the light blue and light orange lines represent the accuracy previously obtained by the model. From Figure 13, it can be seen that the hinge accuracy varies across epochs but increases overall. In contrast, Figure 14 shows that the epoch loss remained low across epochs.
Here, the blue line represents the training loss, the orange line represents the testing loss, and the light blue and light orange lines represent the loss previously obtained by the model. Additionally, the overall accuracy and loss of the model for the Fashion MNIST dataset are shown in Figures 15 and 16.
From Figure 15, the testing accuracy of the model for the Fashion MNIST dataset correlates with the training accuracy. Moreover, as shown in Figure 16, the testing loss of the model for the Fashion MNIST dataset likewise tracks the training loss. From the overall analytical outcomes, the model accuracy is high and the model loss is low for the Fashion MNIST dataset. This reveals the better performance of the proposed system.

Internal Comparison
The proposed system is internally compared with regard to hinge accuracy and hinge loss for the Fashion MNIST and MNIST datasets. The results are shown in Table 2, with the corresponding plots in Figures 17 and 18. From the analytical outcomes, the hinge accuracy for the Fashion MNIST dataset is found to be 0.9996, while that for the MNIST dataset is 0.9597. On the other hand, the hinge loss for Fashion MNIST is 0.0101, while that for MNIST is 0.1655. Although the proposed system performs well on both datasets, performance is higher for the Fashion MNIST dataset than for the MNIST dataset.

Comparative Analysis
The proposed work has been assessed through comparison with recently published studies [27,28] to demonstrate its better performance over conventional works. The proposed approach utilizes the parallel encoded Inception module with quantized layers to enhance the efficiency of the network. It also integrates DL-based quantum state computational approaches for state estimation, denoising, and imputation. Furthermore, the proposed technique uses controlled parameterized rotations for entanglement and maintains the weight sharing of the kernel.
PCA-VQC [27]: The PCA-VQC method utilizes PCA (principal component analysis) to decrease the input data dimensionality prior to training the variational quantum classifier (VQC). The VQC is trained using a classical optimization algorithm, with the goal of reducing the loss function on the reduced dataset.
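The PCA step in this pipeline can be sketched with a plain SVD; the component count and data shapes below are illustrative, chosen to show how a 784-dimensional image vector is shrunk to fit a 4-qubit circuit.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its top principal components via SVD; in PCA-VQC,
    such a projection shrinks inputs to fit the qubit budget of the circuit."""
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T              # coordinates in the top components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 784))                  # e.g. flattened 28x28 images
Z = pca_reduce(X, 4)                             # 4 features -> a 4-qubit encoder
```

Each of the four retained components can then be encoded as one rotation angle per qubit, which is the usual way a reduced feature vector enters a variational circuit.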
MPS-VQC [27]: The MPS-VQC method utilizes matrix product state (MPS) techniques to represent quantum states prior to training a VQC. MPS-VQC has been shown to be more effective than standard VQC techniques for various kinds of quantum data.
The proposed technique performs better than PCA-VQC and MPS-VQC on both the Fashion MNIST and MNIST datasets. Furthermore, VQC4 and VQC2 denote two different versions of the variational quantum classifier. The primary difference between VQC4 and VQC2 is the number of qubits used in the quantum circuits, which impacts the performance and complexity of the VQCs for classification tasks. In particular, VQC4 executes quantum computations on four input qubits and is appropriate for classification tasks that require a higher-dimensional feature space.
The procured results are discussed in this section. Initially, a comparison is made with the existing methods PCA-VQC (principal component analysis-variational quantum circuit) and MPS-VQC (matrix product state-variational quantum circuit) for the MNIST dataset. The corresponding outcomes are shown in Figures 19 and 20. From the analytical results, the accuracy of the existing MPS-VQC is 99.44% and that of PCA-VQC is 87.34%, while the proposed system attains 99.97% accuracy. In contrast, the loss for MPS-VQC is 0.3183, while the proposed system shows a lower loss of 0.0189. Following this, a comparison is made with PCA-VQC and MPS-VQC for the Fashion MNIST dataset. The respective outcomes are shown in Table 3, with their graphical depiction in Figures 21 and 22.

Table 3. Comparative analysis with regard to accuracy and loss for the Fashion MNIST dataset.

Model            Test Accuracy    Test Loss
PCA-VQC2 [28]    82.10%           0.5882
PCA-VQC4 [28]    85.35%           0.4806
MPS-VQC2 [28]    95.55%           0.3580
MPS-VQC4 [28]    96.05%           0.3561
Proposed         99.32%           0.0157

From the analytical outcomes, the accuracy of the existing MPS-VQC4 is 96.05% and that of PCA-VQC4 is 85.35%, while the proposed system attains 99.32% accuracy. In contrast, the loss for MPS-VQC4 is 0.3561, while the proposed system has a lower loss of 0.0157. Following this, a comparison is made with the MPS-VQC hybrid model for the Fashion MNIST dataset. The respective outcomes are shown in Table 4, with their graphical depiction in Figure 23.
Table 4. Analysis with regard to accuracy.

Model                        Accuracy
MPS-VQC hybrid model [28]    96%
Proposed                     99.32%

In Table 4 and Figure 23, it is shown that the existing MPS-VQC hybrid model attains 96% accuracy, while the proposed system attains 99.32% accuracy. Hence, from the comparative analysis, the proposed work shows better performance than conventional works. Typically, QNNs utilize substantial quantum computing power to improve the information processing capability of neural networks; thus, QNNs bring benefits by integrating neural computation with quantum computation. In this case, the Inception-V3 model is considered an efficient model, as it makes use of various methodologies for optimizing the network so as to attain better model adaptation. Due to these advantages, the proposed system gained the ability to outperform the conventional systems, which is confirmed through the results.

Conclusions
This study aimed to use the quantum-based Inception module to classify the MNIST and Fashion MNIST datasets, using classes 5 and 7 for binary classification on both datasets. To accomplish this, the layers of the proposed parallel encoded Inception module are quantized to improve the performance of the network for effective classification. The main contributions of this study are:

•	Quantization of the layers of the Inception module: This includes mapping the continuous-valued weights in the Inception module to separate quantum states, which, in turn, allows for more effective computation and possibly higher accuracy.
•	Pipeline for state estimation, denoising, and imputation: The proposed technique involves a pipeline that addresses the challenges related to noise and constrained qubit connectivity in quantum computing. This pipeline involves denoising to eliminate undesired quantum noise, state estimation to deduce the quantum state of a system from noisy measurements, and imputation to fill in missing data if qubits fail.
•	Controlled parameterized rotations for entanglement: Entanglement is a key concept of quantum mechanics that enables complex computations that are not feasible with traditional systems. The proposed technique uses controlled parameterized rotations to produce entanglement and enhance the efficiency of the quantum perceptron structure.
•	Weight sharing of the kernel: The proposed technique maintains the concept of weight sharing of the kernel from classical deep learning models, which is significant for decreasing the number of parameters. Thus, it enhances the efficiency and performance of the model.
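The controlled parameterized rotation named in the contributions can be sketched as a 4 × 4 matrix acting on two qubits; the controlled-RY form and the angle below are illustrative assumptions, showing only how such a gate generates entanglement when its control qubit is in superposition.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def controlled_ry(theta):
    """4x4 controlled-RY: rotates the target qubit only when the control is |1>.
    Applied after superposing the control, it entangles the two qubits."""
    U = np.eye(4)
    U[2:, 2:] = ry(theta)  # act on the |10>, |11> subspace
    return U

# Put the control qubit in superposition with a Hadamard, then entangle.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
state = np.kron(H @ ket0, ket0)          # (|00> + |10>)/sqrt(2), still a product state
entangled = controlled_ry(np.pi / 2) @ state
```

Reshaping the resulting amplitude vector into a 2 × 2 matrix gives a nonzero determinant, which certifies that the state no longer factors into independent single-qubit states, i.e. it is entangled.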
Overall, these contributions demonstrate the advanced usage of quantum computing in image classification, which emphasizes and enhances the accuracy and efficiency of existing models. However, future investigation is required to fully evaluate the performance of the proposed technique.
The flexible nature of DL-based quantum state computational methodologies was exposed for missing computations through the creation of a pipeline for imputation, denoising, and state estimation. The proposed study was evaluated in terms of hinge accuracy, hinge loss, accuracy, and loss. Moreover, a comparison was undertaken with recent conventional studies on the MNIST and Fashion MNIST datasets. From the outcomes, it was found that the proposed system achieved 99.32% accuracy for the Fashion MNIST dataset and 99.97% accuracy for the MNIST dataset. The loss was also lower for the proposed system in comparison to the conventional systems. Hence, the outcomes confirmed the better performance of the proposed work. In the future, the quantum architecture can be modified and the time complexity should be decreased.