Nikhil Adithyan

Stock Price Prediction with Quantum Machine Learning in Python

Updated: Jun 9

An overview of the challenges and opportunities



Intro: Stock Price Prediction, Machine Learning, and Quantum ML

Today, we’re diving into the intersection of quantum computing and machine learning, exploring quantum machine learning and its application in stock price prediction. Our main goal is to compare the performance of a quantum neural network for stock price time series forecasting with a simple single-layer MLP.


To facilitate this project, we’ll be utilizing the historical price API endpoint offered by Financial Modeling Prep (FMP), since reliable, accurate data is critical for this task. With that said, let’s dive into the article.


Importing the Data

Let’s start by importing the necessary libraries for our analysis. These libraries will provide the basic tools required to explore and implement our project.



import numpy as np
import pandas as pd
import requests
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.circuit.library import PauliFeatureMap
from qiskit.primitives import Sampler
from qiskit.algorithms.optimizers import ADAM  # on recent Qiskit versions: from qiskit_algorithms.optimizers import ADAM
from qiskit_machine_learning.algorithms import VQR  # the variational quantum regressor we train later

We’ve set up our environment by importing the Qiskit libraries for building quantum circuits and quantum machine learning models, along with other essential libraries. To extract the data, we’ll use the historical data API endpoint provided by Financial Modeling Prep.


FMP’s historical data API offers a conveniently accessible endpoint with an extensive collection of historical stock data, which proves valuable at every step of our project. Its comprehensive coverage and straightforward interface add depth and accuracy to our analysis.


Now we are going to extract historical data as follows:



api_url = "https://financialmodelingprep.com/api/v3/historical-price-full/AAPL?apikey=YOUR API KEY"

# Make a GET request to the API
response = requests.get(api_url)

# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Parse the response JSON
    data = response.json()
else:
    # stop here so that `data` is never referenced while undefined
    raise RuntimeError(f"Error: Unable to fetch data. Status code: {response.status_code}")

data

Replace "YOUR API KEY" with your secret API key, which you can obtain by creating an FMP account. The output is a JSON response which looks as follows:



Introduction to Quantum Computing

In regular computers, we have tiny switches called “logic gates.” These switches control how information moves around. They work with basic units of data called “bits,” which can be either 0 or 1. The gates let computers do calculations and process information. Now, in quantum computers, we use “qubits” instead of bits. Qubits are special because they can be both 0 and 1 at the same time. It’s like having a coin that’s spinning and showing both heads and tails until you catch it, and then it picks one side.


When we say the “wave function collapses,” it’s just a fancy way of saying the qubit decides to be either 0 or 1 when we check it. We make these qubits using different things like light particles (photons), tiny particles that make up stuff (atoms), or even small electrical circuits (Josephson junctions). These are like the building blocks for our special qubits.


These quantum systems (particles or circuits) do some interesting things. They can be in different states at the same time (superposition), connect in a special way (entanglement), and even go through barriers they shouldn’t (tunneling).


What’s cool is that quantum computers, with their qubits and special behaviors, use certain algorithms to solve some problems faster than regular computers. It’s like having a new tool that might help us solve tough puzzles more efficiently in the future.


Operators in Quantum Computing

In traditional computing, we perform operations using basic logic gates like AND, NOT, and OR. These gates work with 0s and 1s, and their rules are based on a simple mathematical system, ℤ₂, which essentially deals with counting modulo 2.
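To make that concrete, here is a tiny sketch showing that the classical XOR and AND gates are just addition and multiplication modulo 2:

for a in (0, 1):
    for b in (0, 1):
        print(a, b, (a + b) % 2, (a * b) % 2)  # a, b, a XOR b, a AND b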


Now, imagine a quantum computer — it also has gates, but these are like supercharged versions. Instead of dealing with simple bits, quantum gates work with quantum bits or qubits. The math behind these quantum gates involves complex numbers and Matrix operations.

Let’s take the quantum NOT gate, the Pauli-X gate, as an example. Apply it to a qubit initially in the state ∣0⟩ and the operator flips it to ∣1⟩; apply it again and it goes back to ∣0⟩. It’s a bit like flipping a light switch.



There’s also the Hadamard gate (H) that does something really cool. Applying it to a qubit initially in the state ∣0⟩ puts it in a special mix of the 0 and 1 states at the same time. Mathematically, H operates on ∣0⟩ and converts it into the equal superposition of the basis states:

H∣0⟩ = (∣0⟩ + ∣1⟩)/√2

It’s like having a coin spinning in the air, showing both heads and tails until it lands.



Now, let’s talk about the Controlled-NOT (CNOT) gate. This one works on two qubits. If the first qubit is ∣1⟩, it flips the second qubit from ∣0⟩ to ∣1⟩ or vice versa. It’s like a quantum switch that depends on the state of the first qubit.


In the quantum world, things get more interesting. If the control qubit is first placed in a superposition, the CNOT gate correlates the two qubits, creating what we call entanglement. This entanglement is like a special connection between the qubits, making them behave in a coordinated manner.



So, in a nutshell, while regular computers use basic rules with 0s and 1s, quantum computers have these fascinating gates that play with probabilities, mix states, and create connections between qubits, opening up a world of possibilities for solving complex problems more efficiently.


In our project, we place special emphasis on a category of gates known as parameterized gates. These gates exhibit behavior that is contingent on a specific input parameter, denoted by the symbol θ. Notably, we focus on the rotation gates R_x(θ), R_y(θ), and R_z(θ), each characterized by a unitary matrix:

R_x(θ) = [[cos(θ/2), −i·sin(θ/2)], [−i·sin(θ/2), cos(θ/2)]]
R_y(θ) = [[cos(θ/2), −sin(θ/2)], [sin(θ/2), cos(θ/2)]]
R_z(θ) = [[e^(−iθ/2), 0], [0, e^(iθ/2)]]

Let’s delve a bit deeper into these rotation gates. Consider R_x(θ): envision it as a quantum gate resembling a rotating door that rotates a qubit around the x-axis by a specific angle θ. The R_y(θ) and R_z(θ) gates function similarly, introducing rotations around the y- and z-axes.

The significance of these gates lies in their parameterized nature. By adjusting the input parameter θ, we essentially introduce a customizable element into our quantum algorithms. These gates serve as the foundational components for constructing the quantum neural network at the heart of our project.


In essence, θ acts as a tuning parameter, akin to a knob, enabling us to finely adjust and tailor the behavior of our quantum algorithms within the framework of the quantum neural network. This flexibility becomes pivotal in optimizing and customizing the performance of our quantum algorithms for specific tasks.
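As a minimal sketch of this “knob” in code, the following declares a rotation gate with a free parameter θ and binds it to a concrete angle afterwards, just as a training loop would (the angle 1.57 is an arbitrary illustrative value):

from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

theta = Parameter('θ')   # the tunable "knob"
qc = QuantumCircuit(1)
qc.rx(theta, 0)          # rotate the qubit around the x-axis by θ

# bind the parameter to a concrete angle, as an optimizer would during training
bound = qc.assign_parameters({theta: 1.57})
print(bound.draw())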


Quantum Circuits

Quantum algorithms can be thought of as a series of operations performed on a quantum state, represented by expressions like:

∣ψ_out⟩ = U_n ⋯ U_2 U_1 ∣ψ_in⟩

These algorithms are translated into quantum circuits. In the example depicted here, the algorithm starts from the initial state |q_0 q_1⟩ = |00⟩ and, after a Hadamard gate on q_0 followed by a CNOT, concludes with a measurement resulting in either |00⟩ or |11⟩ with an equal probability of 0.5, recorded into classical bits (line c).
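Assuming the circuit is the usual Hadamard-plus-CNOT (Bell state) construction just described, here is a minimal Qiskit sketch of it; sampling it on a simulator should return roughly half ‘00’ and half ‘11’:

from qiskit import QuantumCircuit

bell = QuantumCircuit(2, 2)
bell.h(0)                      # put q_0 into superposition
bell.cx(0, 1)                  # entangle q_0 and q_1
bell.measure([0, 1], [0, 1])   # record the outcome into classical bits (line c)
print(bell.draw())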



In a quantum circuit, each horizontal line corresponds to a single qubit, and gates are applied sequentially until measurement. It’s important to note that loops are not allowed in a quantum program. A specific type of quantum circuit is the Variational Quantum Circuit (VQC). Notably, VQC incorporates parameterized gates like the aforementioned R_x(θ), R_y(θ), R_z(θ).


In simpler terms, quantum algorithms are like step-by-step instructions for a quantum computer, and quantum circuits visually represent these steps. The Variational Quantum Circuit introduces a special kind of flexibility with parameterized gates, allowing for customization based on specific values, denoted by θ.


Quantum Machine Learning

The primary objective of QML is to devise and deploy methods capable of running on quantum computers to address conventional supervised, unsupervised, and reinforcement learning tasks encountered in classical Machine Learning.


What makes QML distinct is its utilization of quantum operations, leveraging unique features like superposition, tunneling, entanglement, and quantum parallelism inherent to Quantum Computing (QC). In our study, we specifically concentrate on Quantum Neural Network (QNN) design. A QNN serves as the quantum counterpart of a classical neural network.


Breaking it down, each layer in a QNN is a Variational Quantum Circuit (VQC) comprising parameterized gates. These parameters act as the quantum equivalents of the weights in a classical neural network. Additionally, the QNN incorporates a mechanism to exchange information among existing qubits, resembling the connections between neurons in different layers of a classical network. Typically, this information exchange is achieved through entanglements, employing operators such as the CNOT gate.



Creating a Quantum Machine Learning (QML) model typically involves several steps, as illustrated in the figure above. First, we load and preprocess the dataset on a classical CPU. Next, we use a quantum embedding technique to encode this classical data into quantum states on a Quantum Processing Unit (QPU) or quantum hardware. Once the classical data is represented in quantum states, the core model, implemented in the ansatz, is executed, and the results are measured into classical bits. Finally, if needed, we post-process these results on the CPU to obtain the expected model output. In our study, we follow this overall process to investigate the application of a Quantum Neural Network for time series forecasting.


Quantum Neural Network

A Quantum Neural Network (QNN) typically consists of three main layers:


1. Input Layer: This layer transforms classical input data into a quantum state. It uses a parameterized variational circuit with rotation and controlled-rotation gates to prepare the desired quantum state for a given input. This step, known as quantum embedding, employs techniques like basis encoding, amplitude encoding, Hamiltonian encoding, or tensor product encoding.


2. Ansatz Layer: The heart of the QNN, this layer contains a Variational Quantum Circuit repeated L times to emulate L network layers. It’s responsible for processing and manipulating quantum information.


3. Output Layer: This layer performs measurements on qubits, providing the final expected outcome.


For the input layer, we use a tensor product encoding technique. It involves a simple X-rotation gate for each qubit, where the gate parameter is set by scaling the classical data to the range [−π, π]. Although it’s a quick and straightforward encoding method (O(1) operations), it has limitations: the number of qubits needed scales linearly with the number of input features. To address this, we introduce learnable parameters for scaling and bias in the input data, enhancing the flexibility of the quantum embedding. In Figure 3, you can see an example of the input layer for a network with 3 qubits, where the classical data features x_i, input scale parameters s_i, and bias parameters b_i (i = 1, 2, 3) come into play.
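Here’s a minimal sketch of this tensor product (angle) encoding, assuming we simply min-max scale each feature into [−π, π]; the learnable scale and bias refinement described above is omitted for brevity, and the price values are illustrative:

import numpy as np
from qiskit import QuantumCircuit

def angle_encode(features, lo, hi):
    # scale each feature from [lo, hi] into [-pi, pi] ...
    scaled = (np.asarray(features) - lo) / (hi - lo) * 2 * np.pi - np.pi
    # ... then apply one X-rotation per qubit (one qubit per feature)
    qc = QuantumCircuit(len(scaled))
    for i, angle in enumerate(scaled):
        qc.rx(angle, i)
    return qc

print(angle_encode([120.5, 121.0, 119.8], lo=100, hi=150).draw())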



Regarding the ansatz, it’s interesting to note that, unlike classical neural networks, there isn’t a fixed set of quantum layer structures commonly found in the literature (such as fully connected or recurrent layers). The realm of possible gates for quantum information transfer between qubits is extensive, and the optimal organization of these gates for effective data transfer is an area that hasn’t been thoroughly explored yet.


In our project, we adopt the Real Amplitudes ansatz, a choice inspired by its success in various domains like policy estimation for quantum reinforcement learning and classification. This ansatz initiates with full rotation X/Y/Z parameterized gates, akin to the quantum version of connection weights. It is then followed by a series of CNOT gates arranged in a ring structure to facilitate qubit information transfer. Figure 4 provides a visual representation of how this ansatz is implemented, serving as the quantum equivalent of a network layer for a 3-qubit network.


To break it down, a quantum network layer in our work involves a set of parameters totaling 3 times the number of qubits (3*n), where ’n’ represents the number of qubits in the quantum network.



Now, let’s talk about the output layer, which is a critical part of our quantum model. In quantum computing, when we want to extract information from a quantum state, we perform measurements with respect to a chosen observable. A commonly used observable is the σ_z operator over the computational basis; think of it as a way of reading information out of our quantum state.



The network output is determined by calculating the expectation of this observable over our quantum state. This is expressed as ⟨ψ|σ_z|ψ⟩, where ⟨ψ| denotes the conjugate transpose of |ψ⟩. The result falls within the range [−1, 1].
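As a quick numeric sanity check of this expectation value, here is a small NumPy sketch for a single qubit, representing ∣ψ⟩ = α∣0⟩ + β∣1⟩ as the vector [α, β]:

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]])

psi = np.array([1, 1]) / np.sqrt(2)            # equal superposition of |0> and |1>
print(np.real(psi.conj() @ sigma_z @ psi))     # 0.0, the midpoint of [-1, 1]

psi0 = np.array([1, 0])                        # the basis state |0>
print(np.real(psi0.conj() @ sigma_z @ psi0))   # 1.0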


No need to stress over those complex mathematical equations — our trusty library, Qiskit, has got it covered! Qiskit handles all the intricate quantum calculations seamlessly, making the quantum computing process much more accessible. So, you can focus on exploring the quantum world without getting bogged down by the nitty-gritty math.


Now, to make our network output less sensitive to biases and scales inherent in the dataset, we introduce a final scale parameter and bias to be learned. This step adds a layer of adaptability to our model, allowing it to fine-tune and adjust the output based on the specific characteristics of our data. The entire model architecture is visually represented in the figure below.



The training of our proposed Quantum Neural Network (QNN) happens on a regular CPU using classical algorithms like the Adam optimizer. The CPU handles the gradient computation through standard backpropagation rules, while on the Quantum Processing Unit (QPU), we calculate the gradient using the parameter-shift rule. It’s a bit like having a dual system where the CPU manages the main training, and the QPU comes into play for specific quantum computations.
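For intuition, here is a small sketch of the parameter-shift rule for a single parameter. For gates generated by a Pauli operator, the gradient of an expectation value f(θ) is exactly (f(θ + π/2) − f(θ − π/2))/2; we check it against f(θ) = cos(θ), which is the σ_z expectation after a single R_x(θ) rotation on ∣0⟩:

import numpy as np

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # exact gradient for Pauli-generated gates, from two extra circuit evaluations
    return (f(theta + shift) - f(theta - shift)) / 2

f = np.cos       # <psi(θ)|σ_z|psi(θ)> = cos(θ) for RX(θ)|0>
theta = 0.3
print(parameter_shift_grad(f, theta))  # ≈ -0.2955
print(-np.sin(theta))                  # analytic derivative, same value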


Visualize the training process pipeline in Figure 6, where θ¹ represents the scale/bias parameters in the input layer, θ² corresponds to the parameters of the layers containing the ansatz, and θ³ are the scale/bias parameters for the network outputs. This orchestration ensures a cohesive training approach, leveraging both classical and quantum computing resources.



As a Quantum Neural Network (QNN) operates as a feedforward model, our initial step involves defining a time horizon, denoted as T. To adapt the time series data for the QNN, we transform it into a tabular format. Here, the target is the time series value at time t, denoted as x(t), while the inputs encompass the values x(t-1), x(t-2), …, x(t-T). This restructuring facilitates the model’s understanding of the temporal relationships in the data, allowing it to make predictions based on past values.
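A small sketch of this restructuring for T = 2 (the horizon we use later), with made-up prices: inputs are the T previous values and the target is the current value.

series = [10.0, 11.0, 12.5, 13.0, 12.0]
T = 2
X = [series[t - T:t] for t in range(T, len(series))]
y = [series[t] for t in range(T, len(series))]
print(X)  # [[10.0, 11.0], [11.0, 12.5], [12.5, 13.0]]
print(y)  # [12.5, 13.0, 12.0]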


Forming QNN with Qiskit


Extracting Data & Data Preprocessing

First, we fetch the data using the historical data API endpoint provided by Financial Modeling Prep, exactly as before:



api_url = "https://financialmodelingprep.com/api/v3/historical-price-full/AAPL?apikey=YOUR API KEY"

# Make a GET request to the API
response = requests.get(api_url)

# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Parse the response JSON
    data = response.json()
else:
    # stop here so that `data` is never referenced while undefined
    raise RuntimeError(f"Error: Unable to fetch data. Status code: {response.status_code}")

# convert into a dataframe
df = pd.json_normalize(data, 'historical', ['symbol'])
df.tail()

The output is a Pandas dataframe which looks something like this (before that, make sure to replace YOUR API KEY with your secret API key):



From this wealth of data, we are going to use the open price as our temporal variable. We will work with 500 data points, each representing a daily open price, and our window size for prediction will be 2 (each input holds the two previous opens).



final_data = df[['open', 'date']][0:500]  # forming filtered dataframe
final_data = final_data.iloc[::-1].reset_index(drop=True)  # FMP returns newest-first; reverse into chronological order
input_sequences = []
labels = []

# Creating input and output data for time series forecasting
for i in range(len(final_data['open'])):
    if i > 1:
        labels.append(final_data['open'].iloc[i])
        input_sequences.append(final_data['open'].iloc[i-2:i].tolist())  # the two previous opens (the window excludes the target itself)

# creating train test split
x_train = np.array(input_sequences[0:400])
x_test = np.array(input_sequences[400:])
y_train = np.array(labels[0:400])
y_test = np.array(labels[400:])

Now, let’s plot the data we acquired.



import matplotlib.pyplot as plt
plt.style.use('ggplot')

# Convert the 'date' column to datetime format
df['date'] = pd.to_datetime(df['date'])

# Plotting the time series data
plt.figure(figsize=(10, 6))
plt.plot(df['date'][0:500], df['open'][0:500], marker='o', linestyle='-')

# Adding labels and title
plt.xlabel('date')
plt.ylabel('open')
plt.title('Time Series Data')

# Display the plot
plt.grid(True)
plt.show()

Following is the output:



Quantum Neural Network


Pauli Feature Map:



num_features = 2
feature_map = PauliFeatureMap(feature_dimension=num_features, reps=2)
optimizer = ADAM(maxiter=100)

In our approach, we employ the Pauli Feature Map to encode our data into qubits, specifically leveraging 2 features.
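To see how the encoding circuit is structured, you can draw the decomposed feature map:

print(feature_map.decompose().draw())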



Furthermore, for optimizing our model, we utilize the ADAM optimizer. This choice helps fine-tune the parameters of the quantum neural network, enhancing its overall performance.


Quantum Circuit:



def ans(n, depth):
    qc = QuantumCircuit(n)
    for j in range(depth):
        # full X/Y/Z rotations: three independent parameters per qubit per layer
        for i in range(n):
            qc.rx(Parameter(f'theta_x_{j}_{i}'), i)
            qc.ry(Parameter(f'theta_y_{j}_{i}'), i)
            qc.rz(Parameter(f'theta_z_{j}_{i}'), i)
        # ring of CNOT gates: each qubit controls its neighbour, the last wraps to the first
        for i in range(n):
            qc.cx(i, (i + 1) % n)
    return qc

This function initializes a quantum circuit with as many qubits as features. For each of the `depth` layers, the first inner loop appends parameterized X, Y, and Z rotation gates to every qubit, each gate with its own independent rotational angle, yielding the 3*n parameters per layer described earlier. The second inner loop then entangles the qubits with CNOT gates arranged in a ring: each qubit acts as the control for a NOT on the next qubit, and the last qubit wraps around to target the first.


This process constructs the ansatz for our quantum circuit, essentially creating the quantum neural network structure we defined earlier. The diagram for this circuit has been presented previously for reference.
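As a quick sanity check of the parameter count discussed earlier (three rotation angles per qubit per layer), you can count the free parameters of the ansatz:

print(len(ans(3, 1).parameters))  # 3 qubits, 1 layer -> 9 parameters
print(len(ans(2, 5).parameters))  # the 2-qubit, depth-5 configuration used below -> 30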


Initializing the Ansatz circuit:



ansatz = ans(num_features, 5)  # ansatz(num_qubits=num_features, depth=5)

Now, we proceed to create a variational quantum model that functions as a neural network for regression. In this model, we incorporate both the ansatz, which defines the structure of our quantum neural network, and the encoding for our features. For this purpose, we use the VQR (Variational Quantum Regressor) class imported earlier from qiskit_machine_learning.algorithms. The VQR encapsulates the quantum processing aspects of our neural network, and its integration with Qiskit simplifies the implementation of quantum machine learning algorithms.


VQR:



vqr = VQR(
    feature_map=feature_map,
    ansatz=ansatz,
    optimizer=optimizer,
)

vqr.fit(x_train, y_train)
vqr_mse = mean_squared_error(y_test, vqr.predict(x_test))

# Calculate root mean squared error
vqr_rmse = np.sqrt(vqr_mse)

In the final step, we fit the Variational Quantum Regressor (VQR) to our features. This involves training the quantum neural network on our dataset. Once fitted, we assess the performance by calculating the mean squared error and the root mean squared error. This evaluation helps us gauge the accuracy and effectiveness of our variational quantum model in handling the given features and predicting outcomes.


Classical neural network

Now, let’s construct a straightforward classical neural network to serve as a benchmark for comparison with the quantum neural network. Our chosen architecture will be a simple Multilayer Perceptron (MLP) featuring a single hidden layer equipped with 64 nodes. This classical neural network will provide us with a reference point against which we can evaluate the performance of the quantum neural network.


Classical ANN:



model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(2,)))  # 2 lagged opens per sample, matching the QNN inputs
model.add(Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error')

model.fit(x_train, y_train, epochs=20, batch_size=32, validation_data=(x_test, y_test))

loss = model.evaluate(x_test, y_test)
prediction = model.predict(x_test)

ann_mse = mean_squared_error(y_test, prediction.flatten())
ann_rmse = np.sqrt(ann_mse)

Results and Comparison

The following are the results:


  1. QNN: < 3.5 RMSE

  2. ANN: > 3.6 RMSE


In this evaluation, the Quantum Neural Network (QNN) shows promise by outperforming the Artificial Neural Network (ANN). Nevertheless, it’s crucial to acknowledge that the Root Mean Squared Error (RMSE) values obtained may still fall short of a satisfactory level. The primary objective of this experiment was to spotlight the potential of quantum computing: to show that it can generate competitive results and support models applicable for commercial use.


This study anticipates that as quantum computers continue to advance and become robust enough to train these models effectively, the technology will progressively become more practical for real-world applications.


Building upon these findings, Quantum Neural Networks (QNNs) hold promise for further development and practical implementation. While classical benchmarks in time series forecasting may still outshine QNNs, there is significant potential for improvement. Addressing the current limitations is foreseeable, especially with ongoing advancements in quantum computing, and robust quantum hardware could unlock even better results in time series forecasting.


NOTE: I’ve presented a basic demonstration of both a Quantum Neural Network (QNN) and a classical neural network to illustrate their construction and highlight differences in outcomes. Keep in mind that QNN results can vary between runs, so feel free to adapt the provided examples to suit your specific use case. Adjust the code and parameters accordingly to optimize performance and address any challenges that arise when applying Quantum Neural Networks to your particular application.


Conclusion

In summary, this project explored quantum computing and the creation of neural networks, highlighting their potential for future growth. As technology advances, there is an opportunity to develop more sophisticated quantum machine learning algorithms with quantum computers. These systems can significantly reduce training times, benefitting from the efficiency of qubits and enabling the use of a greater number of qubits in circuit formation. This progress opens doors to enhanced computational power and problem-solving capabilities, indicating a promising path for the future of quantum computing in machine learning applications.


With that being said, you’ve reached the end of the article. Hope you learned something new and useful today. Also, let me know in the comments about your take on Quantum Machine Learning and its impact on the stock market sector. Thank you for your time.
