Supervised Learning

Introduction

This page describes the methods and technical details of the supervised machine learning section. Supervised learning is the technique of training algorithms on labelled datasets to classify or predict data. We employ principal component analysis (PCA) for dimensionality reduction and feature selection, followed by regression analysis and binary and multivariate classification analysis. The objective is to assess the correlation between the cryptocurrency market and the FX market and to attempt to predict FX movements, thereby providing analytical support and a reference point for market research and decision-making.

Data Preprocessing

  • Standardization: To ensure the integrity of the analysis and improve model performance, all feature values were standardized, eliminating distortions caused by differences in feature scales. The data were transformed to have a mean of 0 and a standard deviation of 1 using scikit-learn's StandardScaler.

  • Feature Selection: To identify the most appropriate features for analysis, we selected and extracted different features from the datasets. Weekly or daily returns of individual cryptocurrencies served as input features, and the average volatility of the FX market served as the prediction target. In addition, principal component analysis, an unsupervised learning technique, was used to extract the principal components; the optimal number of components was selected automatically by iterating over candidate counts during model training and evaluation.

  • Encoding Categorical Variables: For the binary and multivariate classification analyses, we encoded the fluctuations of the foreign exchange market as discrete variables. For binary classification, a value of 1 indicates an upward movement and 0 a downward movement; the EDA section confirmed the absence of flat (zero-change) periods, so no third case arises. For multivariate classification, the volatility range was split by quartiles: the bottom 25% was labelled a larger downward movement (0), the middle 25–75% a smoother upward or downward movement (1), and the top 75–100% a larger upward movement (2). A minimal sketch of both encodings follows this list.
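
As a concrete illustration, here is a minimal sketch of both encodings, assuming a pandas Series fx_change holding the change in average FX volatility (the series name and values are ours, for illustration only):

import pandas as pd

# Hypothetical changes in average FX volatility
fx_change = pd.Series([-0.8, -0.1, 0.05, 0.3, 1.2, -0.4])

# Binary encoding: 1 for an upward movement, 0 for a downward movement
# (the EDA confirmed no exactly-flat periods, so the boundary case does not occur)
binary_label = (fx_change > 0).astype(int)

# Multiclass encoding by quartiles:
# 0 = bottom 25% (larger downward move), 1 = middle 50%, 2 = top 25% (larger upward move)
multi_label = pd.qcut(fx_change, q=[0, 0.25, 0.75, 1.0], labels=[0, 1, 2])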

Model Selection

We experiment with different models for each of the three supervised learning tasks: regression, binary classification, and multivariate classification.

Regression Analysis

  • Linear regression: establishes a linear relationship between the dependent and independent variables and is particularly suited to initial modelling of linear data. Although the EDA section found no statistically significant linear relationship for the market as a whole, the approach may still have merit for modelling individual FX series.

  • Ridge regression: addresses multicollinearity by adding an L2 regularization term to ordinary least squares (OLS) regression, improving the model's generalization capacity and resilience. We consider it a suitable approach for feature screening in high-dimensional datasets like ours.

  • Lasso regression: analogous to ridge regression, but introduces L1 regularization, which acts as a kind of automated feature selection and is likewise suited to our high-dimensional data (see the sketch after this list).

  • Random Forest Regressor: predicts the target by constructing an ensemble of decision trees and averaging their predictions, which effectively handles non-linear and high-dimensional data, matching the characteristics of our dataset.

  • XGBoost Regressor: an ensemble learning algorithm that builds decision trees sequentially, each tree correcting the residual errors of its predecessors, so the ensemble gradually approximates the true value of the target variable. It uses gradient boosting to minimize the error function and adds a regularization term to control model complexity, making it well-suited to datasets with intricate nonlinear relationships. We therefore included this model.
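
To make the contrast between the two penalties concrete, the following minimal sketch on synthetic data (all names and values are illustrative, not from the project) shows ridge shrinking every coefficient while lasso drives irrelevant ones exactly to zero, which is what makes it behave like automated feature selection:

import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                    # 10 candidate features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)   # only 2 of them matter

ridge = Ridge(alpha=1.0).fit(X, y)   # L2: all coefficients shrunk, none exactly zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1: irrelevant coefficients set exactly to zero

print(np.round(ridge.coef_, 3))
print(np.round(lasso.coef_, 3))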

Binary and Multivariate Classification Analysis

  • Logistic regression: the most commonly used algorithm for predicting probabilities on binary categorical data; it extends naturally to multivariate classification and yields easily interpretable probabilities, which led to our adoption here.

  • Support Vector Machine (SVM): a machine learning algorithm that handles complex decision boundaries through kernel tricks. It copes well with high-dimensional problems, is appropriate for small samples, and captures non-linear feature interactions, making it suitable for this project's dataset. In the multivariate setting it can likewise address complex classification problems with a nonlinear kernel (a short sketch follows this list).

  • Random Forest Classifier: See above.

  • XGBoost Classifier: See above.
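
As a hedged sketch of how these classifiers would be applied (the synthetic features and labels below stand in for the PCA components and the 0/1/2 volatility classes; none of it is the project's actual pipeline):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-ins: 5 "principal components" and quartile-style labels 0/1/2
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 5))
y_multi = np.digitize(X[:, 0] + 0.3 * rng.normal(size=300), bins=[-0.7, 0.7])
y_binary = (y_multi > 0).astype(int)

X_tr, X_te, yb_tr, yb_te, ym_tr, ym_te = train_test_split(
    X, y_binary, y_multi, test_size=0.2, random_state=42)

# Logistic regression: interpretable class probabilities for the binary task
log_reg = LogisticRegression(max_iter=1000).fit(X_tr, yb_tr)
proba_up = log_reg.predict_proba(X_te)[:, 1]   # P(upward movement)

# SVC with an RBF kernel handles nonlinear boundaries; the same estimator
# covers the 3-class task through its built-in one-vs-one scheme
svm = SVC(kernel='rbf', C=1.0).fit(X_tr, ym_tr)
multi_pred = svm.predict(X_te)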

Training and Testing Strategy

  • Split Method: scikit-learn's train_test_split function is used to split the data, ensuring the split is randomized and does not compromise the validity of the model evaluation.

  • Data Proportion: The data were divided with 80% allocated to the training set and the remaining 20% designated as the test set.

Model Evaluation Metrics

The metrics selected for the various analyses were based on the specific modeling techniques and objectives employed:

Regression analysis

  • Mean squared error (MSE): This metric gauges the average squared difference between the predicted and actual values. A lower MSE indicates superior model performance.

  • R² coefficient of determination: This metric indicates the proportion of variance explained by the model. A value approaching 1 indicates superior model performance (a worked example for both metrics follows this list).
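
For reference, both metrics can be computed by hand or via scikit-learn; a minimal worked example with made-up numbers:

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([0.010, -0.005, 0.020, 0.000])
y_pred = np.array([0.008, -0.002, 0.015, 0.004])

# MSE: average squared difference between predicted and actual values
mse = np.mean((y_true - y_pred) ** 2)

# R²: 1 - (residual sum of squares / total sum of squares)
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

assert np.isclose(mse, mean_squared_error(y_true, y_pred))
assert np.isclose(r2, r2_score(y_true, y_pred))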

Binary analysis

  • Accuracy: The proportion of samples that are correctly classified, i.e. (TP + TN) divided by the total number of samples.

  • Precision: The proportion of samples predicted as positive that are actually positive, i.e. TP / (TP + FP).

  • Recall: The proportion of actual positives correctly identified, i.e. TP / (TP + FN).

  • F1 score: The harmonic mean of precision and recall, with a value of 1 indicating optimal performance.

  • ROC-AUC (Receiver Operating Characteristic Area Under the Curve): A metric for evaluating the model’s capacity to differentiate between positive and negative samples. A value approaching 1 indicates optimal model performance, whereas a value below 0.5 suggests a predictive capacity worse than a random classifier (a sketch of these metrics follows this list).
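
A compact sketch of computing all five metrics on hypothetical binary predictions (the labels and scores below are invented for illustration):

import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # actual classes
y_pred  = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # hard predictions
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3])  # predicted P(class 1)

print(accuracy_score(y_true, y_pred))    # correctly classified / total
print(precision_score(y_true, y_pred))   # TP / (TP + FP)
print(recall_score(y_true, y_pred))      # TP / (TP + FN)
print(f1_score(y_true, y_pred))          # harmonic mean of precision and recall
print(roc_auc_score(y_true, y_score))    # ranking quality; uses scores, not hard labels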

Multivariate analysis

  • Confusion matrix: a two-dimensional matrix that summarizes prediction results across classes.

  • Macro and Micro Average: Measures of model efficacy under class imbalance. The macro-average assigns equal weight to each class, whereas the micro-average assigns equal weight to each instance (see the sketch below).
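
The distinction is easiest to see in code; a sketch with hypothetical labels matching our 0/1/2 encoding:

from sklearn.metrics import confusion_matrix, precision_score

y_true = [0, 0, 1, 1, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 1, 2, 2, 2]

print(confusion_matrix(y_true, y_pred))  # rows = true class, columns = predicted class

# Macro: average of per-class precision (each class weighted equally)
print(precision_score(y_true, y_pred, average='macro'))
# Micro: precision pooled over all instances (each instance weighted equally)
print(precision_score(y_true, y_pred, average='micro'))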

Regression Analysis

We initially attempted to fit models to the overall weekly volatility of the foreign exchange market. After experimenting with various combinations of principal component counts, models, and model parameters, we found the best-performing model to be the XGBoost Regressor; however, its R-squared value was only 0.259, indicating suboptimal performance. We concluded that individual currencies within the foreign exchange market may behave quite differently, making an accurate forecast of overall weekly volatility challenging. We therefore attempted to forecast the volatility of individual FX series, seeking the best forecasting model and assessment for each. The volatility forecasts for the Japanese Yen (JPY=X), Malaysian Ringgit (MYR=X), South African Rand (ZAR=X), Indonesian Rupiah (IDR=X), and Mexican Peso (MXN=X) performed best, with R-squared values exceeding 0.3, and the Japanese Yen reaching 0.52. The models for these five currencies were therefore selected for evaluation, and the corresponding error distributions, residuals, and feature importance plots were generated.

# Import necessary libraries
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
from itertools import product

# Load weekly data
crypto_weekly = pd.read_csv('../../data/processed-data/weekly_crypto_returns.csv')
fx_weekly = pd.read_csv('../../data/processed-data/weekly_fx_rates.csv')

# Combine datasets
crypto_weekly.set_index('Date', inplace=True)
fx_weekly.set_index('Date', inplace=True)
combined_weekly = pd.concat([crypto_weekly, fx_weekly], axis=1).dropna()

# Standardize the crypto data for PCA
scaler = StandardScaler()
crypto_scaled = scaler.fit_transform(crypto_weekly)

# Define models and parameters
model_params = {
    "Ridge Regression": (Ridge, {"alpha": [0.01, 0.1, 1, 10]}),
    "Lasso Regression": (Lasso, {"alpha": [0.01, 0.1, 1, 10]}),
    "Random Forest Regressor": (RandomForestRegressor, {
        "n_estimators": [100, 200, 500], 
        "max_depth": [None, 5, 10, 20]
    }),
    "XGBoost Regressor": (XGBRegressor, {
        "n_estimators": [100, 200, 500], 
        "learning_rate": [0.01, 0.1, 0.2], 
        "max_depth": [3, 5, 7, 10]
    })
}

# Track best results
best_overall_result = {"Model": None, "PCA": None, "Params": None, "R²": -np.inf}

# PCA and model tuning
for n_components in range(2, 16):
    # Apply PCA
    pca = PCA(n_components=n_components)
    crypto_pca = pca.fit_transform(crypto_scaled)

    pca_df = pd.DataFrame(crypto_pca, columns=[f'PC{i+1}' for i in range(n_components)])
    pca_df['Target'] = fx_weekly.mean(axis=1).values

    # Split data
    X = pca_df.drop(columns=['Target'])
    y = pca_df['Target']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Model training and evaluation
    for model_name, (model_class, param_grid) in model_params.items():
        param_combinations = list(product(*param_grid.values()))

        for params in param_combinations:
            model = model_class(**dict(zip(param_grid.keys(), params)))
            model.fit(X_train, y_train)
            y_pred = model.predict(X_test)
            mse = mean_squared_error(y_test, y_pred)
            r2 = r2_score(y_test, y_pred)

            if r2 > best_overall_result['R²']:
                best_overall_result.update({
                    "Model": model_name,
                    "PCA": n_components,
                    "Params": dict(zip(param_grid.keys(), params)),
                    "R²": r2
                })

            print(f"PCA: {n_components} | Model: {model_name} | Params: {dict(zip(param_grid.keys(), params))} | MSE: {mse:.4f} | R²: {r2:.4f}")

# Print best result
print(f"\nBest Model: {best_overall_result['Model']} | PCA Components: {best_overall_result['PCA']} | Params: {best_overall_result['Params']} | R²: {best_overall_result['R²']:.4f}")

# Visualization
best_model = model_params[best_overall_result['Model']][0](**best_overall_result['Params'])
best_pca = PCA(n_components=best_overall_result['PCA'])
crypto_pca_best = best_pca.fit_transform(crypto_scaled)

best_pca_df = pd.DataFrame(crypto_pca_best, columns=[f'PC{i+1}' for i in range(best_overall_result['PCA'])])
best_pca_df['Target'] = fx_weekly.mean(axis=1).values

X_best = best_pca_df.drop(columns=['Target'])
y_best = best_pca_df['Target']
X_train_best, X_test_best, y_train_best, y_test_best = train_test_split(X_best, y_best, test_size=0.2, random_state=42)
best_model.fit(X_train_best, y_train_best)
y_pred_best = best_model.predict(X_test_best)

plt.scatter(y_test_best, y_pred_best, alpha=0.6)
plt.title(f"Best Model: {best_overall_result['Model']} (PCA = {best_overall_result['PCA']})")
plt.xlabel("True Values")
plt.ylabel("Predictions")
plt.grid(True, linestyle='--', alpha=0.7)
plt.show()
PCA: 2 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.0640
PCA: 2 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.0640
...
PCA: 6 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: 0.2590
...
[Output truncated: the loop prints one line per combination of PCA component count, model, and parameter setting. The best configuration in the logged output, consistent with the R² of 0.259 reported above, is the XGBoost Regressor with 6 principal components, learning_rate 0.2, and max_depth 5.]
PCA: 9 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.0003
PCA: 9 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.0004
PCA: 9 | Model: Ridge Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.0014
PCA: 9 | Model: Ridge Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.0107
PCA: 9 | Model: Lasso Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.1439
PCA: 9 | Model: Lasso Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.1439
PCA: 9 | Model: Lasso Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.1439
PCA: 9 | Model: Lasso Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.1439
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': None} | MSE: 0.0001 | R²: -0.1474
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1170
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0481
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0934
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': None} | MSE: 0.0001 | R²: -0.0253
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0695
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1102
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0846
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': None} | MSE: 0.0001 | R²: -0.0813
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0735
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0220
PCA: 9 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0632
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1864
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.3820
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.3518
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.3805
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1061
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.3115
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.5124
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.3116
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1489
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.3623
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0002 | R²: -0.6279
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.4410
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1475
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.4005
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.4519
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.3907
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1059
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.3115
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.5124
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.3116
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1489
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.3623
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0002 | R²: -0.6279
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.4410
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1854
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.4184
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.4730
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.3657
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1059
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.3115
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.5124
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.3116
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1489
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.3623
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0002 | R²: -0.6279
PCA: 9 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.4410
PCA: 10 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: 0.0015
PCA: 10 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: 0.0015
PCA: 10 | Model: Ridge Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: 0.0022
PCA: 10 | Model: Ridge Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.0008
PCA: 10 | Model: Lasso Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.1439
PCA: 10 | Model: Lasso Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.1439
PCA: 10 | Model: Lasso Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.1439
PCA: 10 | Model: Lasso Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.1439
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': None} | MSE: 0.0001 | R²: -0.0219
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0385
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0275
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0538
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': None} | MSE: 0.0001 | R²: -0.0431
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0304
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0047
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0142
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': None} | MSE: 0.0001 | R²: -0.0428
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0467
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0191
PCA: 10 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0478
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0683
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0298
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1092
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0936
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0039
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0024
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1460
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0769
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0269
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0988
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1538
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0761
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0923
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0005
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1628
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0943
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0038
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0025
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1460
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0768
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0269
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0988
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1538
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0761
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0944
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0156
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1585
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0707
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0038
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0025
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1460
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0768
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0269
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0988
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1538
PCA: 10 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0761
PCA: 11 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: 0.0937
PCA: 11 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: 0.0931
PCA: 11 | Model: Ridge Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: 0.0873
PCA: 11 | Model: Ridge Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: 0.0465
PCA: 11 | Model: Lasso Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.1439
PCA: 11 | Model: Lasso Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.1439
PCA: 11 | Model: Lasso Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.1439
PCA: 11 | Model: Lasso Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.1439
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': None} | MSE: 0.0001 | R²: -0.0678
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0110
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 10} | MSE: 0.0001 | R²: 0.0164
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0325
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': None} | MSE: 0.0001 | R²: 0.0487
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0436
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 10} | MSE: 0.0001 | R²: 0.0107
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 20} | MSE: 0.0001 | R²: 0.0088
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': None} | MSE: 0.0001 | R²: -0.0179
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0445
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0144
PCA: 11 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 20} | MSE: 0.0001 | R²: 0.0029
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0428
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0433
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.0630
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0465
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0457
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0471
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1514
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0626
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0133
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0084
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1066
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1012
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0544
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0036
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.0903
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0311
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0457
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0471
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1514
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0626
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0133
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0084
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1066
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1012
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0713
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0219
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1218
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0500
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0457
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0471
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1514
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0626
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0133
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0084
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1066
PCA: 11 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1012
PCA: 12 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: 0.0936
PCA: 12 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: 0.0930
PCA: 12 | Model: Ridge Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: 0.0873
PCA: 12 | Model: Ridge Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: 0.0462
PCA: 12 | Model: Lasso Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.1439
PCA: 12 | Model: Lasso Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.1439
PCA: 12 | Model: Lasso Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.1439
PCA: 12 | Model: Lasso Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.1439
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': None} | MSE: 0.0001 | R²: 0.0318
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0375
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0077
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0241
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': None} | MSE: 0.0001 | R²: -0.0458
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0215
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0208
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0118
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': None} | MSE: 0.0001 | R²: -0.0321
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0356
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0005
PCA: 12 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0056
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0842
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0711
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1534
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1414
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0952
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0641
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2622
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2554
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0998
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1394
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2459
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1721
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0947
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0786
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1760
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1542
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0952
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0640
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2621
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2553
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0998
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1394
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2459
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1721
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1053
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0882
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2080
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2040
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0952
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0640
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2621
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2553
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0998
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1394
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2459
PCA: 12 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1721
PCA: 13 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: 0.0805
PCA: 13 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: 0.0834
PCA: 13 | Model: Ridge Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: 0.0848
PCA: 13 | Model: Ridge Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: 0.0458
PCA: 13 | Model: Lasso Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.1439
PCA: 13 | Model: Lasso Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.1439
PCA: 13 | Model: Lasso Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.1439
PCA: 13 | Model: Lasso Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.1439
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': None} | MSE: 0.0001 | R²: -0.0852
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0557
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 10} | MSE: 0.0001 | R²: 0.0370
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 20} | MSE: 0.0001 | R²: 0.0083
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': None} | MSE: 0.0001 | R²: -0.0239
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0859
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0502
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0406
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': None} | MSE: 0.0001 | R²: -0.0223
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0778
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0242
PCA: 13 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0421
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1185
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1308
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1321
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1313
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0659
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.2277
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2369
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2293
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1606
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1949
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1580
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1601
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1176
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1784
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2145
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1771
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0660
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.2276
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2369
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2293
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1606
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1949
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1580
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1601
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1624
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.2035
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2404
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2090
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0660
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.2276
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2369
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2293
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1606
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1949
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1580
PCA: 13 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1601
PCA: 14 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: 0.0901
PCA: 14 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: 0.0912
PCA: 14 | Model: Ridge Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: 0.0873
PCA: 14 | Model: Ridge Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: 0.0461
PCA: 14 | Model: Lasso Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.1439
PCA: 14 | Model: Lasso Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.1439
PCA: 14 | Model: Lasso Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.1439
PCA: 14 | Model: Lasso Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.1439
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': None} | MSE: 0.0001 | R²: -0.0912
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0232
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0406
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 20} | MSE: 0.0001 | R²: 0.0387
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': None} | MSE: 0.0001 | R²: 0.0104
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0478
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 10} | MSE: 0.0001 | R²: 0.0032
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0295
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': None} | MSE: 0.0001 | R²: -0.0070
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0390
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0178
PCA: 14 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0419
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1176
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1037
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1315
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1313
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0486
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1442
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2910
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2731
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1015
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1903
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2488
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1639
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1329
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1309
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2108
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2180
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0487
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1442
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2910
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2731
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1015
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1903
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2488
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1639
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1076
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1375
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2415
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2433
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0487
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1442
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2910
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.2731
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1015
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1903
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.2488
PCA: 14 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1639
PCA: 15 | Model: Ridge Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: 0.0936
PCA: 15 | Model: Ridge Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: 0.0936
PCA: 15 | Model: Ridge Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: 0.0878
PCA: 15 | Model: Ridge Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: 0.0461
PCA: 15 | Model: Lasso Regression | Params: {'alpha': 0.01} | MSE: 0.0001 | R²: -0.1439
PCA: 15 | Model: Lasso Regression | Params: {'alpha': 0.1} | MSE: 0.0001 | R²: -0.1439
PCA: 15 | Model: Lasso Regression | Params: {'alpha': 1} | MSE: 0.0001 | R²: -0.1439
PCA: 15 | Model: Lasso Regression | Params: {'alpha': 10} | MSE: 0.0001 | R²: -0.1439
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': None} | MSE: 0.0001 | R²: 0.0367
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0492
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 10} | MSE: 0.0001 | R²: 0.0074
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 100, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0010
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': None} | MSE: 0.0001 | R²: 0.0543
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 5} | MSE: 0.0001 | R²: 0.0483
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 10} | MSE: 0.0001 | R²: 0.0169
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 200, 'max_depth': 20} | MSE: 0.0001 | R²: -0.0067
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': None} | MSE: 0.0001 | R²: 0.0324
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 5} | MSE: 0.0001 | R²: -0.0434
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 10} | MSE: 0.0001 | R²: 0.0089
PCA: 15 | Model: Random Forest Regressor | Params: {'n_estimators': 500, 'max_depth': 20} | MSE: 0.0001 | R²: 0.0148
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.1103
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1168
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1353
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1090
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0432
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1448
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1417
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0932
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0376
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.2728
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.0930
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 100, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0749
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0373
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1431
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1471
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1366
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0430
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1448
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1416
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0931
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0376
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.2728
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.0930
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0749
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0151
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1384
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1597
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 10} | MSE: 0.0001 | R²: -0.1263
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 3} | MSE: 0.0001 | R²: 0.0430
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 5} | MSE: 0.0001 | R²: -0.1448
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 7} | MSE: 0.0001 | R²: -0.1416
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.1, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0931
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 3} | MSE: 0.0001 | R²: -0.0376
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 5} | MSE: 0.0001 | R²: -0.2728
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 7} | MSE: 0.0001 | R²: -0.0930
PCA: 15 | Model: XGBoost Regressor | Params: {'n_estimators': 500, 'learning_rate': 0.2, 'max_depth': 10} | MSE: 0.0001 | R²: -0.0749

Best Model: XGBoost Regressor | PCA Components: 6 | Params: {'n_estimators': 200, 'learning_rate': 0.2, 'max_depth': 5} | R²: 0.2590
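
As a quick sanity check on this headline result, the winning configuration can be refit in isolation. The following is a minimal sketch rather than the original search code: crypto_scaled is assumed from the earlier preprocessing cells, and y_avg_vol is a stand-in name for the average FX volatility target described in the data preprocessing section, not a variable defined above.

# Hedged sketch: refit the reported best configuration on the same 80/20 split.
# crypto_scaled is assumed from earlier cells; y_avg_vol is an assumed name for
# the average-volatility target, not a variable defined above.
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

pca_best = PCA(n_components=6)
X_best = pca_best.fit_transform(crypto_scaled)
X_train, X_test, y_train, y_test = train_test_split(
    X_best, y_avg_vol, test_size=0.2, random_state=42)
xgb_best = XGBRegressor(n_estimators=200, learning_rate=0.2, max_depth=5)
xgb_best.fit(X_train, y_train)
print(f"Refit R²: {r2_score(y_test, xgb_best.predict(X_test)):.4f}")  # expected ≈ 0.2590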

# Imports for this cell (some repeated from earlier sections so the cell is self-contained)
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import ParameterGrid, train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

# Load weekly data
crypto_weekly = pd.read_csv('../../data/processed-data/weekly_crypto_returns.csv')
fx_weekly = pd.read_csv('../../data/processed-data/weekly_fx_rates.csv')

# Combine datasets on their shared dates, dropping weeks missing from either source
crypto_weekly.set_index('Date', inplace=True)
fx_weekly.set_index('Date', inplace=True)
combined_weekly = pd.concat([crypto_weekly, fx_weekly], axis=1).dropna()

# Standardize the date-aligned crypto features (mean 0, standard deviation 1)
scaler = StandardScaler()
crypto_scaled = scaler.fit_transform(combined_weekly[crypto_weekly.columns])

# Define models and parameters
model_classes = {
    "Linear Regression": LinearRegression,
    "Ridge Regression": Ridge,
    "Lasso Regression": Lasso,
    "Random Forest Regressor": RandomForestRegressor,
    "XGBoost Regressor": XGBRegressor
}

param_grids = {
    "Linear Regression": {},
    "Ridge Regression": {"alpha": [0.01, 0.1, 1.0, 10.0]},
    "Lasso Regression": {"alpha": [0.001, 0.01, 0.1, 1.0]},
    "Random Forest Regressor": {
        "n_estimators": [100, 200, 300], 
        "max_depth": [3, 5, 7]
    },
    "XGBoost Regressor": {
        "n_estimators": [100, 200, 300], 
        "learning_rate": [0.01, 0.1, 0.2], 
        "max_depth": [3, 5, 7]
    }
}

# Define function to evaluate models
def evaluate_model(X, y, model):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)
    r2 = r2_score(y_test, y_pred)
    return mse, r2, model.get_params()

# Loop through each FX target
results = {}
for fx in fx_weekly.columns:
    print(f"\nEvaluating models for target: {fx}")
    best_n_components = 0
    best_r2_score = float('-inf')
    best_model_name = None
    best_model_params = None

    for n_components in range(2, 16):
        # Apply PCA
        pca = PCA(n_components=n_components)
        crypto_pca = pca.fit_transform(crypto_scaled)
        pca_df = pd.DataFrame(crypto_pca, columns=[f'PC{i+1}' for i in range(n_components)])
        pca_df['Target'] = combined_weekly[fx].values  # drawn from the same aligned rows as the features

        # Define X and y
        X = pca_df.drop(columns=['Target'])
        y = pca_df['Target']

        # Evaluate all models and parameter grids
        for model_name, params in param_grids.items():
            if params:
                for param_comb in ParameterGrid(params):
                    model = model_classes[model_name](**param_comb)
                    mse, r2, _ = evaluate_model(X, y, model)
                    if r2 > best_r2_score:
                        best_r2_score = r2
                        best_n_components = n_components
                        best_model_name = model_name
                        best_model_params = param_comb
            else:
                model = model_classes[model_name]()
                mse, r2, _ = evaluate_model(X, y, model)
                if r2 > best_r2_score:
                    best_r2_score = r2
                    best_n_components = n_components
                    best_model_name = model_name
                    best_model_params = {}

    # Store best results
    results[fx] = {
        'Best Model': best_model_name,
        'Best PCA Components': best_n_components,
        'Best R²': best_r2_score,
        'Best Parameters': best_model_params
    }

# Print the best results
for fx, result in results.items():
    print(f"{fx}: Best Model: {result['Best Model']} | PCA Components: {result['Best PCA Components']} | R²: {result['Best R²']:.4f} | Parameters: {result['Best Parameters']}")

Evaluating models for target: AUDUSD=X

Evaluating models for target: CNY=X

Evaluating models for target: EURUSD=X

Evaluating models for target: GBPUSD=X

Evaluating models for target: HKD=X

Evaluating models for target: IDR=X

Evaluating models for target: INR=X

Evaluating models for target: JPY=X

Evaluating models for target: MXN=X

Evaluating models for target: MYR=X

Evaluating models for target: NZDUSD=X

Evaluating models for target: PHP=X

Evaluating models for target: RUB=X

Evaluating models for target: SGD=X

Evaluating models for target: THB=X

Evaluating models for target: ZAR=X
AUDUSD=X: Best Model: Linear Regression | PCA Components: 9 | R²: 0.2612 | Parameters: {}
CNY=X: Best Model: XGBoost Regressor | PCA Components: 13 | R²: -0.0749 | Parameters: {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 100}
EURUSD=X: Best Model: Linear Regression | PCA Components: 15 | R²: 0.1253 | Parameters: {}
GBPUSD=X: Best Model: Linear Regression | PCA Components: 10 | R²: 0.2705 | Parameters: {}
HKD=X: Best Model: Random Forest Regressor | PCA Components: 14 | R²: -0.0172 | Parameters: {'max_depth': 5, 'n_estimators': 200}
IDR=X: Best Model: XGBoost Regressor | PCA Components: 9 | R²: 0.3671 | Parameters: {'learning_rate': 0.2, 'max_depth': 7, 'n_estimators': 200}
INR=X: Best Model: XGBoost Regressor | PCA Components: 2 | R²: 0.2541 | Parameters: {'learning_rate': 0.2, 'max_depth': 7, 'n_estimators': 100}
JPY=X: Best Model: XGBoost Regressor | PCA Components: 7 | R²: 0.5238 | Parameters: {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 200}
MXN=X: Best Model: Random Forest Regressor | PCA Components: 14 | R²: 0.3221 | Parameters: {'max_depth': 5, 'n_estimators': 100}
MYR=X: Best Model: XGBoost Regressor | PCA Components: 12 | R²: 0.4150 | Parameters: {'learning_rate': 0.2, 'max_depth': 3, 'n_estimators': 200}
NZDUSD=X: Best Model: Linear Regression | PCA Components: 10 | R²: 0.2109 | Parameters: {}
PHP=X: Best Model: XGBoost Regressor | PCA Components: 3 | R²: 0.1190 | Parameters: {'learning_rate': 0.2, 'max_depth': 5, 'n_estimators': 200}
RUB=X: Best Model: XGBoost Regressor | PCA Components: 3 | R²: -0.0386 | Parameters: {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 200}
SGD=X: Best Model: Ridge Regression | PCA Components: 11 | R²: 0.1322 | Parameters: {'alpha': 1.0}
THB=X: Best Model: XGBoost Regressor | PCA Components: 3 | R²: 0.0012 | Parameters: {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 100}
ZAR=X: Best Model: Linear Regression | PCA Components: 12 | R²: 0.3935 | Parameters: {}
# Visualization of R²
fx_symbols = list(results.keys())
r2_scores = [results[fx]['Best R²'] for fx in fx_symbols]
plt.figure(figsize=(12, 8))
plt.bar(fx_symbols, r2_scores, color='skyblue')
plt.xticks(rotation=45)
plt.title("Best R² Scores for Different FX Currencies")
plt.xlabel("FX Currency")
plt.ylabel("Best R² Score")
plt.grid(axis='y', linestyle='--')
plt.show()

# Best models for the five currencies with the highest R² above
best_models = {
    "JPY=X": ("XGBoost Regressor", 7, {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 200}),
    "IDR=X": ("XGBoost Regressor", 9, {'learning_rate': 0.2, 'max_depth': 7, 'n_estimators': 200}),
    "ZAR=X": ("Linear Regression", 12, {}),
    "MXN=X": ("Random Forest Regressor", 14, {'max_depth': 5, 'n_estimators': 100}),
    "MYR=X": ("XGBoost Regressor", 12, {'learning_rate': 0.2, 'max_depth': 3, 'n_estimators': 200}),
}

# Visualization Functions
def plot_error_distribution(y_true, y_pred, title):
    errors = y_true - y_pred
    plt.hist(errors, bins=20, color='skyblue', edgecolor='k', alpha=0.7)
    plt.title(f'Error Distribution - {title}')
    plt.xlabel('Prediction Error')
    plt.ylabel('Frequency')
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.show()

def plot_residuals(y_true, y_pred, title):
    residuals = y_true - y_pred
    plt.scatter(y_pred, residuals, alpha=0.6, edgecolor='k')
    plt.axhline(y=0, color='r', linestyle='--', lw=2)
    plt.title(f'Residuals Plot - {title}')
    plt.xlabel('Predicted Values')
    plt.ylabel('Residuals')
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.show()

def plot_feature_importance(model, feature_names, title):
    if hasattr(model, 'coef_'):
        importances = np.abs(model.coef_)
    elif hasattr(model, 'feature_importances_'):
        importances = model.feature_importances_
    else:
        print(f"Model {model} does not support feature importance.")
        return
    sorted_idx = np.argsort(importances)[::-1]
    plt.bar(range(len(importances)), importances[sorted_idx], color='skyblue', edgecolor='k')
    plt.xticks(range(len(importances)), np.array(feature_names)[sorted_idx], rotation=90)
    plt.title(f'Feature Importance - {title}')
    plt.xlabel('Features')
    plt.ylabel('Importance Score')
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.show()

# Generate Updated Visualizations
for fx, (model_name, pc_count, params) in best_models.items():
    pca = PCA(n_components=pc_count)
    crypto_pca = pca.fit_transform(crypto_scaled)
    pca_df = pd.DataFrame(crypto_pca, columns=[f'PC{i+1}' for i in range(pca.n_components_)])
    pca_df['Target'] = fx_weekly[fx].values

    # Split data
    X = pca_df.drop(columns=['Target'])
    y = pca_df['Target']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    best_model = model_classes[model_name](**params)
    best_model.fit(X_train, y_train)
    y_best_pred = best_model.predict(X_test)

    # Generate Visualizations
    plot_error_distribution(y_test, y_best_pred, f"{fx} - {model_name}")
    plot_residuals(y_test, y_best_pred, f"{fx} - {model_name}")
    plot_feature_importance(best_model, X.columns, f"{fx} - {model_name}")

Binary Classification Analysis

We attempted to fit models to the overall weekly rise and fall of the foreign exchange market. After experimenting with various combinations of principal component numbers, models, and model parameters, it was found that logistic regression and SVM performed relatively well.

The logistic regression model achieved an accuracy of 0.63 and a ROC-AUC of 0.74. It proved considerably more reliable at predicting downward movements than upward ones, as the sketch below makes concrete.

SVM achieved a higher accuracy of 0.68, albeit with a lower ROC-AUC of 0.66, and delivered a more balanced performance across the two classes than logistic regression. Its overall performance, however, remained limited.
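To make this imbalance concrete, the short sketch below derives the per-class recall directly from the logistic regression confusion matrix reported in the output further down; the matrix values are copied from that output, and the calculation itself is standard.

import numpy as np

# Confusion matrix of the best logistic regression model (copied from the
# output below): rows = actual class (0 = down, 1 = up), columns = predicted
cm = np.array([[11, 1],
               [ 6, 1]])

recall_down = cm[0, 0] / cm[0].sum()  # 11 of 12 downward weeks recovered (~0.92)
recall_up = cm[1, 1] / cm[1].sum()    # only 1 of 7 upward weeks recovered (~0.14)
print(f"Recall (down): {recall_down:.2f}, Recall (up): {recall_up:.2f}")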

# Import necessary libraries
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, roc_auc_score, roc_curve

# Load weekly data
crypto_weekly = pd.read_csv('../../data/processed-data/weekly_crypto_returns.csv')
fx_weekly = pd.read_csv('../../data/processed-data/weekly_fx_rates.csv')

# Preprocess Data
crypto_weekly_numeric = crypto_weekly.drop(columns=['Date'], errors='ignore').select_dtypes(include=[np.number])
fx_weekly_numeric = fx_weekly.drop(columns=['Date'], errors='ignore').select_dtypes(include=[np.number])

# Add labels: 1 if the average weekly movement across FX pairs is positive, else 0
fx_weekly['FX_Label'] = fx_weekly_numeric.mean(axis=1).apply(lambda x: 1 if x > 0 else 0)

# Define Features and Target
X = crypto_weekly_numeric
y = fx_weekly['FX_Label']

# Standardize Features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split Data
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=22)

# Define Models and Parameter Grids
param_grids = {
    "Logistic Regression": {"C": [0.01, 0.1, 1, 10, 100]},
    "SVM": {"C": [0.01, 0.1, 1, 10, 100], "kernel": ['linear', 'rbf']},
    "Random Forest": {"n_estimators": [100, 200, 300], "max_depth": [3, 5, 7]},
    "XGBoost": {"n_estimators": [100, 200, 300], "learning_rate": [0.01, 0.1, 0.2], "max_depth": [3, 5, 7]}
}

# Model Classes
model_classes = {
    "Logistic Regression": LogisticRegression,
    "SVM": SVC,
    "Random Forest": RandomForestClassifier,
    "XGBoost": XGBClassifier
}

# Train and Evaluate Models
best_results = {}

for model_name, param_grid in param_grids.items():
    best_acc = 0
    best_auc = 0
    best_params = None

    for params in ParameterGrid(param_grid):
        model = model_classes[model_name](**params)
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        y_prob = model.predict_proba(X_test)[:, 1] if hasattr(model, "predict_proba") else y_pred

        acc = accuracy_score(y_test, y_pred)
        auc = roc_auc_score(y_test, y_prob)

        if auc > best_auc:
            best_acc = acc
            best_auc = auc
            best_params = params

    # Store Best Results
    best_results[model_name] = {
        "Accuracy": best_acc,
        "ROC-AUC": best_auc,
        "Best Parameters": best_params
    }

    print(f"\n{model_name} - Best Accuracy: {best_acc:.4f}, Best ROC-AUC: {best_auc:.4f}, Best Params: {best_params}")

    # Final Evaluation with Best Model
    best_model = model_classes[model_name](**best_params)
    best_model.fit(X_train, y_train)
    y_best_pred = best_model.predict(X_test)
    y_best_prob = best_model.predict_proba(X_test)[:, 1] if hasattr(best_model, "predict_proba") else y_best_pred

    # Print Reports and Confusion Matrix
    print(f"\nClassification Report for {model_name}:\n")
    print(classification_report(y_test, y_best_pred))

    conf_matrix = confusion_matrix(y_test, y_best_pred)
    print(f"\nConfusion Matrix for {model_name}:\n")
    print(conf_matrix)

    # Plot ROC Curve
    fpr, tpr, _ = roc_curve(y_test, y_best_prob, pos_label=1)
    plt.figure(figsize=(8, 6))
    plt.plot(fpr, tpr, label=f'{model_name} (AUC = {best_auc:.4f})')
    plt.plot([0, 1], [0, 1], linestyle='--', color='gray')
    plt.title(f"ROC Curve: {model_name}")
    plt.xlabel("False Positive Rate")
    plt.ylabel("True Positive Rate")
    plt.legend()
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.show()

Logistic Regression - Best Accuracy: 0.6316, Best ROC-AUC: 0.7381, Best Params: {'C': 0.1}

Classification Report for Logistic Regression:

              precision    recall  f1-score   support

           0       0.65      0.92      0.76        12
           1       0.50      0.14      0.22         7

    accuracy                           0.63        19
   macro avg       0.57      0.53      0.49        19
weighted avg       0.59      0.63      0.56        19


Confusion Matrix for Logistic Regression:

[[11  1]
 [ 6  1]]


SVM - Best Accuracy: 0.6842, Best ROC-AUC: 0.6607, Best Params: {'C': 100, 'kernel': 'linear'}

Classification Report for SVM:

              precision    recall  f1-score   support

           0       0.75      0.75      0.75        12
           1       0.57      0.57      0.57         7

    accuracy                           0.68        19
   macro avg       0.66      0.66      0.66        19
weighted avg       0.68      0.68      0.68        19


Confusion Matrix for SVM:

[[9 3]
 [3 4]]


Random Forest - Best Accuracy: 0.4737, Best ROC-AUC: 0.6190, Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report for Random Forest:

              precision    recall  f1-score   support

           0       0.57      0.67      0.62        12
           1       0.20      0.14      0.17         7

    accuracy                           0.47        19
   macro avg       0.39      0.40      0.39        19
weighted avg       0.43      0.47      0.45        19


Confusion Matrix for Random Forest:

[[8 4]
 [6 1]]


XGBoost - Best Accuracy: 0.5263, Best ROC-AUC: 0.5833, Best Params: {'learning_rate': 0.2, 'max_depth': 3, 'n_estimators': 300}

Classification Report for XGBoost:

              precision    recall  f1-score   support

           0       0.60      0.75      0.67        12
           1       0.25      0.14      0.18         7

    accuracy                           0.53        19
   macro avg       0.42      0.45      0.42        19
weighted avg       0.47      0.53      0.49        19


Confusion Matrix for XGBoost:

[[9 3]
 [6 1]]

Multivariate Classification Analysis

We then fitted models to the overall weekly movements of the foreign exchange market using the three quartile-based categories. After experimenting with various combinations of principal component numbers, models, and model parameters, Logistic Regression and SVM again performed relatively well. Logistic Regression achieved an accuracy of 0.58 and was comparatively balanced across the three categories; SVM achieved an accuracy of 0.53, and both models showed limited ability to predict extreme market conditions.

Subsequently, we attempted to forecast the rise and fall of individual FX currencies, with the objective of identifying the best forecasting model for each. The forecasts for the Japanese Yen (JPY=X), Indonesian Rupiah (IDR=X), and Mexican Peso (MXN=X) performed best, with the Japanese Yen achieving the highest accuracy of 0.68. Consequently, the models for these three currencies were selected for evaluation, and the corresponding confusion matrix plots were generated.

# Import necessary libraries
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, ParameterGrid
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score

# Suppress warnings
import warnings
warnings.filterwarnings("ignore")

# Load weekly data
fx_weekly = pd.read_csv('../../data/processed-data/weekly_fx_rates.csv')
fx_weekly.set_index('Date', inplace=True)
crypto_weekly = pd.read_csv('../../data/processed-data/weekly_crypto_returns.csv')
crypto_weekly.set_index('Date', inplace=True)

# Function to Categorize
def categorize_market_trend(data):
    overall_change = data.mean(axis=1)
    upper_quartile = overall_change.quantile(0.75)
    lower_quartile = overall_change.quantile(0.25)
    def assign_label(value):
        if value > upper_quartile:
            return 2
        elif value <= lower_quartile:
            return 0
        else:
            return 1
    
    data['Market_Category'] = overall_change.apply(assign_label)
    return data

fx_weekly = categorize_market_trend(fx_weekly)

# Standardize data
scaler = StandardScaler()
crypto_scaled = scaler.fit_transform(crypto_weekly)

# Split data
X = crypto_scaled
y = fx_weekly['Market_Category']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define models and params
model_classes = {
    "Logistic Regression": LogisticRegression,
    "SVM": SVC,
    "Random Forest": RandomForestClassifier,
    "XGBoost": XGBClassifier
}
param_grids = {
    "Logistic Regression": {"C": [0.01, 0.1, 1, 10], "multi_class": ['multinomial']},
    "SVM": {"C": [0.1, 1, 10], "kernel": ['linear', 'rbf']},
    "Random Forest": {"n_estimators": [100, 200, 300], "max_depth": [3, 5, 7]},
    "XGBoost": {
        "n_estimators": [100, 200, 300],
        "learning_rate": [0.01, 0.1, 0.2],
        "max_depth": [3, 5, 7]
    }
}

# Training and evaluation
def evaluate_model(model, X_train, X_test, y_train, y_test):
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    report = classification_report(y_test, y_pred)
    matrix = confusion_matrix(y_test, y_pred)
    acc = accuracy_score(y_test, y_pred)
    return acc, report, matrix

best_results = {}
for model_name, param_grid in param_grids.items():
    best_score = -np.inf
    best_model = None
    best_params = None
    best_report = None
    best_matrix = None

    for params in ParameterGrid(param_grid):
        model = model_classes[model_name](**params)
        acc, report, matrix = evaluate_model(model, X_train, X_test, y_train, y_test)

        if acc > best_score:
            best_score = acc
            best_model = model
            best_params = params
            best_report = report
            best_matrix = matrix

    # Store the report and confusion matrix of the best configuration,
    # not of the last parameter combination evaluated
    best_results[model_name] = {
        "Best Accuracy": best_score,
        "Best Params": best_params,
        "Report": best_report,
        "Confusion Matrix": best_matrix
    }

# Print results
for model_name, result in best_results.items():
    print(f"\n{model_name} - Best Accuracy: {result['Best Accuracy']:.4f}")
    print(f"Best Params: {result['Best Params']}")
    print(f"\nClassification Report:\n{result['Report']}")
    print(f"Confusion Matrix:\n{result['Confusion Matrix']}\n")
Market_Category
1    45
0    23
2    23
Name: count, dtype: int64

Logistic Regression - Best Accuracy: 0.5789
Best Params: {'C': 0.1, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         3
           1       0.50      0.78      0.61         9
           2       1.00      0.14      0.25         7

    accuracy                           0.47        19
   macro avg       0.58      0.42      0.38        19
weighted avg       0.64      0.47      0.43        19

Confusion Matrix:
[[1 2 0]
 [2 7 0]
 [1 5 1]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 1, 'kernel': 'linear'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.62      0.56      0.59         9
           2       0.80      0.57      0.67         7

    accuracy                           0.47        19
   macro avg       0.48      0.38      0.42        19
weighted avg       0.59      0.47      0.52        19

Confusion Matrix:
[[0 2 1]
 [4 5 0]
 [2 1 4]]


Random Forest - Best Accuracy: 0.4737
Best Params: {'max_depth': 3, 'n_estimators': 200}

Classification Report:
              precision    recall  f1-score   support

           0       0.33      0.33      0.33         3
           1       0.43      0.67      0.52         9
           2       0.50      0.14      0.22         7

    accuracy                           0.42        19
   macro avg       0.42      0.38      0.36        19
weighted avg       0.44      0.42      0.38        19

Confusion Matrix:
[[1 2 0]
 [2 6 1]
 [0 6 1]]


XGBoost - Best Accuracy: 0.4737
Best Params: {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 200}

Classification Report:
              precision    recall  f1-score   support

           0       0.14      0.33      0.20         3
           1       0.62      0.56      0.59         9
           2       0.50      0.29      0.36         7

    accuracy                           0.42        19
   macro avg       0.42      0.39      0.38        19
weighted avg       0.50      0.42      0.44        19

Confusion Matrix:
[[1 1 1]
 [3 5 1]
 [3 2 2]]
# Load weekly data
fx_weekly = pd.read_csv('../../data/processed-data/weekly_fx_rates.csv')
crypto_weekly = pd.read_csv('../../data/processed-data/weekly_crypto_returns.csv')

# Set Date as index
fx_weekly.set_index('Date', inplace=True)
crypto_weekly.set_index('Date', inplace=True)

# Function to Categorize
def categorize_currency_trend(data):
    upper_quartile = data.quantile(0.75)
    lower_quartile = data.quantile(0.25)
    def assign_label(value):
        if value > upper_quartile:
            return 2
        elif value <= lower_quartile:
            return 0
        else:
            return 1
    return data.apply(assign_label)

# Standardize data
scaler = StandardScaler()
crypto_scaled = scaler.fit_transform(crypto_weekly)

# Define models and params
model_classes = {
    "Logistic Regression": LogisticRegression,
    "SVM": SVC,
    "Random Forest": RandomForestClassifier,
    "XGBoost": XGBClassifier
}
param_grids = {
    "Logistic Regression": {"C": [0.01, 0.1, 1, 10], "multi_class": ['multinomial']},
    "SVM": {"C": [0.1, 1, 10], "kernel": ['linear', 'rbf']},
    "Random Forest": {"n_estimators": [100, 200, 300], "max_depth": [3, 5, 7]},
    "XGBoost": {
        "n_estimators": [100, 200, 300],
        "learning_rate": [0.01, 0.1, 0.2],
        "max_depth": [3, 5, 7],
        "use_label_encoder": [False]
    }
}

# Evaluation function
def evaluate_model(model, X_train, X_test, y_train, y_test):
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    report = classification_report(y_test, y_pred)
    matrix = confusion_matrix(y_test, y_pred)
    acc = accuracy_score(y_test, y_pred)
    return report, matrix, acc

# Training and evaluation per FX currency
best_results = {}
for fx in fx_weekly.columns:
    print(f"\nEvaluating models for FX currency: {fx}")
    y = categorize_currency_trend(fx_weekly[fx])

    # Split data
    X_train, X_test, y_train, y_test = train_test_split(
        crypto_scaled, y, test_size=0.2, random_state=42
    )

    best_fx_result = {}
    for model_name, param_grid in param_grids.items():
        best_score = -np.inf
        best_model = None
        best_params = None
        best_report = None
        best_matrix = None

        for params in ParameterGrid(param_grid):
            model = model_classes[model_name](**params)
            report, matrix, acc = evaluate_model(model, X_train, X_test, y_train, y_test)

            if acc > best_score:
                best_score = acc
                best_model = model
                best_params = params
                best_report = report
                best_matrix = matrix

        # Store the best results for this FX (the report and matrix of the
        # best configuration, not of the last combination evaluated)
        best_fx_result[model_name] = {
            "Best Accuracy": best_score,
            "Best Params": best_params,
            "Report": best_report,
            "Confusion Matrix": best_matrix
        }

    best_results[fx] = best_fx_result

    # Print results
    print(f"\nResults for {fx}:")
    for model_name, result in best_fx_result.items():
        print(f"\n{model_name} - Best Accuracy: {result['Best Accuracy']:.4f}")
        print(f"Best Params: {result['Best Params']}")
        print(f"\nClassification Report:\n{result['Report']}")
        print(f"Confusion Matrix:\n{result['Confusion Matrix']}\n")

Evaluating models for FX currency: AUDUSD=X

Results for AUDUSD=X:

Logistic Regression - Best Accuracy: 0.5789
Best Params: {'C': 0.1, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.40      0.67      0.50         3
           1       0.50      0.50      0.50        10
           2       0.50      0.33      0.40         6

    accuracy                           0.47        19
   macro avg       0.47      0.50      0.47        19
weighted avg       0.48      0.47      0.47        19

Confusion Matrix:
[[2 1 0]
 [3 5 2]
 [0 4 2]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 0.1, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         3
           1       0.67      0.40      0.50        10
           2       0.33      0.50      0.40         6

    accuracy                           0.42        19
   macro avg       0.42      0.41      0.40        19
weighted avg       0.50      0.42      0.43        19

Confusion Matrix:
[[1 2 0]
 [0 4 6]
 [3 0 3]]


Random Forest - Best Accuracy: 0.4211
Best Params: {'max_depth': 3, 'n_estimators': 200}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.55      0.60      0.57        10
           2       0.40      0.33      0.36         6

    accuracy                           0.42        19
   macro avg       0.32      0.31      0.31        19
weighted avg       0.41      0.42      0.42        19

Confusion Matrix:
[[0 2 1]
 [2 6 2]
 [1 3 2]]


XGBoost - Best Accuracy: 0.5263
Best Params: {'learning_rate': 0.2, 'max_depth': 3, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.50      0.40      0.44        10
           2       0.33      0.50      0.40         6

    accuracy                           0.37        19
   macro avg       0.28      0.30      0.28        19
weighted avg       0.37      0.37      0.36        19

Confusion Matrix:
[[0 2 1]
 [1 4 5]
 [1 2 3]]


Evaluating models for FX currency: CNY=X

Results for CNY=X:

Logistic Regression - Best Accuracy: 0.4737
Best Params: {'C': 0.1, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.50      0.78      0.61         9
           2       1.00      0.14      0.25         7

    accuracy                           0.42        19
   macro avg       0.50      0.31      0.29        19
weighted avg       0.61      0.42      0.38        19

Confusion Matrix:
[[0 3 0]
 [2 7 0]
 [2 4 1]]


SVM - Best Accuracy: 0.4737
Best Params: {'C': 0.1, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.56      0.56      0.56         9
           2       0.40      0.29      0.33         7

    accuracy                           0.37        19
   macro avg       0.32      0.28      0.30        19
weighted avg       0.41      0.37      0.39        19

Confusion Matrix:
[[0 2 1]
 [2 5 2]
 [3 2 2]]


Random Forest - Best Accuracy: 0.4211
Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.47      0.78      0.58         9
           2       1.00      0.14      0.25         7

    accuracy                           0.42        19
   macro avg       0.49      0.31      0.28        19
weighted avg       0.59      0.42      0.37        19

Confusion Matrix:
[[0 3 0]
 [2 7 0]
 [1 5 1]]


XGBoost - Best Accuracy: 0.5263
Best Params: {'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 300, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.45      0.56      0.50         9
           2       0.67      0.29      0.40         7

    accuracy                           0.37        19
   macro avg       0.37      0.28      0.30        19
weighted avg       0.46      0.37      0.38        19

Confusion Matrix:
[[0 2 1]
 [4 5 0]
 [1 4 2]]


Evaluating models for FX currency: EURUSD=X

Results for EURUSD=X:

Logistic Regression - Best Accuracy: 0.4737
Best Params: {'C': 0.1, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.67      0.57         3
           1       0.42      0.56      0.48         9
           2       0.33      0.14      0.20         7

    accuracy                           0.42        19
   macro avg       0.42      0.46      0.42        19
weighted avg       0.40      0.42      0.39        19

Confusion Matrix:
[[2 1 0]
 [2 5 2]
 [0 6 1]]


SVM - Best Accuracy: 0.5789
Best Params: {'C': 10, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.67      0.57         3
           1       0.62      0.56      0.59         9
           2       0.57      0.57      0.57         7

    accuracy                           0.58        19
   macro avg       0.57      0.60      0.58        19
weighted avg       0.59      0.58      0.58        19

Confusion Matrix:
[[2 1 0]
 [1 5 3]
 [1 2 4]]


Random Forest - Best Accuracy: 0.4211
Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.29      0.67      0.40         3
           1       0.55      0.67      0.60         9
           2       0.00      0.00      0.00         7

    accuracy                           0.42        19
   macro avg       0.28      0.44      0.33        19
weighted avg       0.30      0.42      0.35        19

Confusion Matrix:
[[2 1 0]
 [2 6 1]
 [3 4 0]]


XGBoost - Best Accuracy: 0.5789
Best Params: {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 200, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.33      0.67      0.44         3
           1       0.62      0.56      0.59         9
           2       0.40      0.29      0.33         7

    accuracy                           0.47        19
   macro avg       0.45      0.50      0.46        19
weighted avg       0.50      0.47      0.47        19

Confusion Matrix:
[[2 0 1]
 [2 5 2]
 [2 3 2]]


Evaluating models for FX currency: GBPUSD=X

Results for GBPUSD=X:

Logistic Regression - Best Accuracy: 0.4211
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         5
           1       0.47      0.88      0.61         8
           2       0.00      0.00      0.00         6

    accuracy                           0.37        19
   macro avg       0.16      0.29      0.20        19
weighted avg       0.20      0.37      0.26        19

Confusion Matrix:
[[0 4 1]
 [1 7 0]
 [2 4 0]]


SVM - Best Accuracy: 0.4211
Best Params: {'C': 0.1, 'kernel': 'linear'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         5
           1       0.45      0.62      0.53         8
           2       0.50      0.50      0.50         6

    accuracy                           0.42        19
   macro avg       0.32      0.38      0.34        19
weighted avg       0.35      0.42      0.38        19

Confusion Matrix:
[[0 3 2]
 [2 5 1]
 [0 3 3]]


Random Forest - Best Accuracy: 0.5263
Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       1.00      0.20      0.33         5
           1       0.44      0.88      0.58         8
           2       0.50      0.17      0.25         6

    accuracy                           0.47        19
   macro avg       0.65      0.41      0.39        19
weighted avg       0.61      0.47      0.41        19

Confusion Matrix:
[[1 4 0]
 [0 7 1]
 [0 5 1]]


XGBoost - Best Accuracy: 0.5263
Best Params: {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 200, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.40      0.44         5
           1       0.29      0.25      0.27         8
           2       0.38      0.50      0.43         6

    accuracy                           0.37        19
   macro avg       0.39      0.38      0.38        19
weighted avg       0.37      0.37      0.36        19

Confusion Matrix:
[[2 2 1]
 [2 2 4]
 [0 3 3]]


Evaluating models for FX currency: HKD=X

Results for HKD=X:

Logistic Regression - Best Accuracy: 0.5263
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.33      0.20      0.25         5
           1       0.45      0.50      0.48        10
           2       0.20      0.25      0.22         4

    accuracy                           0.37        19
   macro avg       0.33      0.32      0.32        19
weighted avg       0.37      0.37      0.36        19

Confusion Matrix:
[[1 3 1]
 [2 5 3]
 [0 3 1]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 0.1, 'kernel': 'linear'}

Classification Report:
              precision    recall  f1-score   support

           0       0.67      0.40      0.50         5
           1       0.54      0.70      0.61        10
           2       0.33      0.25      0.29         4

    accuracy                           0.53        19
   macro avg       0.51      0.45      0.46        19
weighted avg       0.53      0.53      0.51        19

Confusion Matrix:
[[2 3 0]
 [1 7 2]
 [0 3 1]]


Random Forest - Best Accuracy: 0.5263
Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         5
           1       0.47      0.80      0.59        10
           2       0.00      0.00      0.00         4

    accuracy                           0.42        19
   macro avg       0.16      0.27      0.20        19
weighted avg       0.25      0.42      0.31        19

Confusion Matrix:
[[0 5 0]
 [1 8 1]
 [0 4 0]]


XGBoost - Best Accuracy: 0.5789
Best Params: {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 200, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       1.00      0.20      0.33         5
           1       0.58      0.70      0.64        10
           2       0.33      0.50      0.40         4

    accuracy                           0.53        19
   macro avg       0.64      0.47      0.46        19
weighted avg       0.64      0.53      0.51        19

Confusion Matrix:
[[1 3 1]
 [0 7 3]
 [0 2 2]]


Evaluating models for FX currency: IDR=X

Results for IDR=X:

Logistic Regression - Best Accuracy: 0.5263
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.40      0.50      0.44         4
           1       0.42      0.50      0.45        10
           2       0.00      0.00      0.00         5

    accuracy                           0.37        19
   macro avg       0.27      0.33      0.30        19
weighted avg       0.30      0.37      0.33        19

Confusion Matrix:
[[2 2 0]
 [3 5 2]
 [0 5 0]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 0.1, 'kernel': 'linear'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.50      0.33         4
           1       0.33      0.20      0.25        10
           2       0.40      0.40      0.40         5

    accuracy                           0.32        19
   macro avg       0.33      0.37      0.33        19
weighted avg       0.33      0.32      0.31        19

Confusion Matrix:
[[2 2 0]
 [5 2 3]
 [1 2 2]]


Random Forest - Best Accuracy: 0.6316
Best Params: {'max_depth': 3, 'n_estimators': 200}

Classification Report:
              precision    recall  f1-score   support

           0       0.40      0.50      0.44         4
           1       0.50      0.50      0.50        10
           2       0.50      0.40      0.44         5

    accuracy                           0.47        19
   macro avg       0.47      0.47      0.46        19
weighted avg       0.48      0.47      0.47        19

Confusion Matrix:
[[2 2 0]
 [3 5 2]
 [0 3 2]]


XGBoost - Best Accuracy: 0.6316
Best Params: {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.40      0.50      0.44         4
           1       0.55      0.60      0.57        10
           2       0.67      0.40      0.50         5

    accuracy                           0.53        19
   macro avg       0.54      0.50      0.51        19
weighted avg       0.55      0.53      0.53        19

Confusion Matrix:
[[2 2 0]
 [3 6 1]
 [0 3 2]]


Evaluating models for FX currency: INR=X

Results for INR=X:

Logistic Regression - Best Accuracy: 0.3158
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.14      0.18         7
           1       0.17      0.29      0.21         7
           2       0.33      0.20      0.25         5

    accuracy                           0.21        19
   macro avg       0.25      0.21      0.21        19
weighted avg       0.24      0.21      0.21        19

Confusion Matrix:
[[1 6 0]
 [3 2 2]
 [0 4 1]]


SVM - Best Accuracy: 0.3684
Best Params: {'C': 0.1, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         7
           1       0.14      0.29      0.19         7
           2       0.00      0.00      0.00         5

    accuracy                           0.11        19
   macro avg       0.05      0.10      0.06        19
weighted avg       0.05      0.11      0.07        19

Confusion Matrix:
[[0 7 0]
 [2 2 3]
 [0 5 0]]


Random Forest - Best Accuracy: 0.3158
Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.40      0.29      0.33         7
           1       0.31      0.57      0.40         7
           2       0.00      0.00      0.00         5

    accuracy                           0.32        19
   macro avg       0.24      0.29      0.24        19
weighted avg       0.26      0.32      0.27        19

Confusion Matrix:
[[2 5 0]
 [2 4 1]
 [1 4 0]]


XGBoost - Best Accuracy: 0.4211
Best Params: {'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.29      0.36         7
           1       0.30      0.43      0.35         7
           2       0.20      0.20      0.20         5

    accuracy                           0.32        19
   macro avg       0.33      0.30      0.31        19
weighted avg       0.35      0.32      0.32        19

Confusion Matrix:
[[2 4 1]
 [1 3 3]
 [1 3 1]]


Evaluating models for FX currency: JPY=X

Results for JPY=X:

Logistic Regression - Best Accuracy: 0.4737
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         5
           1       0.47      0.78      0.58         9
           2       0.33      0.20      0.25         5

    accuracy                           0.42        19
   macro avg       0.27      0.33      0.28        19
weighted avg       0.31      0.42      0.34        19

Confusion Matrix:
[[0 4 1]
 [1 7 1]
 [0 4 1]]


SVM - Best Accuracy: 0.6842
Best Params: {'C': 10, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.67      0.40      0.50         5
           1       0.67      0.89      0.76         9
           2       0.75      0.60      0.67         5

    accuracy                           0.68        19
   macro avg       0.69      0.63      0.64        19
weighted avg       0.69      0.68      0.67        19

Confusion Matrix:
[[2 3 0]
 [0 8 1]
 [1 1 3]]


Random Forest - Best Accuracy: 0.5263
Best Params: {'max_depth': 3, 'n_estimators': 200}

Classification Report:
              precision    recall  f1-score   support

           0       0.33      0.20      0.25         5
           1       0.50      0.78      0.61         9
           2       0.50      0.20      0.29         5

    accuracy                           0.47        19
   macro avg       0.44      0.39      0.38        19
weighted avg       0.46      0.47      0.43        19

Confusion Matrix:
[[1 4 0]
 [1 7 1]
 [1 3 1]]


XGBoost - Best Accuracy: 0.4737
Best Params: {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.67      0.40      0.50         5
           1       0.46      0.67      0.55         9
           2       0.00      0.00      0.00         5

    accuracy                           0.42        19
   macro avg       0.38      0.36      0.35        19
weighted avg       0.39      0.42      0.39        19

Confusion Matrix:
[[2 2 1]
 [1 6 2]
 [0 5 0]]


Evaluating models for FX currency: MXN=X

Results for MXN=X:

Logistic Regression - Best Accuracy: 0.4737
Best Params: {'C': 10, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.25      0.33         4
           1       0.43      0.75      0.55         8
           2       0.67      0.29      0.40         7

    accuracy                           0.47        19
   macro avg       0.53      0.43      0.43        19
weighted avg       0.53      0.47      0.45        19

Confusion Matrix:
[[1 3 0]
 [1 6 1]
 [0 5 2]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 10, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.29      0.50      0.36         4
           1       0.57      0.50      0.53         8
           2       0.80      0.57      0.67         7

    accuracy                           0.53        19
   macro avg       0.55      0.52      0.52        19
weighted avg       0.60      0.53      0.55        19

Confusion Matrix:
[[2 2 0]
 [3 4 1]
 [2 1 4]]


Random Forest - Best Accuracy: 0.5789
Best Params: {'max_depth': 5, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.29      0.50      0.36         4
           1       0.50      0.62      0.56         8
           2       0.50      0.14      0.22         7

    accuracy                           0.42        19
   macro avg       0.43      0.42      0.38        19
weighted avg       0.45      0.42      0.39        19

Confusion Matrix:
[[2 1 1]
 [3 5 0]
 [2 4 1]]


XGBoost - Best Accuracy: 0.5789
Best Params: {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.40      0.50      0.44         4
           1       0.55      0.75      0.63         8
           2       0.67      0.29      0.40         7

    accuracy                           0.53        19
   macro avg       0.54      0.51      0.49        19
weighted avg       0.56      0.53      0.51        19

Confusion Matrix:
[[2 1 1]
 [2 6 0]
 [1 4 2]]


Evaluating models for FX currency: MYR=X

Results for MYR=X:

Logistic Regression - Best Accuracy: 0.4737
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.17      0.25      0.20         4
           1       0.45      0.50      0.48        10
           2       0.50      0.20      0.29         5

    accuracy                           0.37        19
   macro avg       0.37      0.32      0.32        19
weighted avg       0.41      0.37      0.37        19

Confusion Matrix:
[[1 3 0]
 [4 5 1]
 [1 3 1]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 0.1, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.25      0.25         4
           1       0.55      0.60      0.57        10
           2       0.75      0.60      0.67         5

    accuracy                           0.53        19
   macro avg       0.52      0.48      0.50        19
weighted avg       0.54      0.53      0.53        19

Confusion Matrix:
[[1 3 0]
 [3 6 1]
 [0 2 3]]


Random Forest - Best Accuracy: 0.4737
Best Params: {'max_depth': 5, 'n_estimators': 300}

Classification Report:
              precision    recall  f1-score   support

           0       0.14      0.25      0.18         4
           1       0.45      0.50      0.48        10
           2       1.00      0.20      0.33         5

    accuracy                           0.37        19
   macro avg       0.53      0.32      0.33        19
weighted avg       0.53      0.37      0.38        19

Confusion Matrix:
[[1 3 0]
 [5 5 0]
 [1 3 1]]


XGBoost - Best Accuracy: 0.4211
Best Params: {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.17      0.25      0.20         4
           1       0.44      0.40      0.42        10
           2       0.50      0.40      0.44         5

    accuracy                           0.37        19
   macro avg       0.37      0.35      0.36        19
weighted avg       0.40      0.37      0.38        19

Confusion Matrix:
[[1 3 0]
 [4 4 2]
 [1 2 2]]


Evaluating models for FX currency: NZDUSD=X

Results for NZDUSD=X:

Logistic Regression - Best Accuracy: 0.4211
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.33      0.67      0.44         3
           1       0.45      0.50      0.48        10
           2       0.00      0.00      0.00         6

    accuracy                           0.37        19
   macro avg       0.26      0.39      0.31        19
weighted avg       0.29      0.37      0.32        19

Confusion Matrix:
[[2 1 0]
 [3 5 2]
 [1 5 0]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 0.1, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.44      0.40      0.42        10
           2       0.57      0.67      0.62         6

    accuracy                           0.42        19
   macro avg       0.34      0.36      0.35        19
weighted avg       0.41      0.42      0.42        19

Confusion Matrix:
[[0 3 0]
 [3 4 3]
 [0 2 4]]


Random Forest - Best Accuracy: 0.5263
Best Params: {'max_depth': 5, 'n_estimators': 200}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.50      0.60      0.55        10
           2       0.67      0.33      0.44         6

    accuracy                           0.42        19
   macro avg       0.39      0.31      0.33        19
weighted avg       0.47      0.42      0.43        19

Confusion Matrix:
[[0 3 0]
 [3 6 1]
 [1 3 2]]


XGBoost - Best Accuracy: 0.4211
Best Params: {'learning_rate': 0.01, 'max_depth': 7, 'n_estimators': 200, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.40      0.40      0.40        10
           2       0.33      0.33      0.33         6

    accuracy                           0.32        19
   macro avg       0.24      0.24      0.24        19
weighted avg       0.32      0.32      0.32        19

Confusion Matrix:
[[0 2 1]
 [3 4 3]
 [0 4 2]]


Evaluating models for FX currency: PHP=X

Results for PHP=X:

Logistic Regression - Best Accuracy: 0.3684
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.17      0.25         6
           1       0.21      0.43      0.29         7
           2       0.00      0.00      0.00         6

    accuracy                           0.21        19
   macro avg       0.24      0.20      0.18        19
weighted avg       0.24      0.21      0.18        19

Confusion Matrix:
[[1 5 0]
 [1 3 3]
 [0 6 0]]


SVM - Best Accuracy: 0.4211
Best Params: {'C': 10, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.50      0.50         6
           1       0.33      0.43      0.38         7
           2       0.50      0.33      0.40         6

    accuracy                           0.42        19
   macro avg       0.44      0.42      0.42        19
weighted avg       0.44      0.42      0.42        19

Confusion Matrix:
[[3 3 0]
 [2 3 2]
 [1 3 2]]


Random Forest - Best Accuracy: 0.4737
Best Params: {'max_depth': 5, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.33      0.40         6
           1       0.40      0.57      0.47         7
           2       0.40      0.33      0.36         6

    accuracy                           0.42        19
   macro avg       0.43      0.41      0.41        19
weighted avg       0.43      0.42      0.41        19

Confusion Matrix:
[[2 3 1]
 [1 4 2]
 [1 3 2]]


XGBoost - Best Accuracy: 0.3158
Best Params: {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.50      0.17      0.25         6
           1       0.27      0.43      0.33         7
           2       0.33      0.33      0.33         6

    accuracy                           0.32        19
   macro avg       0.37      0.31      0.31        19
weighted avg       0.36      0.32      0.31        19

Confusion Matrix:
[[1 4 1]
 [1 3 3]
 [0 4 2]]


Evaluating models for FX currency: RUB=X

Results for RUB=X:

Logistic Regression - Best Accuracy: 0.4211
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.46      0.67      0.55         9
           2       1.00      0.14      0.25         7

    accuracy                           0.37        19
   macro avg       0.49      0.27      0.27        19
weighted avg       0.59      0.37      0.35        19

Confusion Matrix:
[[0 3 0]
 [3 6 0]
 [2 4 1]]


SVM - Best Accuracy: 0.4737
Best Params: {'C': 0.1, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.46      0.67      0.55         9
           2       0.50      0.14      0.22         7

    accuracy                           0.37        19
   macro avg       0.32      0.27      0.26        19
weighted avg       0.40      0.37      0.34        19

Confusion Matrix:
[[0 3 0]
 [2 6 1]
 [2 4 1]]


Random Forest - Best Accuracy: 0.5263
Best Params: {'max_depth': 5, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         3
           1       0.50      0.78      0.61         9
           2       1.00      0.14      0.25         7

    accuracy                           0.47        19
   macro avg       0.58      0.42      0.38        19
weighted avg       0.64      0.47      0.43        19

Confusion Matrix:
[[1 2 0]
 [2 7 0]
 [1 5 1]]


XGBoost - Best Accuracy: 0.5263
Best Params: {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 200, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         3
           1       0.50      0.67      0.57         9
           2       0.67      0.29      0.40         7

    accuracy                           0.47        19
   macro avg       0.47      0.43      0.42        19
weighted avg       0.52      0.47      0.46        19

Confusion Matrix:
[[1 2 0]
 [2 6 1]
 [1 4 2]]


Evaluating models for FX currency: SGD=X

Results for SGD=X:

Logistic Regression - Best Accuracy: 0.5789
Best Params: {'C': 0.1, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         3
           1       0.58      0.64      0.61        11
           2       0.67      0.40      0.50         5

    accuracy                           0.53        19
   macro avg       0.50      0.46      0.46        19
weighted avg       0.55      0.53      0.53        19

Confusion Matrix:
[[1 2 0]
 [3 7 1]
 [0 3 2]]


SVM - Best Accuracy: 0.6316
Best Params: {'C': 0.1, 'kernel': 'linear'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         3
           1       0.67      0.55      0.60        11
           2       0.33      0.40      0.36         5

    accuracy                           0.47        19
   macro avg       0.42      0.43      0.42        19
weighted avg       0.51      0.47      0.49        19

Confusion Matrix:
[[1 2 0]
 [1 6 4]
 [2 1 2]]


Random Forest - Best Accuracy: 0.4737
Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.20      0.33      0.25         3
           1       0.50      0.55      0.52        11
           2       0.00      0.00      0.00         5

    accuracy                           0.37        19
   macro avg       0.23      0.29      0.26        19
weighted avg       0.32      0.37      0.34        19

Confusion Matrix:
[[1 2 0]
 [3 6 2]
 [1 4 0]]


XGBoost - Best Accuracy: 0.4211
Best Params: {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 100, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.17      0.33      0.22         3
           1       0.43      0.27      0.33        11
           2       0.17      0.20      0.18         5

    accuracy                           0.26        19
   macro avg       0.25      0.27      0.25        19
weighted avg       0.32      0.26      0.28        19

Confusion Matrix:
[[1 1 1]
 [4 3 4]
 [1 3 1]]


Evaluating models for FX currency: THB=X

Results for THB=X:

Logistic Regression - Best Accuracy: 0.5263
Best Params: {'C': 0.01, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.33      0.67      0.44         3
           1       0.33      0.44      0.38         9
           2       0.00      0.00      0.00         7

    accuracy                           0.32        19
   macro avg       0.22      0.37      0.28        19
weighted avg       0.21      0.32      0.25        19

Confusion Matrix:
[[2 1 0]
 [4 4 1]
 [0 7 0]]


SVM - Best Accuracy: 0.5263
Best Params: {'C': 10, 'kernel': 'rbf'}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.33      0.29         3
           1       0.58      0.78      0.67         9
           2       0.67      0.29      0.40         7

    accuracy                           0.53        19
   macro avg       0.50      0.47      0.45        19
weighted avg       0.56      0.53      0.51        19

Confusion Matrix:
[[1 2 0]
 [1 7 1]
 [2 3 2]]


Random Forest - Best Accuracy: 0.4737
Best Params: {'max_depth': 3, 'n_estimators': 100}

Classification Report:
              precision    recall  f1-score   support

           0       0.20      0.33      0.25         3
           1       0.43      0.67      0.52         9
           2       0.00      0.00      0.00         7

    accuracy                           0.37        19
   macro avg       0.21      0.33      0.26        19
weighted avg       0.23      0.37      0.29        19

Confusion Matrix:
[[1 2 0]
 [3 6 0]
 [1 6 0]]


XGBoost - Best Accuracy: 0.4211
Best Params: {'learning_rate': 0.01, 'max_depth': 3, 'n_estimators': 200, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.67      0.36         3
           1       0.50      0.56      0.53         9
           2       0.00      0.00      0.00         7

    accuracy                           0.37        19
   macro avg       0.25      0.41      0.30        19
weighted avg       0.28      0.37      0.31        19

Confusion Matrix:
[[2 1 0]
 [3 5 1]
 [3 4 0]]


Evaluating models for FX currency: ZAR=X

Results for ZAR=X:

Logistic Regression - Best Accuracy: 0.4211
Best Params: {'C': 1, 'multi_class': 'multinomial'}

Classification Report:
              precision    recall  f1-score   support

           0       0.33      0.20      0.25         5
           1       0.38      0.71      0.50         7
           2       0.67      0.29      0.40         7

    accuracy                           0.42        19
   macro avg       0.46      0.40      0.38        19
weighted avg       0.48      0.42      0.40        19

Confusion Matrix:
[[1 4 0]
 [1 5 1]
 [1 4 2]]


SVM - Best Accuracy: 0.3684
Best Params: {'C': 0.1, 'kernel': 'linear'}

Classification Report:
              precision    recall  f1-score   support

           0       0.14      0.20      0.17         5
           1       0.50      0.57      0.53         7
           2       0.50      0.29      0.36         7

    accuracy                           0.37        19
   macro avg       0.38      0.35      0.35        19
weighted avg       0.41      0.37      0.37        19

Confusion Matrix:
[[1 3 1]
 [2 4 1]
 [4 1 2]]


Random Forest - Best Accuracy: 0.4211
Best Params: {'max_depth': 5, 'n_estimators': 300}

Classification Report:
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         5
           1       0.40      0.86      0.55         7
           2       1.00      0.14      0.25         7

    accuracy                           0.37        19
   macro avg       0.47      0.33      0.27        19
weighted avg       0.52      0.37      0.29        19

Confusion Matrix:
[[0 5 0]
 [1 6 0]
 [2 4 1]]


XGBoost - Best Accuracy: 0.4737
Best Params: {'learning_rate': 0.2, 'max_depth': 5, 'n_estimators': 200, 'use_label_encoder': False}

Classification Report:
              precision    recall  f1-score   support

           0       0.25      0.20      0.22         5
           1       0.42      0.71      0.53         7
           2       0.67      0.29      0.40         7

    accuracy                           0.42        19
   macro avg       0.44      0.40      0.38        19
weighted avg       0.46      0.42      0.40        19

Confusion Matrix:
[[1 4 0]
 [1 5 1]
 [2 3 2]]

from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns

# Plot a labelled confusion-matrix heatmap for one model's predictions
def plot_confusion_matrix(y_true, y_pred, title):
    cm = confusion_matrix(y_true, y_pred)
    sns.heatmap(cm, annot=True, cmap="Blues", fmt="d", cbar=False)
    plt.title(f"Confusion Matrix - {title}")
    plt.xlabel("Predicted")
    plt.ylabel("Actual")
    plt.show()

# Best model and hyperparameters selected for each FX currency
selected_models = {
    "JPY=X": ("SVM", {'C': 10, 'kernel': 'rbf'}),
    "IDR=X": ("Random Forest", {'max_depth': 3, 'n_estimators': 200}),
    "MXN=X": ("XGBoost", {'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 100, 'use_label_encoder': False})
}

for fx, (model_name, params) in selected_models.items():
    # Quartile-based multiclass labels, matching the encoding described above:
    # 0 = larger fall (<= 25th percentile), 1 = smoother movement, 2 = larger rise (> 75th percentile)
    q25, q75 = fx_weekly[fx].quantile([0.25, 0.75])
    y = fx_weekly[fx].apply(lambda x: 2 if x > q75 else (0 if x <= q25 else 1))
    X_train, X_test, y_train, y_test = train_test_split(crypto_scaled, y, test_size=0.2, random_state=42)
    if model_name == "SVM":
        model = SVC(probability=True, **params)
    elif model_name == "Random Forest":
        model = RandomForestClassifier(random_state=22, **params)
    elif model_name == "XGBoost":
        model = XGBClassifier(**params)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    plot_confusion_matrix(y_test, y_pred, f"{fx} - {model_name}")
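
For context, the "Best Accuracy" and "Best Params" values printed above were produced by an exhaustive hyperparameter search scored on the held-out test split. A minimal sketch of that procedure is given below, assuming scikit-learn's GridSearchCV, the train/test split defined above, and illustrative candidate grids; the grids actually searched may have covered different values.

from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# Hypothetical candidate grids; the grids behind the reported results may differ
candidates = {
    "Logistic Regression": (LogisticRegression(max_iter=1000),
                            {'C': [0.1, 1, 10], 'multi_class': ['multinomial']}),
    "SVM": (SVC(), {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}),
    "Random Forest": (RandomForestClassifier(random_state=22),
                      {'max_depth': [3, 5, 7], 'n_estimators': [100, 200, 300]}),
    "XGBoost": (XGBClassifier(eval_metric='mlogloss', use_label_encoder=False),
                {'learning_rate': [0.01, 0.1, 0.2], 'max_depth': [3, 5, 7],
                 'n_estimators': [100, 200]}),
}

for name, (estimator, grid) in candidates.items():
    # Cross-validated search on the training split, then score on the test split
    search = GridSearchCV(estimator, grid, scoring='accuracy', cv=5)
    search.fit(X_train, y_train)
    print(f"{name} - Best Accuracy: {search.score(X_test, y_test):.4f}")
    print(f"Best Params: {search.best_params_}")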

Discussion

Result Interpretation

Modelling and forecasting data from the cryptocurrency and foreign exchange markets revealed notable differences in model performance across the regression, binary classification, and multiclass classification analyses. The regression results indicate that forecasting overall FX market volatility is difficult; predictive performance improves, however, when individual currencies are examined, particularly the Japanese Yen and the Mexican Peso. The classification results, in turn, suggest that the models have some capacity to distinguish FX market upturns from downturns, but are less effective at predicting extreme volatility.

Model Performance Comparison

The XGBoost regression model performs best at forecasting volatility across the foreign exchange market as a whole, while both the XGBoost and linear regression models prove more effective at predicting the movements of individual FX currencies. In the binary and multiclass classification analyses, logistic regression and support vector machine models are strongest at forecasting the direction and magnitude of overall FX market movements, while the Random Forest and XGBoost classifiers achieve comparable predictive performance on individual FX currencies.
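
As a quick illustration of such a comparison, the best multiclass test accuracies reported above for ZAR=X can be tabulated side by side. The sketch below uses pandas, with the values copied directly from the output above; it introduces no new results.

import pandas as pd

# Best multiclass test accuracies for ZAR=X, taken from the results above
zar_best = pd.Series({
    'Logistic Regression': 0.4211,
    'SVM': 0.3684,
    'Random Forest': 0.4211,
    'XGBoost': 0.4737,
}, name='ZAR=X best accuracy')

print(zar_best.sort_values(ascending=False))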

Insights Gained

  • The intricate, non-linear relationships inherent in the FX market make whole-market forecasting exceptionally challenging, as the modest accuracies of the models above demonstrate.

  • Modelling individual currencies shows some predictive potential, suggesting that the models are more sensitive to specific currency pairs and market environments.

  • The ensemble learning algorithm XGBoost demonstrated robust performance across a range of analyses, while linear and logistic regression delivered relatively stable results on tasks with a more straightforward structure.