Is it stupid to do L2 normalization with sklearn's Normalizer for a correlation analysis on this type of dataset?

Related articles:

The Pros and Cons of Using Sklearn's Normalizer for L2 Normalization in Correlation Analysis
Correlation analysis is a widely used statistical technique in many fields, including finance, the social sciences, and engineering. One of the most common measures is the Pearson correlation coefficient, which quantifies the strength of the linear relationship between two variables.
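As a quick illustration (a minimal sketch with made-up numbers, not tied to any particular dataset), the Pearson coefficient can be computed with numpy's corrcoef:

```python
import numpy as np

# Hypothetical example data: y is roughly a linear function of x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.9])

# Pearson r = cov(x, y) / (std(x) * std(y)); np.corrcoef returns the
# full correlation matrix, so take the off-diagonal entry.
r = np.corrcoef(x, y)[0, 1]
print(r)  # close to 1, since the relationship is nearly linear
```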

Why L2 Normalization with Sklearn's Normalizer Might Not Be the Best Option for Analyzing Certain Datasets
Sklearn (scikit-learn) is a machine learning library for Python that offers many useful tools for data preprocessing and analysis. One such tool is Normalizer, a preprocessing transformer that rescales each sample (each row) of the input data to unit length, rather than scaling each feature column independently.
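To make the "unit length" point concrete, here is a small numpy sketch of what Normalizer(norm='l2') does (numpy is used here to mirror sklearn's behavior without depending on it): every row is divided by its own Euclidean norm, so each sample lands on the unit sphere and only its direction is kept.

```python
import numpy as np

# Toy dataset: rows are samples, columns are features.
X = np.array([[3.0, 4.0],
              [1.0, 0.0],
              [6.0, 8.0]])

# Row-wise L2 normalization, mirroring sklearn's Normalizer(norm="l2"):
# divide each row by its Euclidean norm.
norms = np.linalg.norm(X, axis=1, keepdims=True)
X_normalized = X / norms

print(X_normalized)
# Each row now has unit length; note that [3, 4] and [6, 8]
# collapse to the same point [0.6, 0.8] -- the magnitude
# information is discarded, only the direction survives.
```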

Making Sense of L2 Normalization: When to Use (and When to Avoid) Sklearn's Normalizer for Correlation Analysis
In data analysis, it is common to perform correlation analysis to understand the relationship between variables, which involves calculating a correlation coefficient between pairs of variables. There are several such coefficients, including the Pearson coefficient, the Spearman rank coefficient, and Kendall's tau. In this article, we will focus on the Pearson correlation coefficient.
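Why does row-wise L2 normalization matter here? Pearson's r between two features is unchanged by scaling or shifting each feature column separately (as StandardScaler does), but Normalizer rescales each row, mixing the features within every sample. A small synthetic sketch (random data, assumed purely for illustration) shows the feature-to-feature correlation being badly distorted after row-wise normalization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic positive-valued features: y is correlated with x by construction.
x = rng.uniform(1.0, 10.0, size=500)
y = 0.5 * x + rng.normal(scale=0.5, size=500)
X = np.column_stack([x, y])

# Pearson correlation between the two feature columns.
r_before = np.corrcoef(X, rowvar=False)[0, 1]

# Row-wise L2 normalization, as sklearn's Normalizer(norm="l2") would do.
X_l2 = X / np.linalg.norm(X, axis=1, keepdims=True)
r_after = np.corrcoef(X_l2, rowvar=False)[0, 1]

# r_before is strongly positive; after projecting every sample onto the
# unit circle, the two columns move in opposite directions along the arc,
# so r_after comes out negative.
print(r_before, r_after)
```

The lesson of the sketch: because Normalizer operates per sample rather than per feature, it can flip or destroy the very correlations the analysis is trying to measure, which is why column-wise scaling is usually the safer preprocessing choice before a Pearson correlation analysis.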