The content of this page comes from Jay Alammar's excellent tutorial:
Alammar, J (2019). A Visual Intro to NumPy and Data Representation [Blog post]. Retrieved from https://jalammar.github.io/visual-numpy/
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
After reading and understanding this page, and after doing some research and a few tutorials on Matplotlib, I suggest you then complete the following exercise:
A Visual Intro to NumPy and Data Representation
The NumPy package is the workhorse of data analysis, machine learning, and scientific computing in the python ecosystem. It vastly simplifies manipulating and crunching vectors and matrices. Some of python's leading packages rely on NumPy as a fundamental piece of their infrastructure (examples include scikit-learn, SciPy, pandas, and tensorflow). Beyond the ability to slice and dice numeric data, mastering NumPy will give you an edge when dealing with and debugging advanced use cases in these libraries.
In this post, we’ll look at some of the main ways to use NumPy and how it can represent different types of data (tables, images, text, etc.) before we can serve them to machine learning models.
import numpy as np
We can create a NumPy array (a.k.a. the mighty ndarray) by passing it a python list and calling `np.array()`:
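For example (the values here are of our own choosing, standing in for the figure from the original post):

```python
import numpy as np

# Create a 1-D array from a plain python list
data = np.array([1, 2, 3])

print(data)        # the array's contents
print(data.dtype)  # NumPy infers an integer dtype from the list
```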
There are often cases when we want NumPy to initialize the values of the array for us. NumPy provides methods like ones(), zeros(), and random.random() for these cases. We just pass them the number of elements we want it to generate:
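A quick sketch of those three initializers, with sizes chosen for illustration:

```python
import numpy as np

ones = np.ones(3)           # three elements, all 1.0
zeros = np.zeros(3)         # three elements, all 0.0
rnd = np.random.random(3)   # three floats drawn uniformly from [0, 1)
```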
Once we’ve created our arrays, we can start to manipulate them in interesting ways.
Let’s create two NumPy arrays to showcase their usefulness. We’ll call them `data` and `ones`. Adding them up position-wise (i.e. adding the values of each row) is as simple as typing `data + ones`:
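A minimal sketch of that addition, with example values chosen for illustration:

```python
import numpy as np

data = np.array([1, 2])
ones = np.ones(2)

# Position-wise addition: [1 + 1, 2 + 1]
total = data + ones
```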
When I started learning such tools, I found it refreshing that an abstraction like this makes me not have to program such a calculation in loops. It’s a wonderful abstraction that allows you to think about problems at a higher level.
And it’s not only addition that we can do this way:
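The other arithmetic operators work the same element-by-element way (values chosen for illustration):

```python
import numpy as np

data = np.array([1, 2])
ones = np.ones(2)

diff = data - ones   # position-wise subtraction
prod = data * data   # position-wise multiplication
quot = data / data   # position-wise division
```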
There are often cases when we want to carry out an operation between an array and a single number (we can also call this an operation between a vector and a scalar). Say, for example, our array represents distance in miles, and we want to convert it to kilometers. We simply say `data * 1.6`:
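A sketch of that scalar multiplication (the mile values are illustrative):

```python
import numpy as np

miles = np.array([1, 2])

# The scalar 1.6 is broadcast across every element of the array
km = miles * 1.6
```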
See how NumPy understood that operation to mean that the multiplication should happen with each cell? That concept is called broadcasting, and it’s very useful.
We can index and slice NumPy arrays in all the ways we can slice python lists:
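A few of those indexing and slicing forms, on a small example array of our own choosing:

```python
import numpy as np

data = np.array([1, 2, 3])

first = data[0]       # single element by index
tail = data[1:]       # everything from index 1 onward
last_two = data[-2:]  # negative indices count from the end
```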
Additional benefits NumPy gives us are aggregation functions. In addition to `sum()`, you get all the greats like `mean()` to get the average, `prod()` to get the result of multiplying all the elements together, `std()` to get standard deviation, and plenty of others.
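Those aggregations on a small illustrative array:

```python
import numpy as np

data = np.array([1, 2, 3])

total = data.sum()     # sum of all elements
avg = data.mean()      # average
product = data.prod()  # product of all elements
spread = data.std()    # (population) standard deviation
```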
In more dimensions
All the examples we’ve looked at deal with vectors in one dimension. A key part of the beauty of NumPy is its ability to apply everything we’ve looked at so far to any number of dimensions.
We can pass python lists of lists in the following shape to have NumPy create a matrix to represent them:
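For instance, a 2×2 matrix built from a list of lists (values chosen for illustration):

```python
import numpy as np

# Each inner list becomes one row of the matrix
data = np.array([[1, 2], [3, 4]])
```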
We can also use the same methods we mentioned above (`ones()`, `zeros()`, and `random.random()`) as long as we give them a tuple describing the dimensions of the matrix we are creating:
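A sketch with a 3×2 shape chosen for illustration:

```python
import numpy as np

ones = np.ones((3, 2))            # 3 rows, 2 columns, all 1.0
zeros = np.zeros((3, 2))          # 3 rows, 2 columns, all 0.0
rnd = np.random.random((3, 2))    # 3x2 matrix of uniform floats in [0, 1)
```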
We can add and multiply matrices using arithmetic operators (`+ - * /`) if the two matrices are the same size. NumPy handles those as position-wise operations:
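A position-wise matrix addition, with values chosen for illustration:

```python
import numpy as np

data = np.array([[1, 2], [3, 4]])
ones = np.ones((2, 2))

# Same shapes, so addition happens cell by cell
total = data + ones
```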
We can get away with doing these arithmetic operations on matrices of different size only if the different dimension is one (e.g. the matrix has only one column or one row), in which case NumPy uses its broadcast rules for that operation:
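A sketch of that broadcast, with a single row being stretched across a taller matrix (values illustrative):

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6]])  # shape (3, 2)
ones_row = np.ones((1, 2))                 # shape (1, 2)

# The single row is broadcast down all three rows of data
result = data + ones_row
```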
A key distinction to make with arithmetic is the case of matrix multiplication using the dot product. NumPy gives every matrix a `dot()` method we can use to carry out dot product operations with other matrices:
The key constraint is that the two matrices have to have the same dimension on the side they face each other: the number of columns of the first matrix has to match the number of rows of the second.
Indexing and slicing operations become even more useful when we’re manipulating matrices:
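A few matrix indexing and slicing forms, on an illustrative 3×2 matrix:

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6]])

cell = data[0, 1]      # row 0, column 1
rows = data[1:3]       # rows 1 and 2, all columns
column = data[0:2, 0]  # column 0 of the first two rows
```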
We can aggregate matrices the same way we aggregated vectors:
Not only can we aggregate all the values in a matrix, but we can also aggregate across the rows or columns by using the `axis` parameter:
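A sketch of per-axis aggregation (values chosen for illustration):

```python
import numpy as np

data = np.array([[1, 2], [5, 3], [4, 6]])

overall = data.max()        # maximum over the whole matrix
per_column = data.max(axis=0)  # maximum of each column
per_row = data.max(axis=1)     # maximum of each row
```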
Transposing and Reshaping
A common need when dealing with matrices is the need to rotate them. This is often the case when we need to take the dot product of two matrices and need to align the dimension they share. NumPy arrays have a convenient property called `T` to get the transpose of a matrix:
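A sketch of transposing a 3×2 matrix into a 2×3 one (values illustrative):

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6]])  # shape (3, 2)

# .T swaps rows and columns
flipped = data.T                           # shape (2, 3)
```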
In more advanced use cases, you may find yourself needing to switch the dimensions of a certain matrix. This is often the case in machine learning applications where a certain model expects a certain shape for the inputs that is different from your dataset. NumPy's `reshape()` method is useful in these cases. You just pass it the new dimensions you want for the matrix. You can pass -1 for a dimension and NumPy can infer the correct dimension based on your matrix:
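A sketch of both forms, on six illustrative values:

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6])

a = data.reshape(2, 3)    # explicit: 2 rows, 3 columns
b = data.reshape(-1, 2)   # -1: NumPy infers 3 rows from the 6 elements
```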