Last Updated on November 15, 2022

Two-dimensional tensors are analogous to two-dimensional matrices. Like a two-dimensional matrix, a two-dimensional tensor also has $n$ rows and columns.

Let's take a grayscale image as an example, which is a two-dimensional matrix of numeric values, commonly known as pixels. Ranging from '0' to '255', each number represents a pixel intensity value. Here, the lowest intensity number (which is '0') represents black regions in the image, while the highest intensity number (which is '255') represents white regions in the image. Using the PyTorch framework, this two-dimensional image or matrix can be converted to a two-dimensional tensor.
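As a quick illustration, here is a minimal sketch using a made-up 2×3 matrix of pixel intensities (not a real image) to show how such values can be stored as a two-dimensional tensor:

```python
import torch

# A hypothetical 2x3 grayscale "image": 0 is black, 255 is white
pixels = [[0, 128, 255],
          [64, 192, 32]]

image_tensor = torch.tensor(pixels)
print(image_tensor.shape)  # 2 rows and 3 columns of pixel intensities
```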

In the previous post, we learned about one-dimensional tensors in PyTorch and applied some useful tensor operations. In this tutorial, we'll apply those operations to two-dimensional tensors using the PyTorch library. Specifically, we'll learn:

- How to create two-dimensional tensors in PyTorch and explore their types and shapes.
- About slicing and indexing operations on two-dimensional tensors in detail.
- To apply a number of methods to tensors, such as tensor addition, multiplication, and more.

Let's get started.

## Tutorial Overview

This tutorial is divided into parts; they are:

- Types and shapes of two-dimensional tensors
- Converting two-dimensional tensors into NumPy arrays
- Converting pandas series to two-dimensional tensors
- Indexing and slicing operations on two-dimensional tensors
- Operations on two-dimensional tensors

**Types and Shapes of Two-Dimensional Tensors**

Let's first import a few necessary libraries we'll use in this tutorial.

```python
import torch
import numpy as np
import pandas as pd
```

To check the types and shapes of two-dimensional tensors, we'll use the same PyTorch methods introduced previously for one-dimensional tensors. But should they work the same way they did for one-dimensional tensors?

Let's demonstrate by converting a 2D list of integers to a 2D tensor object. For example, we'll create a 2D list and apply `torch.tensor()` for conversion.

```python
example_2D_list = [[5, 10, 15, 20],
                   [25, 30, 35, 40],
                   [45, 50, 55, 60]]
list_to_tensor = torch.tensor(example_2D_list)
print("Our New 2D Tensor from 2D List is: ", list_to_tensor)
```

```
Our New 2D Tensor from 2D List is:  tensor([[ 5, 10, 15, 20],
        [25, 30, 35, 40],
        [45, 50, 55, 60]])
```

As you can see, the `torch.tensor()` method also works well for two-dimensional tensors. Now, let's use the `shape` attribute and the `size()` and `ndimension()` methods to return the shape, size, and number of dimensions of a tensor object.

```python
print("Getting the shape of tensor object: ", list_to_tensor.shape)
print("Getting the size of tensor object: ", list_to_tensor.size())
print("Getting the dimensions of tensor object: ", list_to_tensor.ndimension())
```

```
Getting the shape of tensor object:  torch.Size([3, 4])
Getting the size of tensor object:  torch.Size([3, 4])
Getting the dimensions of tensor object:  2
```
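Beyond the shape, you can also inspect the element type of a tensor. A short sketch, reusing the same list from above (since the values are Python integers, PyTorch infers an integer dtype here):

```python
import torch

list_to_tensor = torch.tensor([[5, 10, 15, 20],
                               [25, 30, 35, 40],
                               [45, 50, 55, 60]])

# dtype gives the element type; type() gives the tensor class name;
# numel() gives the total number of elements
print(list_to_tensor.dtype)   # torch.int64
print(list_to_tensor.type())  # torch.LongTensor
print(list_to_tensor.numel()) # 12
```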

**Converting Two-Dimensional Tensors to NumPy Arrays**

PyTorch allows us to convert a two-dimensional tensor to a NumPy array and then back to a tensor. Let's find out how.

```python
# Converting two_D tensor to numpy array

twoD_tensor_to_numpy = list_to_tensor.numpy()
print("Converting two_Dimensional tensor to numpy array:")
print("Numpy array after conversion: ", twoD_tensor_to_numpy)
print("Data type after conversion: ", twoD_tensor_to_numpy.dtype)

print("***************************************************************")

# Converting numpy array back to a tensor

back_to_tensor = torch.from_numpy(twoD_tensor_to_numpy)
print("Converting numpy array back to two_Dimensional tensor:")
print("Tensor after conversion:", back_to_tensor)
print("Data type after conversion: ", back_to_tensor.dtype)
```

```
Converting two_Dimensional tensor to numpy array:
Numpy array after conversion:  [[ 5 10 15 20]
 [25 30 35 40]
 [45 50 55 60]]
Data type after conversion:  int64
***************************************************************
Converting numpy array back to two_Dimensional tensor:
Tensor after conversion: tensor([[ 5, 10, 15, 20],
        [25, 30, 35, 40],
        [45, 50, 55, 60]])
Data type after conversion:  torch.int64
```
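One detail worth knowing: `numpy()` and `torch.from_numpy()` share the underlying memory rather than copying it, so modifying one side modifies the other. A small sketch:

```python
import torch

tensor = torch.tensor([[1, 2], [3, 4]])
array = tensor.numpy()  # no copy is made; both views share storage

array[0, 0] = 99        # modify the NumPy array in place
print(tensor[0, 0])     # the tensor sees the change: tensor(99)
```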

**Converting Pandas Series to Two-Dimensional Tensors**

Similarly, we can also convert a pandas DataFrame to a tensor. As with the one-dimensional tensors, we'll use the same steps for the conversion. Using the `values` attribute we'll get the NumPy array, and then use `torch.from_numpy` to convert the pandas DataFrame to a tensor.

Here is how we'll do it.

```python
# Converting Pandas Dataframe to a Tensor

dataframe = pd.DataFrame({'x':[22,24,26],'y':[42,52,62]})

print("Pandas to numpy conversion: ", dataframe.values)
print("Data type before tensor conversion: ", dataframe.values.dtype)

print("***********************************************")

pandas_to_tensor = torch.from_numpy(dataframe.values)
print("Getting new tensor: ", pandas_to_tensor)
print("Data type after conversion to tensor: ", pandas_to_tensor.dtype)
```

```
Pandas to numpy conversion:  [[22 42]
 [24 52]
 [26 62]]
Data type before tensor conversion:  int64
***********************************************
Getting new tensor:  tensor([[22, 42],
        [24, 52],
        [26, 62]])
Data type after conversion to tensor:  torch.int64
```
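The same steps also work for a single pandas Series, although in that case the result is a one-dimensional tensor. A brief sketch (with made-up values):

```python
import torch
import pandas as pd

series = pd.Series([110, 120, 130, 140])

# .values yields a NumPy array, which torch.from_numpy wraps as a tensor
series_to_tensor = torch.from_numpy(series.values)
print(series_to_tensor)              # a 1D tensor of the series values
print(series_to_tensor.ndimension()) # 1
```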

**Indexing and Slicing Operations on Two-Dimensional Tensors**

For indexing operations, different elements in a tensor object can be accessed using square brackets. You can simply put the corresponding indices in square brackets to access the desired elements in a tensor.

In the example below, we'll create a tensor and access certain elements using two different methods. Note that the index value should always be one less than where the element is located in a two-dimensional tensor.

```python
example_tensor = torch.tensor([[10, 20, 30, 40],
                               [50, 60, 70, 80],
                               [90, 100, 110, 120]])
print("Accessing element in 2nd row and 2nd column: ", example_tensor[1, 1])
print("Accessing element in 2nd row and 2nd column: ", example_tensor[1][1])

print("********************************************************")

print("Accessing element in 3rd row and 4th column: ", example_tensor[2, 3])
print("Accessing element in 3rd row and 4th column: ", example_tensor[2][3])
```

```
Accessing element in 2nd row and 2nd column:  tensor(60)
Accessing element in 2nd row and 2nd column:  tensor(60)
********************************************************
Accessing element in 3rd row and 4th column:  tensor(120)
Accessing element in 3rd row and 4th column:  tensor(120)
```
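Two related tricks may also be useful here (a sketch going slightly beyond the listing above): negative indices count from the end of a dimension, and `item()` extracts a single-element tensor as a plain Python number:

```python
import torch

example_tensor = torch.tensor([[10, 20, 30, 40],
                               [50, 60, 70, 80],
                               [90, 100, 110, 120]])

print(example_tensor[-1, -1])         # last row, last column: tensor(120)
print(example_tensor[-1, -1].item())  # as a plain Python int: 120
```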

What if we need to access two or more elements at the same time? That's where tensor slicing comes into play. Let's use the previous example to access the first two elements of the second row and the first three elements of the third row.

```python
example_tensor = torch.tensor([[10, 20, 30, 40],
                               [50, 60, 70, 80],
                               [90, 100, 110, 120]])
print("Accessing first two elements of the second row: ", example_tensor[1, 0:2])
print("Accessing first two elements of the second row: ", example_tensor[1][0:2])

print("********************************************************")

print("Accessing first three elements of the third row: ", example_tensor[2, 0:3])
print("Accessing first three elements of the third row: ", example_tensor[2][0:3])
```

```
Accessing first two elements of the second row:  tensor([50, 60])
Accessing first two elements of the second row:  tensor([50, 60])
********************************************************
Accessing first three elements of the third row:  tensor([ 90, 100, 110])
Accessing first three elements of the third row:  tensor([ 90, 100, 110])
```
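Slicing also works along the column dimension; for instance, here is a sketch extracting the entire second column by using `:` to select all rows:

```python
import torch

example_tensor = torch.tensor([[10, 20, 30, 40],
                               [50, 60, 70, 80],
                               [90, 100, 110, 120]])

second_column = example_tensor[:, 1]  # all rows, column index 1
print(second_column)                  # the values 20, 60, 100
```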

**Operations on Two-Dimensional Tensors**

While there are a lot of operations you can apply to two-dimensional tensors using the PyTorch framework, here we'll introduce you to tensor addition, and scalar and matrix multiplication.

**Adding Two-Dimensional Tensors**

Adding two tensors is similar to matrix addition. It's quite a straightforward process, as you simply need the addition (+) operator to perform the operation. Let's add two tensors in the example below.

```python
A = torch.tensor([[5, 10],
                  [50, 60],
                  [100, 200]])
B = torch.tensor([[10, 20],
                  [60, 70],
                  [200, 300]])
add = A + B
print("Adding A and B to get: ", add)
```

```
Adding A and B to get:  tensor([[ 15,  30],
        [110, 130],
        [300, 500]])
```
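The `+` operator also broadcasts: if the second operand has a compatible smaller shape (here a single row of two values, chosen purely for illustration), it is added to every row of the larger tensor. A sketch:

```python
import torch

A = torch.tensor([[5, 10],
                  [50, 60],
                  [100, 200]])
row = torch.tensor([1, 2])  # a 1D tensor with 2 elements

# The row broadcasts across all 3 rows of A
result = A + row
print(result)  # rows become (6, 12), (51, 62), (101, 202)
```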

**Scalar and Matrix Multiplication of Two-Dimensional Tensors**

Scalar multiplication in two-dimensional tensors is also identical to scalar multiplication in matrices. For instance, by multiplying a tensor by a scalar, say 4, you'll be multiplying every element in the tensor by 4.

```python
new_tensor = torch.tensor([[1, 2, 3],
                           [4, 5, 6]])
mul_scalar = 4 * new_tensor
print("result of scalar multiplication: ", mul_scalar)
```

```
result of scalar multiplication:  tensor([[ 4,  8, 12],
        [16, 20, 24]])
```

Coming to the multiplication of two-dimensional tensors, `torch.mm()` in PyTorch makes things easier for us. As with matrix multiplication in linear algebra, the number of columns in tensor object A (here 2×3) must be equal to the number of rows in tensor object B (here 3×2).

```python
A = torch.tensor([[3, 2, 1],
                  [1, 2, 1]])
B = torch.tensor([[3, 2],
                  [1, 1],
                  [2, 1]])
A_mult_B = torch.mm(A, B)
print("multiplying A with B: ", A_mult_B)
```

```
multiplying A with B:  tensor([[13,  9],
        [ 7,  5]])
```
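Note that `torch.mm()` performs true matrix multiplication; the `*` operator instead multiplies element by element and requires both tensors to have the same shape. A short sketch of the difference (using small made-up 2×2 tensors):

```python
import torch

A = torch.tensor([[1, 2],
                  [3, 4]])
B = torch.tensor([[5, 6],
                  [7, 8]])

elementwise = A * B        # each element of A times the matching element of B
matrix_prod = torch.mm(A, B)  # rows of A dotted with columns of B

print(elementwise)  # values 5, 12, 21, 32
print(matrix_prod)  # values 19, 22, 43, 50
```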

## Further Reading

Developed at the same time as TensorFlow, PyTorch used to have a simpler syntax until TensorFlow adopted Keras in its 2.x version. To learn the basics of PyTorch, you may want to read the PyTorch tutorials:

In particular, the basics of PyTorch tensors can be found in the Tensor tutorial page:

There are also quite a few books on PyTorch that are suitable for beginners. A more recently published book should be recommended, as the tools and syntax are actively evolving. One example is

**Summary**

In this tutorial, you learned about two-dimensional tensors in PyTorch.

Specifically, you learned:

- How to create two-dimensional tensors in PyTorch and explore their types and shapes.
- About slicing and indexing operations on two-dimensional tensors in detail.
- To apply a number of methods to tensors, such as tensor addition, multiplication, and more.