PyTorch: How To Use Torch Zeros To Create a Tensor Filled With Zeros

Python’s libraries related to math, data science, and data analysis are capable of some truly amazing feats. Higher-level languages like Python typically sacrifice speed for flexibility and ease of use. But third-party libraries like PyTorch, Pandas and NumPy provide remarkably efficient data processing while still retaining Python’s unique flexibility.

It’s always best to combine a library’s unique data types with its native functionality so that we can retain that efficiency. For example, if you’re using PyTorch and want to fill a tensor with zeros, you should use the library’s native functions. But before learning how to do so we need to take a quick look at how PyTorch handles tensors.

Tensors Might Be More Familiar Than You’d Imagine

People new to PyTorch might find the concept of tensors a little intimidating. Tensors are a mathematical concept best understood as a generalization of scalars, vectors, and matrices to an arbitrary number of dimensions. Machine learning systems like PyTorch leverage this concept for a number of different tasks. Using tensors for positional tracking is a classic example. But tensors also power deep learning and convolutional neural networks.
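As a quick illustration of that dimensional flexibility, here’s a minimal sketch (with illustrative variable names) showing tensors at three different dimensionalities:

import torch as pt

# Tensors generalize familiar mathematical objects to any number of dimensions.
scalar = pt.tensor(5)                 # 0 dimensions: a single value
vector = pt.tensor([1, 2, 3])         # 1 dimension: a list of values
matrix = pt.tensor([[1, 2], [3, 4]])  # 2 dimensions: rows and columns

print(scalar.dim(), vector.dim(), matrix.dim())  # 0 1 2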

This might still sound rather complex. But tensors in PyTorch are roughly analogous to Python’s standard collection datatypes. And if you’ve used NumPy’s ndarray then you can carry that experience directly over to PyTorch’s tensors, since NumPy’s multidimensional arrays and PyTorch’s tensors are functionally quite similar. But with that in mind, what about specific functionality that could let us create a tensor pre-populated with a specific value like zero?
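To see that family resemblance for yourself, here’s a small sketch (the variable names are just illustrative) that moves the same data between a NumPy ndarray and a PyTorch tensor:

import numpy as np
import torch as pt

# The same data as a NumPy ndarray and as a PyTorch tensor.
ourArray = np.array([[1, 2], [3, 4]])
ourTensor = pt.from_numpy(ourArray)  # shares memory with the ndarray

print(ourArray.shape)     # (2, 2)
print(ourTensor.shape)    # torch.Size([2, 2])
print(ourTensor.numpy())  # and back to a NumPy array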

PyTorch’s Tensors Come With Added Functionality

If PyTorch’s tensors operate in the same way as standard Python collections, you might wonder why you can’t simply populate one with a for loop. And, in fact, you could do so. However, this isn’t a very efficient way to go about populating a tensor. The main reason comes down to how Python’s scientific libraries achieve their speed and efficiency.

Higher-level and interpreted languages don’t usually operate at extremely high speeds. However, there are a number of tricks that can be used in Python to gain extra speed for specific functionality. And libraries like PyTorch put special emphasis on squeezing out as much performance as possible. As such, when you use a scientific library’s native functionality you’ll typically see more efficient processing than you would with Python’s standard methods. So while you can manipulate tensors with standard for loops, it’s best to use native PyTorch methods whenever possible.
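As a rough sketch of that difference, the following fills one tensor element by element with ordinary Python loops and another with a single native call. Both produce the same result, but the native call does its work in optimized compiled code:

import torch as pt

# Element-by-element fill with ordinary Python loops.
slowTensor = pt.empty(2, 3)
for i in range(slowTensor.shape[0]):
    for j in range(slowTensor.shape[1]):
        slowTensor[i][j] = 0

# The same result from a single native call.
fastTensor = pt.zeros(2, 3)

print(pt.equal(slowTensor, fastTensor))  # True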

On top of standard optimizations, using PyTorch’s built-in functions also presents you with the opportunity to speed things up even more. PyTorch gives you the option to use CUDA (Compute Unified Device Architecture) if your GPU supports it. For example, take a look at the following quick example of tensor creation on a CUDA device.

import torch as pt

ourDevice = pt.device('cuda')
ourTensor = pt.tensor([[1, 2], [3, 4], [5, 6]], device=ourDevice)
print(ourTensor)

We begin with a standard import of PyTorch as pt. We then initialize a device to use with PyTorch. The CPU is the default device, but in this case we’ll use something different to highlight how PyTorch can optimize tensor-related functionality.

If your system supports CUDA then the tensor will be created on your GPU rather than your CPU. This would be especially useful if you had a huge data collection and needed to perform many rapid updates to its values. However, GPU acceleration requires both a CUDA-capable GPU and a PyTorch build compiled with CUDA support. If you wanted to use the standard CPU instead, you’d just change line 3 to the following.

ourDevice = pt.device('cpu')

It’s also fairly easy to work this into user-specific configurations. And if you omit the device argument entirely, PyTorch falls back to the CPU. The main point is that you have a lot of additional options to tweak performance when you use PyTorch tensors with PyTorch functions. This wouldn’t be the case when using PyTorch tensors with standard Python functions. Keep this concept of optimization in mind as we continue working with tensors.
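One common pattern, sketched below with illustrative names, is to query CUDA availability at runtime, fall back to the CPU automatically, and move existing tensors between devices with the to method:

import torch as pt

# Use the GPU when CUDA is actually available; otherwise fall back to the CPU.
ourDevice = pt.device('cuda' if pt.cuda.is_available() else 'cpu')
ourTensor = pt.tensor([[1, 2], [3, 4], [5, 6]], device=ourDevice)
print(ourTensor.device)

# Existing tensors can be moved between devices with to().
cpuTensor = ourTensor.to(pt.device('cpu'))
print(cpuTensor.device)  # cpu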

Returning to the CUDA example: in line 4 we actually create our tensor. In this particular case, we populate it with specific data and specify a device for processing. Note that the device argument is optional. If it’s omitted, the tensor defaults to your CPU. Line 5 simply prints out the final tensor. But note how similar the printed tensor data looks to standard Python collections. With that in mind, we can move on to creating a tensor filled with zero values for all positions.

Implementing and Optimizing a Tensor

Now, we can take everything we’ve covered to this point and create a tensor filled with zero values. Try out the following code sample.

import torch as pt

ourTensor = pt.zeros((2, 3))
print(ourTensor)
print(type(ourTensor))

We begin by importing PyTorch again. But we change things around a little in line 3. Here we use a function in PyTorch called zeros. As the name suggests, it creates a tensor filled with zeros. We pass a tuple as an argument to specify the shape: two rows and three columns. We then print the contents of the new ourTensor variable and proceed to do the same for its type. Note that we now have a PyTorch tensor filled with zeros.
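The zeros function also accepts a few variations worth knowing about. Here’s a short sketch of some of them (the variable names are just illustrative):

import torch as pt

# The shape can be passed as a tuple or as separate integer arguments.
a = pt.zeros((2, 3))
b = pt.zeros(2, 3)
print(pt.equal(a, b))  # True

# Optional arguments control the element type (float32 by default).
c = pt.zeros(2, 3, dtype=pt.int64)
print(c.dtype)  # torch.int64

# zeros_like builds a zero tensor matching another tensor's shape and dtype.
d = pt.zeros_like(c)
print(d.shape)  # torch.Size([2, 3])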

However, remember the earlier note about optimization. A tensor predominantly filled with zeros is known as a sparse tensor. And a zero-filled layout opens up additional room for memory optimization. This is somewhat similar to the way a file filled with zeroed-out data compresses extremely well. For example, an empty virtualized hard drive might take up 100 GB of uncompressed space, but efficient compression could reduce it to mere KBs. Likewise, a tensor that consists mostly of zeros opens up a lot of possibilities for heavily optimized code. This once again highlights why it’s so important to use a library’s native functions whenever possible. Doing so opens up a lot of extra room for optimization.
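If you want to see that optimization in action, PyTorch ships a sparse COO layout that stores only the non-zero entries. Here’s a quick sketch using to_sparse() on a mostly-zero tensor:

import torch as pt

# A large dense tensor that is almost entirely zeros.
dense = pt.zeros(1000, 1000)
dense[0][0] = 1.0

# The sparse COO layout stores only the non-zero indices and values
# rather than a million mostly-zero elements.
sparse = dense.to_sparse()
print(sparse)  # reports nnz=1: a single stored index/value pair

Under the hood this stores just one index/value pair instead of a million floats, which is exactly the kind of optimization that native tensor functions make possible.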
