I know that, in the 1D case, the convolution between two vectors a and b can be computed as conv(a, b), but also as the product T_a * b, where T_a is the Toeplitz matrix corresponding to a.
Is it possible to extend this idea to 2D?
Given a = [5 1 3; 1 1 2; 2 1 3] and b = [4 3; 1 2], is it possible to convert a into a Toeplitz matrix and compute the matrix-matrix product between T_a and b, as in the 1D case?
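For reference, here is a minimal sketch of that 1D identity (the example values and the use of scipy.linalg.toeplitz are my own choices):

```python
import numpy as np
from scipy.linalg import toeplitz

# 1D case: conv(a, b) computed as the product T_a @ b.
a = np.array([5., 1., 3.])   # example signal
b = np.array([4., 3.])       # example kernel

n, m = len(a), len(b)
# T_a is (n+m-1) x m: first column is a padded with m-1 zeros,
# first row is a[0] followed by m-1 zeros.
T_a = toeplitz(np.r_[a, np.zeros(m - 1)], np.r_[a[0], np.zeros(m - 1)])

assert np.allclose(T_a @ b, np.convolve(a, b))   # full 1D convolution
```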
In the frequency domain, you simply multiply the transformed signal X (which is a matrix) element-wise with the transformed kernel Y (which is also a matrix). So yes, convolution is the same as a multiplication of the two signal matrices, but ONLY IN THE FREQUENCY DOMAIN, and there it is an element-wise product rather than a matrix product. I hope it helps.
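A small sketch of that statement in code, assuming the question's matrices and NumPy's FFT routines: pad both arrays to the full output size, transform, multiply element-wise, and transform back.

```python
import numpy as np
from scipy.signal import convolve2d

x = np.array([[5., 1., 3.], [1., 1., 2.], [2., 1., 3.]])   # input from the question
k = np.array([[4., 3.], [1., 2.]])                          # kernel from the question

# Pad both to the full output size, transform, multiply element-wise, invert.
out_shape = (x.shape[0] + k.shape[0] - 1, x.shape[1] + k.shape[1] - 1)
y = np.real(np.fft.ifft2(np.fft.fft2(x, out_shape) * np.fft.fft2(k, out_shape)))

assert np.allclose(y, convolve2d(x, k, mode='full'))
```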
Rather than sliding the kernel over the input (e.g. using loops), we can perform the convolution operation "in one step" using a matrix-vector multiplication, where the matrix is a circulant matrix containing shifted versions of the kernel (as rows or columns) and the vector is the flattened input.
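A small 1D sketch of that circulant idea (my own example, using scipy.linalg.circulant; for a circular convolution, each column of the circulant matrix is a cyclic shift of the kernel):

```python
import numpy as np
from scipy.linalg import circulant

x = np.array([5., 1., 3., 2.])     # example signal
k = np.array([4., 3., 0., 0.])     # kernel zero-padded to the signal length

C = circulant(k)                   # columns are cyclic shifts of k
y = C @ x                          # circular convolution of k and x

# Same result via the convolution theorem for circular convolution:
assert np.allclose(y, np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x))))
```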
Yes, it is possible, and you should also use a doubly block circulant matrix (which is a special case of a Toeplitz matrix). I will give you an example with a small kernel and input, but it is possible to construct a Toeplitz matrix for any kernel. So you have a 2D input x and a 2D kernel k, and you want to calculate the convolution x * k. Also, let's assume that k is already flipped, and that x is of size n×n and k is of size m×m.
So you unroll k into a sparse matrix of size (n-m+1)^2 × n^2, and unroll x into a long vector of size n^2 × 1. You multiply this sparse matrix by the vector and reshape the resulting vector (which has size (n-m+1)^2 × 1) into an (n-m+1) × (n-m+1) matrix.
I am pretty sure this is hard to understand just from reading, so here is an example for a 2×2 kernel and a 3×3 input.
Here is the constructed matrix multiplied by the unrolled input vector, written with generic entries (k11 … k22 for the kernel, x11 … x33 for the row-unrolled input):

    [ k11 k12 0   k21 k22 0   0   0   0   ]   [ x11 ]
    [ 0   k11 k12 0   k21 k22 0   0   0   ]   [ x12 ]
    [ 0   0   0   k11 k12 0   k21 k22 0   ] * [ x13 ]
    [ 0   0   0   0   k11 k12 0   k21 k22 ]   [ x21 ]
                                               [ x22 ]
                                               [ x23 ]
                                               [ x31 ]
                                               [ x32 ]
                                               [ x33 ]

which is equal to

    [ y11 ]
    [ y12 ]
    [ y21 ]
    [ y22 ]

where, for example, y11 = k11*x11 + k12*x12 + k21*x21 + k22*x22. Reshaping this 4×1 vector gives the 2×2 output [y11 y12; y21 y22].
And this is the same result you would have got by doing a sliding window of k over x.
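A short sketch of this construction in code, using the question's matrices (my own implementation of the idea above; the kernel is treated as already flipped, as assumed in the text, so the product matches a sliding window without flipping):

```python
import numpy as np
from scipy.signal import correlate2d

def conv2d_as_matrix_product(x, k):
    """Build the (n-m+1)^2 x n^2 matrix from k and multiply it by the
    row-unrolled x. k is assumed to be already flipped, as in the text."""
    n, m = x.shape[0], k.shape[0]
    out = n - m + 1
    M = np.zeros((out * out, n * n))
    for i in range(out):               # output row
        for j in range(out):           # output column
            for p in range(m):         # kernel row
                for q in range(m):     # kernel column
                    M[i * out + j, (i + p) * n + (j + q)] = k[p, q]
    return (M @ x.reshape(-1)).reshape(out, out)

x = np.array([[5., 1., 3.], [1., 1., 2.], [2., 1., 3.]])
k = np.array([[4., 3.], [1., 2.]])

# Because k is treated as already flipped, this equals sliding k over x
# without flipping, i.e. cross-correlation in 'valid' mode:
assert np.allclose(conv2d_as_matrix_product(x, k),
                   correlate2d(x, k, mode='valid'))
```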
Let I be the input signal and F be the filter or kernel.

If I is m1 × n1 and F is m2 × n2, the size of the (full) convolution output will be (m1 + m2 - 1) × (n1 + n2 - 1).

Zero pad the filter to make it the same size as the output.

For each row of the zero-padded filter, build a small Toeplitz matrix.

Now all these small Toeplitz matrices should be arranged in a big doubly blocked Toeplitz matrix.

Unroll the input matrix into a column vector and multiply it by the doubly blocked Toeplitz matrix.

This multiplication gives the convolution result.
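Here is a rough sketch of this procedure (my own implementation, not the code from the repository; the zero padding of each filter row is folded into the small Toeplitz matrices built with scipy.linalg.toeplitz, and both input and output are unrolled row by row, which may differ from the repository's ordering):

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import convolve2d

def conv2d_doubly_blocked_toeplitz(I, F):
    """Full 2D convolution of I (m1 x n1) with F (m2 x n2) as a single
    matrix-vector product with a doubly blocked Toeplitz matrix."""
    m1, n1 = I.shape
    m2, n2 = F.shape
    out_rows, out_cols = m1 + m2 - 1, n1 + n2 - 1

    # One small Toeplitz matrix per filter row: 1D full convolution of that
    # row with a length-n1 signal, of size (n1+n2-1) x n1.
    H = [toeplitz(np.r_[F[p], np.zeros(n1 - 1)],
                  np.r_[F[p, 0], np.zeros(n1 - 1)]) for p in range(m2)]

    # Arrange them into the doubly blocked Toeplitz matrix:
    # block (r, i) is H[r - i] when 0 <= r - i < m2, and zero otherwise.
    T = np.zeros((out_rows * out_cols, m1 * n1))
    for r in range(out_rows):
        for i in range(m1):
            if 0 <= r - i < m2:
                T[r * out_cols:(r + 1) * out_cols, i * n1:(i + 1) * n1] = H[r - i]

    # Multiply by the row-unrolled input and reshape to the output size.
    return (T @ I.reshape(-1)).reshape(out_rows, out_cols)

I = np.array([[5., 1., 3.], [1., 1., 2.], [2., 1., 3.]])
F = np.array([[4., 3.], [1., 2.]])
assert np.allclose(conv2d_doubly_blocked_toeplitz(I, F),
                   convolve2d(I, F, mode='full'))
```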

For more details and Python code, take a look at my GitHub repository:
Step by step explanation of 2D convolution implemented as matrix multiplication using toeplitz matrices in python