I'm looking for an elegant way to compute the "product" of a discrete convolution, instead of the sum.
Here is the formula of a discrete convolution:

$$(x * y)[n] = \sum_{k} x[k]\, y[n-k]$$
In this case we can use: conv(x,y)
Now I would like to implement the same kind of operation, but with a product instead of a sum.

Of course I can use a loop, but I'm looking for a trick to vectorize this operation.
EXAMPLE:
f = [2 4 3 9 7 1]
g = [3 2 1]
dist = length(g)-1;
x = zeros(1, length(f)-dist);        % preallocate the output
for ii = 1:length(f)-dist
    x(ii) = prod(f(ii:ii+dist).*g);  % product over each sliding window of f, times g
end
x =
144 648 1134 378
cumprod solution: (very efficient)

>> pf = cumprod(f);
>> x = prod(g).*pf(numel(g):end)./[1 pf(1:(end-numel(g)))]
x =
144 648 1134 378
This first takes the cumulative product of f using cumprod. Dividing each element by the cumulative product numel(g) elements before it (3 in this example) gives the product of each numel(g)-wide sliding window along f. Then just multiply by the product of the elements of g.
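As a concrete illustration with the example vectors from the question (the intermediate variable w is introduced here only for exposition; the expected values are shown in comments):

f = [2 4 3 9 7 1];
g = [3 2 1];
pf = cumprod(f);                                  % pf = [2 8 24 216 1512 1512]
w  = pf(numel(g):end)./[1 pf(1:(end-numel(g)))];  % w  = [24 108 189 63], the 3-wide window products of f
x  = prod(g).*w                                   % x  = [144 648 1134 378]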
NOTE: When f has many elements, or extreme values (large or small), you could run into issues with accuracy or underflow/overflow when performing the cumulative product. One potential way to mitigate this would be to apply a scaling to f before the cumulative product, then undo it afterward:
c = ...set a scaling factor...
pf = cumprod(f./c);
x = prod(c.*g).*pf(numel(g):end)./[1 pf(1:(end-numel(g)))];
The choice for c could be something like mean(abs(f)) or max(abs(f)) so that the scaled f gives results that are better bounded (i.e. values closer to 1). This doesn't appreciably change the timing results below.
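For example, a sketch of one possible choice, applied to the small example above (the specific factor max(abs(f)) is just an assumed illustration):

c  = max(abs(f));                                            % here c = 9 for the example f
pf = cumprod(f./c);                                          % scaled cumulative product stays close to 1
x  = prod(c.*g).*pf(numel(g):end)./[1 pf(1:(end-numel(g)))]
% x = [144 648 1134 378], same result, but with better-bounded intermediate values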
hankel solution: (not as efficient but still interesting)

>> x = prod(g).*prod(hankel(f(1:numel(g)), f(numel(g):end)))
x =
144 648 1134 378
The call to hankel creates a matrix where each column has the contents of one of the numel(g)-wide sliding windows in it. Taking the product down each column then multiplying by the product of the elements of g gives your answer. However, for large vectors f and/or g this could involve a lot of extra computation and use a lot of memory.
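To see that structure, this is what the Hankel matrix looks like for the example vectors (the variable H is introduced here only for illustration):

H = hankel(f(1:numel(g)), f(numel(g):end))
% H =
%      2     4     3     9
%      4     3     9     7
%      3     9     7     1
% each column holds one numel(g)-wide sliding window of f
x = prod(g).*prod(H)
% x = [144 648 1134 378]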
I tested 6 solutions (the loop in your question, 2 solutions from rahnema1 (conv(log) and movsum(log)), the bsxfun solution from Luis Mendo, and my cumprod and hankel solutions) using f = rand(1,1000000); and g = rand(1,100); and averaging over 40 iterations. Here's what I got (running Windows 7 x64, 16 GB RAM, MATLAB R2016b); a sketch of a comparable timing harness follows the table:
solution | avg. time (s)
------------+---------------
loop | 1.10671
conv(log) | 0.04984
movsum(log) | 0.03736
bsxfun | 1.20472
cumprod | 0.01469
hankel | 1.17704
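For reference, a minimal sketch of the kind of timing harness that could produce such averages (this is an assumption about the test setup, not the original script; shown for the cumprod solution only):

% assumed timing harness: average tic/toc over 40 iterations
f = rand(1,1000000);
g = rand(1,100);
nIter = 40;
t = 0;
for k = 1:nIter
    tic;
    pf = cumprod(f);
    x = prod(g).*pf(numel(g):end)./[1 pf(1:(end-numel(g)))];
    t = t + toc;
end
fprintf('cumprod     | %.5f\n', t/nIter);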
Another solution, partly inspired by Dev-iL's answer to a related question:
exp(sum(log(g))+conv(log(f),[1 1 1],'valid'))
or
exp(sum(log(g))+movsum(log(f),numel(g),'Endpoints', 'discard'))
since exp(sum(log(x))) = prod(x).
But here, instead of one vector, we have two vectors, f and g.
The desired formula can be reformulated as:

$$x[n] = \exp\!\Big(\sum_{k=1}^{m}\log g[k] + \sum_{k=1}^{m}\log f[n+k-1]\Big), \qquad m = \text{length}(g)$$
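As a quick sanity check with the vectors from the question (an illustrative verification, not part of the original answer; note that the log requires positive elements):

f = [2 4 3 9 7 1];
g = [3 2 1];
exp(sum(log(g)) + conv(log(f), ones(1,numel(g)), 'valid'))
% ans = [144 648 1134 378]   (up to floating-point round-off)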
Timing in Octave:
f= rand(1,1000000);
g= rand(1,100);
disp('----------EXP(LOG)------')
tic
exp(sum(log(g))+conv(log(f),ones(1,numel(g)),'valid'));
toc
disp('----------BSXFUN------')
tic
ind = bsxfun(@plus, 0:numel(f)-numel(g), (1:numel(g)).');
x = prod(bsxfun(@times, f(ind), g(:)),1);
toc
disp('----------HANKEL------')
tic
prod(g)*prod(hankel(f(1:numel(g)), f(numel(g):end)));
toc
disp('----------CUMPROD-----')
tic
pf = cumprod(f);
x = prod(g)*pf(numel(g):end)./[1 pf(1:(end-numel(g)))];
toc
Result:
----------EXP(LOG)------%rahnema1
Elapsed time is 0.211445 seconds.
----------BSXFUN--------%Luis Mendo
Elapsed time is 1.94182 seconds.
----------HANKEL--------%gnovice
Elapsed time is 1.46593 seconds.
----------CUMPROD-------%gnovice
Elapsed time is 0.00748992 seconds.