I was watching lecture 2 of the Stanford machine learning lecture series taught by Professor Andrew Ng, and I have a question about something that is probably quite rudimentary but just isn't clicking in my head. Consider two vectors θ and x, where both vectors contain real numbers.
Let h(x) be a function (in this specific case called the hypothesis) defined by:
h(x) = Σᵢ₌₀ⁿ θᵢxᵢ = θᵀx
I don't understand the last part, where he says h(x) is also equal to θᵀx.
If someone could clarify this concept for me I would be very grateful.
It's just basic linear algebra; it follows from the definition of matrix-vector multiplication.
If θ and x are both (n+1)×1 matrices (i.e. column vectors), then θᵀ is a 1×(n+1) matrix, and the product θᵀx is a 1×1 matrix, i.e. a scalar. By the definition of matrix multiplication, that single entry is θ₀x₀ + θ₁x₁ + ... + θₙxₙ = Σᵢ₌₀ⁿ θᵢxᵢ, which is exactly the summation in your formula. So the two expressions are just two ways of writing the same number.
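You can verify this numerically. Here is a small NumPy sketch (the vectors are made-up example values, not anything from the lecture) showing that the explicit summation and the inner product θᵀx give the same scalar:

```python
import numpy as np

# Example vectors theta and x with n+1 = 4 entries (arbitrary values)
theta = np.array([0.5, 1.0, -2.0, 3.0])
x = np.array([1.0, 2.0, 0.5, -1.0])

# h(x) written as the explicit summation: sum over i of theta_i * x_i
h_sum = sum(theta[i] * x[i] for i in range(len(theta)))

# h(x) written as the inner product theta^T x
h_dot = theta @ x  # same as np.dot(theta, x)

print(h_sum, h_dot)  # both print the same scalar
```

With 1-D NumPy arrays, `theta @ x` computes the inner product directly, so no explicit transpose is needed; if you used (n+1)×1 column matrices instead, you would write `theta.T @ x` and get a 1×1 matrix holding the same value.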