I'm trying to compute the stream function of a 2D flow given the x- and y-velocity components. I'm using this definition of the stream function psi:

u = dpsi/dy
v = -dpsi/dx
And I tried the method suggested here, which basically integrates the v-component along one row, integrates the u-component at every point, and combines the two (if I understood correctly).
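Written out (my own restatement of that method), the two estimates I compute below are:

psi1(x, y) = int_{y0 -> y} u(x, y') dy'  -  int_{x0 -> x} v(x', y0) dx'
psi2(x, y) = int_{y0 -> y} u(x0, y') dy' -  int_{x0 -> x} v(x', y) dx'
psi = (psi1 + psi2) / 2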
Here is my code:
from scipy import integrate
import numpy
# make some data
y=numpy.linspace(0,10,40)
x=numpy.linspace(0,10,50)
X,Y=numpy.meshgrid(x,y)
# a velocity field that is non-divergent
u=3*Y**2-3*X**2
v=6*X*Y
# integrate: first estimate uses v integrated along x in the first row (y=y0)
# and u integrated along y at every x
intx=integrate.cumtrapz(v,X,axis=1,initial=0)[0]
inty=integrate.cumtrapz(u,Y,axis=0,initial=0)
psi1=-intx+inty
# second estimate: v integrated along x at every y, u integrated along y in the first column (x=x0)
intx2=integrate.cumtrapz(v,X,axis=1,initial=0)
inty2=integrate.cumtrapz(u,Y,axis=0,initial=0)[:,0][:,None]
psi2=-intx2+inty2
# average the two estimates
psi=(psi1+psi2)/2.
# recompute u and v from psi (u = dpsi/dy, v = -dpsi/dx) to verify
u2=numpy.gradient(psi,axis=0)
v2=-numpy.gradient(psi,axis=1)
dx=numpy.gradient(X,axis=1)
dy=numpy.gradient(Y,axis=0)
u2=u2/dy
v2=v2/dx
My problem is that the recomputed v2 and v are quite close, but u2 and u always have a slight offset (0.09861933 in this setup). Is this error related to the way the integration is computed? What's the recommended way of computing the stream function from the x- and y-velocity components?
Answering myself:
I guess this can get more complicated than this, but here is an attempt that solves for a streamfunction plus a velocity potential, aiming to minimize the difference between the input u, v and the reconstructed ones.
(Uploaded in a rush; I'll come back and tidy up the formulas.)
'''Solve for the streamfunction and velocity potential from given u and v.

u and v are given on an evenly spaced grid (n x m).

The streamfunction (psi) and velocity potential (chi) are defined on a dual
grid ((n+1) x (m+1)): psi and chi sit on the 4 corners of each u/v cell.

Define:

    u = u_chi + u_psi
    v = v_chi + v_psi

    u_psi = -dpsi/dy
    v_psi =  dpsi/dx
    u_chi =  dchi/dx
    v_chi =  dchi/dy

Define two 2x2 kernels:

    k_x = |-0.5  0.5|
          |-0.5  0.5| / dx

    k_y = |-0.5 -0.5|
          | 0.5  0.5| / dy

Then, with \bigotimes denoting cross-correlation:

    u_chi = chi \bigotimes k_x
    v_chi = chi \bigotimes k_y
    u_psi = psi \bigotimes -k_y
    v_psi = psi \bigotimes k_x

Define the cost function

    J = mean( (uhat - u)**2 + (vhat - v)**2 )

where uhat, vhat are the reconstructed winds. By the chain rule, the gradients
with respect to chi and psi are

    dJ/dchi = (2/N) * [ (uhat - u) \bigotimes Rot180(k_x)  + (vhat - v) \bigotimes Rot180(k_y) ]
    dJ/dpsi = (2/N) * [ (uhat - u) \bigotimes Rot180(-k_y) + (vhat - v) \bigotimes Rot180(k_x) ]

where N = u.size and Rot180() rotates a kernel by 180 degrees. For these
kernels Rot180(k_x) = -k_x and Rot180(k_y) = -k_y, and cross-correlating with
Rot180(k) is the same as convolving with k, which is what fftconvolve does in
jac() below.

Optionally add a regularization term:

    J = mean( (uhat - u)**2 + (vhat - v)**2 ) + lam * mean( psi**2, chi**2 )

(the mean in the last term is taken over all psi and chi values).
'''
from scipy import integrate
import numpy
from scipy import optimize
from scipy.signal import fftconvolve
def uRecon(sf,vp,kernel_x,kernel_y):
    '''Reconstruct u from the streamfunction (sf) and velocity potential (vp).'''
    uchi=fftconvolve(vp,-kernel_x,mode='valid')
    upsi=fftconvolve(sf,kernel_y,mode='valid')
    return upsi+uchi

def vRecon(sf,vp,kernel_x,kernel_y):
    '''Reconstruct v from the streamfunction (sf) and velocity potential (vp).'''
    vchi=fftconvolve(vp,-kernel_y,mode='valid')
    vpsi=fftconvolve(sf,-kernel_x,mode='valid')
    return vpsi+vchi

def costFunc(params,u,v,kernel_x,kernel_y,pad_shape,lam):
    '''Mean squared misfit between reconstructed and input winds, plus regularization.'''
    pp=params.reshape(pad_shape)
    sf=pp[0]
    vp=pp[1]
    uhat=uRecon(sf,vp,kernel_x,kernel_y)
    vhat=vRecon(sf,vp,kernel_x,kernel_y)
    j=(uhat-u)**2+(vhat-v)**2
    j=j.mean()
    j+=lam*numpy.mean(params**2)
    return j

def jac(params,u,v,kernel_x,kernel_y,pad_shape,lam):
    '''Analytical gradient of costFunc with respect to psi and chi.'''
    pp=params.reshape(pad_shape)
    sf=pp[0]
    vp=pp[1]
    uhat=uRecon(sf,vp,kernel_x,kernel_y)
    vhat=vRecon(sf,vp,kernel_x,kernel_y)
    du=uhat-u
    dv=vhat-v
    # back-propagate the residuals: a 'full' convolution with k equals a
    # cross-correlation with Rot180(k)
    dvp_u=fftconvolve(du,kernel_x,mode='full')
    dvp_v=fftconvolve(dv,kernel_y,mode='full')
    dsf_u=fftconvolve(du,-kernel_y,mode='full')
    dsf_v=fftconvolve(dv,kernel_x,mode='full')
    dsf=dsf_u+dsf_v
    dvp=dvp_u+dvp_v
    re=numpy.vstack([dsf[None,:,:],dvp[None,:,:]])
    re=re.reshape(params.shape)
    # scale to match the means used in costFunc
    re=2*re/u.size+2*lam*params/params.size
    return re
# make some data
y=numpy.linspace(0,10,40)
x=numpy.linspace(0,10,50)
X,Y=numpy.meshgrid(x,y)
dx=x[1]-x[0]
dy=y[1]-y[0]
# create convolution kernel
kernel_x=numpy.array([[-0.5, 0.5],[-0.5, 0.5]])/dx
kernel_y=numpy.array([[-0.5, -0.5],[0.5, 0.5]])/dy
# make a velocity field
u=3*Y**2-3*X**2+Y
v=6*X*Y+X
# integrate to make an initial guess
# psi: two estimates from different integration paths, then averaged
intx=integrate.cumtrapz(v,X,axis=1,initial=0)[0]
inty=integrate.cumtrapz(u,Y,axis=0,initial=0)
psi1=intx-inty
intx=integrate.cumtrapz(v,X,axis=1,initial=0)
inty=integrate.cumtrapz(u,Y,axis=0,initial=0)[:,0][:,None]
psi2=intx-inty
psi=0.5*(psi1+psi2)
# chi: same idea, using u = dchi/dx and v = dchi/dy
intx=integrate.cumtrapz(u,X,axis=1,initial=0)[0]
inty=integrate.cumtrapz(v,Y,axis=0,initial=0)
chi1=intx+inty
intx=integrate.cumtrapz(u,X,axis=1,initial=0)
inty=integrate.cumtrapz(v,Y,axis=0,initial=0)[:,0][:,None]
chi2=intx+inty
chi=0.5*(chi1+chi2)
# pad psi and chi by 1 row/column to the dual-grid size ((n+1) x (m+1))
sf=numpy.pad(psi,(1,0),'edge')
vp=numpy.pad(chi,(1,0),'edge')
params=numpy.vstack([sf[None,:,:], vp[None,:,:]])
# optimize
pad_shape=params.shape
lam=0.001  # regularization parameter
# pass a flat initial guess; costFunc/jac reshape it internally via pad_shape
opt=optimize.minimize(costFunc,params.ravel(),
                      args=(u,v,kernel_x,kernel_y,pad_shape,lam),
                      method='Newton-CG',
                      jac=jac)
# unpack the optimized streamfunction and velocity potential
params=opt.x.reshape(pad_shape)
sf=params[0]
vp=params[1]
uhat=uRecon(sf,vp,kernel_x,kernel_y)
vhat=vRecon(sf,vp,kernel_x,kernel_y)
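As a quick sanity check (this snippet is my addition, not part of the original recipe), one can compare the reconstructed winds with the input and verify the analytical gradient against finite differences on a small random problem:

print('u misfit:',numpy.abs(uhat-u).max())
print('v misfit:',numpy.abs(vhat-v).max())
# gradient check on a small random problem (sizes here are arbitrary)
nt,mt=5,6
ut=numpy.random.rand(nt,mt)
vt=numpy.random.rand(nt,mt)
p0=numpy.random.rand(2,nt+1,mt+1).ravel()
print('gradient check:',
      optimize.check_grad(costFunc,jac,p0,ut,vt,kernel_x,kernel_y,(2,nt+1,mt+1),lam))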
Some additional notes:
As the convolution uses a tiny (2x2) kernel, a special form of convolution may be faster than fftconvolve; see a comparison here, and here. A sliced version is sketched below.
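For instance, here is a minimal sliced version of uRecon (uRecon_direct is my own name, not part of the script above); it should agree with uRecon up to floating-point error:

def uRecon_direct(sf,vp,dx,dy):
    # cross-correlation with k_x: average of the two row-wise forward differences of chi
    uchi=0.5*((vp[:-1,1:]-vp[:-1,:-1])+(vp[1:,1:]-vp[1:,:-1]))/dx
    # cross-correlation with -k_y: minus the average of the two column-wise forward differences of psi
    upsi=-0.5*((sf[1:,:-1]-sf[:-1,:-1])+(sf[1:,1:]-sf[:-1,1:]))/dy
    return uchi+upsi

# e.g. numpy.allclose(uRecon_direct(sf,vp,dx,dy),uRecon(sf,vp,kernel_x,kernel_y)) should be True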
When the grid is uneven (e.g. wind data on a Gaussian grid), you will have to deal with non-uniform grid spacing; a rough sketch of the needed change follows below. I've come up with a script that does the computation for wind data in netcdf format (via the CDAT module), see here. Feedback is welcome.
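As a sketch only (xd and yd are hypothetical names for the coordinates of the dual-grid psi/chi points, which the script above never defines explicitly), the scalar dx and dy in the sliced version would become local spacing arrays:

# hypothetical non-uniform dual-grid coordinates and a stand-in velocity potential
n,m=40,50
yd=numpy.cumsum(0.5+numpy.random.rand(n+1))
xd=numpy.cumsum(0.5+numpy.random.rand(m+1))
vp=numpy.random.rand(n+1,m+1)
dx_local=numpy.diff(xd)[None,:]   # per-column spacing, broadcasts over rows
dy_local=numpy.diff(yd)[:,None]   # per-row spacing, broadcasts over columns
uchi=0.5*((vp[:-1,1:]-vp[:-1,:-1])+(vp[1:,1:]-vp[1:,:-1]))/dx_local
vchi=0.5*((vp[1:,:-1]-vp[:-1,:-1])+(vp[1:,1:]-vp[:-1,1:]))/dy_local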