import threading

import numpy as np
import rpy2.robjects as ro

np.random.seed(1)

filename = 'your/filename'
path = 'specify/path'
output_location = 'specify/output/location'

def function1(arg1, arg2):
    r = ro.r
    r.source(path + "analysis.R")  # load the R script
    p = r.analysis(arg1, arg2)     # call the R function defined in it
    return p

threads = []
for i in range(100):
    # arg1 and arg2 must be defined before this loop
    t1 = threading.Thread(target=function1,
                          name='thread{}'.format(i),
                          args=(arg1, arg2))
    t1.start()
    threads.append(t1)
    print('{} has started\n'.format(t1.name))

for t in threads:  # wait for all threads to finish
    t.join()
rpy2 is used to invoke my R code from Python. The script above uses 100 threads. Does this multithreaded process use the GPU? I think it currently runs on the CPU; that can be seen in the system monitor. But if I use 1000 threads, does it run on the CPU or the GPU?
Do all multithreaded programs run on the GPU?
Of course not. Some computers don't have any GPU (think of a Linux server inside a datacenter running some websites, or of some laptops). And even on machines that have a GPU, a multithreaded program won't use it by magic, unless that program was specifically coded for that GPU. For example, many web server and database server programs are multithreaded but don't use the GPU (and are incapable of using it).
Concretely, a GPU runs specialized code, which is not the same as the machine code running on the CPU: the instruction sets are different. Practically speaking, you need code written in OpenCL, CUDA, or SPIR to run on the GPU. The programming model is also different, so writing an OpenCL or CUDA kernel is difficult, and redesigning software to take advantage of GPUs is not always possible and can take months or even years of development work. Only a few kinds of problems (essentially, vector problems) and programs can benefit from the GPU, and you'll spend a lot of effort rewriting them for it. OpenCL or CUDA code is also not very portable: you'll need to rewrite parts of it, or tune it differently, when moving your application from one GPU to another model of GPU.
But a C program using pthreads (on Linux) can, and usually does, run on several cores (be aware of processor affinity and of NUMA). All the cores in a typical microprocessor share the same instruction set architecture and can run the same machine code. On Linux, the lowest-level user-space system call to create a thread is clone(2) (in practice it is used directly only inside the pthreads(7) implementation). In practice, you're better off with only a few runnable threads per process (perhaps a dozen); a thread is quite "heavy".
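The practical Python analogue of this advice is to cap the number of worker threads with a pool rather than spawning one thread per task, as the question's loop does. A minimal sketch; the `analysis` function here is a stand-in for the real work (e.g. the rpy2 call), not the actual R function:

```python
from concurrent.futures import ThreadPoolExecutor
import os

def analysis(x):
    # Stand-in for the real per-task work (e.g. calling into R via rpy2).
    return x * x

# A small, fixed pool (roughly one worker per core) instead of 100 threads;
# the pool queues the 100 tasks and feeds them to the workers.
with ThreadPoolExecutor(max_workers=os.cpu_count() or 4) as pool:
    results = list(pool.map(analysis, range(100)))

print(results[:5])
```

The pool bounds the number of live threads regardless of how many tasks are submitted, which is usually what you want instead of 100 (or 1000) raw `threading.Thread` objects.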
Of course, multithreading is not specific to Python. Note, however, that CPython has a GIL (global interpreter lock): at most one thread executes Python bytecode at a time, so threads only help when they spend most of their time blocked in I/O or in C extensions that release the GIL.
Parallel computing is much harder than you think! Parallelizing a program is never magical and requires a lot of hard work and skill. NumPy tries to hide and abstract some of that complexity for you (for example by calling optimized, compiled routines internally), but NumPy itself runs on the CPU; it does not run your code on the GPU.
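What NumPy does give you, without GPUs or explicit threads, is vectorization: one array operation executed in compiled code replaces an interpreted Python loop. A small sketch:

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# One vectorized expression; the element-wise work runs in compiled code,
# not in the Python bytecode interpreter.
vectorized = np.sqrt(x) + 1.0

# The equivalent (much slower) pure-Python loop, shown for the first few elements.
looped = [xi ** 0.5 + 1.0 for xi in x[:5]]

print(vectorized[:5], looped)
```

This is the kind of speedup NumPy provides on the CPU; moving the same computation to a GPU would require a separate, GPU-specific library and code path.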