If I take a 25 MB / 190,000-line text file and dump it into a Text widget, the insert finishes quickly, but python.exe keeps using 50% CPU for another minute or so afterwards. The bigger the file, the longer it takes for the CPU to drop back to 0%. When I load multiple files into different Text widgets, they appear in the widgets instantly, but the CPU stays at 50% and the GUI runs horribly slowly until it finishes whatever it's doing in the backend. Can someone explain why the CPU is still being used, and why it's impacting performance, if the text is already in the widget? What is it needing to do? Is there any way around this?
from tkinter import *

file = r"C:\path\to\large\file.txt"

def doit():
    # Read the file and insert its entire contents at the end of the widget.
    # (The with block closes the file automatically.)
    with open(file, 'r') as f:
        txt.insert('end', ''.join(f))

main = Tk()
txt = Text(main)
txt.grid(row=0)
btn = Button(main, text="click here", command=doit)
btn.grid(row=1, columnspan=2)
main.mainloop()
I thought maybe it was because I'm handling the file line by line instead of loading the entire file into RAM, but I tried readlines() and got the same results.
Most likely it is calculating where the line breaks go for the lines past the area currently visible on the screen. From the numbers you gave, the lines average over 130 characters; if some of them are considerably longer than that, this is a situation in which Tkinter is known to be slow.
You could perhaps turn off word-wrap (by configuring the Text with wrap=NONE), which would probably require adding a horizontal scroll bar. If that is unacceptable, it's possible that inserting newlines of your own could help, if there is some natural point in your data at which to add them.
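As a rough illustration, here is a minimal sketch of the wrap=NONE approach with a horizontal scrollbar. The widget names mirror the question's code (hbar is a new name introduced here for the scrollbar); only the widget setup changes:

from tkinter import *

main = Tk()

# With wrap=NONE, Tkinter never has to compute wrap points for
# off-screen text, which is the work suspected of pegging the CPU.
txt = Text(main, wrap=NONE)
txt.grid(row=0, column=0, sticky='nsew')

# A horizontal scrollbar replaces wrapping as the way to reach the
# tail ends of long lines.
hbar = Scrollbar(main, orient=HORIZONTAL, command=txt.xview)
hbar.grid(row=1, column=0, sticky='ew')
txt.configure(xscrollcommand=hbar.set)

main.mainloop()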
Note that ''.join(f) is a rather inefficient way to read the entire file into a single string; just use f.read() for that.
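For instance, the doit() function from the question with that change applied would read:

def doit():
    # f.read() pulls the whole file into one string in a single call,
    # instead of iterating over the file's lines and joining them.
    with open(file, 'r') as f:
        txt.insert('end', f.read())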