My cameras ‘output’ footage in H.264 at a high bitrate. I regularly transcode this to insanely high-bitrate MPEG-2. For this I used ffmpeg, which on my i5-8500 was capable of transcoding at approx 190 fps (at near-100% CPU usage, of course…)
I use an nvidia card in my desktop, so the 8500’s built-in GPU had been disabled until now.
I switched it on just to see if I could use it for transcoding. Shame on me for not trying it sooner…
The QSV transcode sometimes hits 400 fps. Its speed varies (disk read/write speed, I guess?), but it stays above 365 fps and seems to average around 380 fps.
So the magic: ffmpeg -hwaccel qsv -c:v h264_qsv -i input.file.MTS -b:v 50000K -minrate 50000K -maxrate 50000K -c:v mpeg2_qsv -c:a pcm_s16le -ar 48000 -y outputfile.mov
That results in an MPEG-2-encoded 50 Mbps .mov file from the H.264-encoded full-HD .MTS input. I consider this speed quite acceptable.
Of course I could use any QSV-supported input/output codec; the list can be queried:
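One way to query it (a sketch, not from the original post; assumes an ffmpeg build compiled with QSV support):

```shell
# List the QSV-capable encoders and decoders this ffmpeg build knows about.
# -hide_banner just suppresses the version/configuration header.
ffmpeg -hide_banner -encoders | grep qsv
ffmpeg -hide_banner -decoders | grep qsv
```

On a QSV-enabled build this should list entries such as h264_qsv and mpeg2_qsv, the two codecs used in the command above.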
This is all new hi-tech stuff to me.
How do you turn a GPU on? Why is it not on all the time? Do you need software to use it?
What determines whether a given app uses it?
Because I intended to use only the nvidia, so I disabled it.
Of course. In this case the kernel loads the driver for it, but X is not configured to use it, and I don’t have any monitor attached to it. I guess I could configure it and attach a monitor, but I won’t do that, as two monitors on my nvidia are enough; they fulfill my needs, and fill my desk.
So it’s just another video card present in my system, used for nothing except hardware-accelerated video encoding.
Don’t confuse this with laptop hybrid graphics, where things are more complicated.
In my case, ffmpeg uses QSV (Quick Sync Video) hardware acceleration because I explicitly told it to. Nothing else uses the Intel video in my system.
Note that I had to install intel-media-va-driver-non-free to be able to use Intel’s hardware-assisted video encoding.
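On a Debian-family system that would look something like this (a sketch; the package name is the Debian/Ubuntu one mentioned above, other distributions package the Intel media driver differently):

```shell
# Install Intel's non-free VA-API media driver, which provides
# the hardware-assisted encode/decode backend that QSV relies on.
sudo apt install intel-media-va-driver-non-free
```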
My current laptop is a Dell G3, which has an i5-8300 and a GTX 1050. This is a hybrid (Optimus) setup; here I mainly use the integrated Intel video. X is configured to use it, but not the nvidia. The monitor (the laptop lid) is attached to the Intel graphics; I could somehow use the nvidia too with a PRIME display, but I don’t care, as I don’t want to do gaming. I can happily use CUDA and NVENC on the nvidia without Xorg having a display configured on it (just like on my desktop, where I use QSV on the Intel without having a monitor on the Intel graphics).
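As a sketch of what NVENC encoding without any display looks like (hypothetical filenames; assumes an ffmpeg build with NVENC enabled and an NVIDIA GPU with a loaded driver):

```shell
# Decode on the CPU and encode on the GPU's NVENC block;
# no X/Xorg display needs to be configured on the NVIDIA card.
# (Add -hwaccel cuda before -i to also decode on the GPU.)
ffmpeg -i input.MTS -c:v h264_nvenc -b:v 50000k -c:a copy -y output.mp4
```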
Maybe one day I’ll gather my courage and try to set up a real hybrid graphics system on my laptop; I guess I could use Bumblebee to determine whether an app runs on the nvidia or on the integrated GPU. I’m not very interested in doing this at the moment, however…
This is the “knowledge”: NVIDIA Optimus - Debian Wiki
I had something similar a few years back, an Asus ROG UX501 (???): 8 GB hardwired RAM (not upgradeable), a 4-core/4-thread i7, a 256 GB SSD (not upgradeable), and an NVidia GTX970Ti mobile, in a nice slim “svelte” design (like a MacBook), circa 2015… I mostly ran elementary OS on there (and paid to use it) and “prime” to switch between the Intel and NVidia GPUs. It worked; it did require logging out of X and logging back in again, but it worked… It took me weeks and weeks of trial and error to get prime working, but I got there…
I kinda wish I still had that laptop - it was a beast… But I lost my job in early 2017 and with no sign of anything concrete on the horizon, I had to sell it (which meant re-installing Windows 10, after I’d wiped the restore partition) for about 1/3 the price I paid for it - VERY reluctant sale…
But never mind - quite happy with my Thinkpad E495 running Pop!_OS… I just wish I hadn’t trashed the swap partition… I might re-install and choose a custom disk layout - 'cause I HATE being hamstrung by a swap partition (note: with an SSD it makes ZERO difference where on the disk the swap sits)…
I think I might grab some diags and screenshots of my issue (takes a VERY long time - i.e. “too long”, after unlocking my LUKS “/”) and kickoff a new thread / topic…
This is not a beast, but it’s a decent performer for me… not as powerful as my desktop, but it has the power to run DaVinci Resolve (thanks to CUDA…).
QSV hardware transcoding works here too, though it’s less performant: “only” 280 fps with QSV, while the software-only transcode runs at approx 180 fps (which is not that bad either).
The tragedy doesn’t end like a Greek one, or Shakespeare… no stabbings or poisoned figs, or chalices…
Took a while to recover… it seemed my WHOLE CAREER was being converted into the JD (job description) for a uni graduate with 25 years of DevOps experience and 55 years of Python development…
Eventually I got my foot back in the door, at a mob doing “old school infrastructure” that needed UNIX and Linux boffins… that was 2018… ain’t looked back since…
I’ve seen a resurgence in demand for my skillset since Covid…
I am interested for an entirely different reason. I am thinking about using CUDA or ROCm to program a GPU for a particular numerical calculation that uses a lot of matrix arithmetic.
Can I use the GPU on my video card, or do I need a second uncommitted GPU?