Oh yeah, 380 fps transcoding (via Intel Quick Sync)

My cameras output footage in H.264 at a high bitrate. I regularly transcode this to insanely high-bitrate MPEG-2. For this I use ffmpeg, which on my i5-8500 could transcode at approximately 190 fps (with near 100% CPU usage, of course…)
I use an Nvidia card in my desktop, so the 8500's built-in GPU had been disabled until now.
I switched it on just to see whether I could use it for transcoding. Shame on me for not trying it sooner…
The QSV transcode sometimes hits 400 fps; its speed varies (read/write speed of the disk, I guess?), but it stays above 365 fps and seems to average around 380 fps.
So the magic (note that the first -c:v comes before -i, so it selects the hardware decoder, while the second -c:v selects the encoder):
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.file.MTS \
    -b:v 50000K -minrate 50000K -maxrate 50000K \
    -c:v mpeg2_qsv -c:a pcm_s16le -ar 48000 \
    outputfile.mov -y

That produces an MPEG-2 encoded 50 Mbps .mov file from the H.264 encoded full HD .MTS input. I consider this speed quite acceptable :sunglasses:

Of course I could use any QSV-supported input/output codec; the list can be queried:

ffmpeg  -hide_banner -codecs  | grep qsv
 DEV.L. av1                  Alliance for Open Media AV1 (decoders: libdav1d libaom-av1 av1 av1_cuvid av1_qsv ) (encoders: libaom-av1 libsvtav1 )
 DEV.LS h264                 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_v4l2m2m h264_qsv libopenh264 h264_cuvid ) (encoders: libx264 libx264rgb libopenh264 h264_amf h264_nvenc h264_qsv h264_v4l2m2m h264_vaapi nvenc nvenc_h264 )
 DEV.L. hevc                 H.265 / HEVC (High Efficiency Video Coding) (decoders: hevc hevc_qsv hevc_v4l2m2m hevc_cuvid ) (encoders: libx265 nvenc_hevc hevc_amf hevc_nvenc hevc_qsv hevc_v4l2m2m hevc_vaapi libkvazaar )
 DEVIL. mjpeg                Motion JPEG (decoders: mjpeg mjpeg_cuvid mjpeg_qsv ) (encoders: mjpeg mjpeg_qsv mjpeg_vaapi )
 DEV.L. mpeg2video           MPEG-2 video (decoders: mpeg2video mpegvideo mpeg2_v4l2m2m mpeg2_qsv mpeg2_cuvid ) (encoders: mpeg2video mpeg2_qsv mpeg2_vaapi )
 D.V.L. vc1                  SMPTE VC-1 (decoders: vc1 vc1_qsv vc1_v4l2m2m vc1_cuvid )
 DEV.L. vp8                  On2 VP8 (decoders: vp8 vp8_v4l2m2m libvpx vp8_cuvid vp8_qsv ) (encoders: libvpx vp8_v4l2m2m vp8_vaapi )
 DEV.L. vp9                  Google VP9 (decoders: vp9 vp9_v4l2m2m libvpx-vp9 vp9_cuvid vp9_qsv ) (encoders: libvpx-vp9 vp9_vaapi vp9_qsv )
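If you just want the bare *_qsv codec names out of that listing, a small grep pipeline does it. A sketch — the canned $line below is one line pasted from the output above, so the pipeline can be seen working without running ffmpeg; the real command is shown in the comment:

```shell
#!/bin/sh
# Sample input: one line taken from the "ffmpeg -codecs" output above.
line=' DEV.L. mpeg2video  MPEG-2 video (decoders: mpeg2video mpeg2_qsv ) (encoders: mpeg2video mpeg2_qsv mpeg2_vaapi )'

# Pull out every token ending in _qsv, de-duplicated:
echo "$line" | grep -o '[a-z0-9_]*_qsv' | sort -u
# prints: mpeg2_qsv

# Against a live ffmpeg it would be:
#   ffmpeg -hide_banner -codecs | grep -o '[a-z0-9_]*_qsv' | sort -u
```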

Oh, yeah, 380fps :grin:

This is all new hi-tech stuff to me.
How do you turn a GPU on? Why is it not on all the time? Do you need software to use it?
What determines whether a given app uses it?

In this case I just enabled it in BIOS.

Because I intended to use only the Nvidia card, so I had disabled it.

Of course. In this case the kernel loads the driver for it, but X is not configured to use it, and I don't have any monitor attached to it. I guess I could attach one, but I won't, as two monitors on my Nvidia are enough; they fulfill my needs and fill my desk :wink:
So it's just another video card present in my system, used for nothing except hardware-accelerated video coding.
Don't confuse this with laptop hybrid graphics, where things are more complicated.

In my case, ffmpeg uses QSV (Quick Sync Video) hardware acceleration because I explicitly told it to. Nothing else uses the Intel video in my system.
Note that I had to install intel-media-va-driver-non-free to be able to use Intel's hardware-assisted video encoding.
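For completeness, the setup on my Debian-family system was roughly this (a sketch; vainfo is an optional extra package I only mention as a sanity check):

```shell
#!/bin/sh
# The non-free variant of the driver is what enables hardware *encoding*
# here; without it the *_qsv encoders would not work on my system.
sudo apt install intel-media-va-driver-non-free

# Optional sanity check: vainfo (package "vainfo") lists the VA-API
# profiles and entrypoints the driver exposes; encode support shows up
# as VAEntrypointEncSlice lines.
vainfo

# And confirm the ffmpeg build has the QSV codecs compiled in at all:
ffmpeg -hide_banner -codecs | grep qsv
```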

My current laptop is a Dell G3 with an i5-8300 and a GTX 1050. This is a hybrid (Optimus) setup; here I mainly use the integrated Intel video: X is configured to use it, but not the Nvidia. The monitor (the laptop lid) is attached to the Intel graphics; I could somehow use the Nvidia too with a PRIME display, but I don't care, as I don't want to do gaming. I can happily use CUDA and NVENC on the Nvidia without Xorg having a display configured on it (just like using QSV on my desktop without a monitor on the Intel graphics).
Maybe one day I'll gather my courage and try to set up a real hybrid graphics system on my laptop; I guess I could use Bumblebee to decide whether an app runs on the Nvidia or on the integrated GPU. I'm not very interested in doing this at the moment, however…
This is the “knowledge”: NVIDIA Optimus - Debian Wiki

I had something similar a few years back, an Asus ROG UX501 (???) - 8 GB hardwired RAM (not upgradeable), 4-core 4-thread i7, 256 GB SSD (not upgradeable) - with an Nvidia GTX970Ti mobile - nice slim "svelte" design (like a MacBook) - circa 2015… I mostly ran elementary OS on there (and paid to use it) and "prime" to switch between the Intel and Nvidia GPUs - it worked; it did require logging out of X and logging back in again, but it worked… It took me weeks and weeks of trial and error to get prime working, but I got there…
I kinda wish I still had that laptop - it was a beast… But I lost my job in early 2017 and, with no sign of anything concrete on the horizon, I had to sell it (which meant re-installing Windows 10, after I'd wiped the restore partition) for about 1/3 of the price I paid for it - a VERY reluctant sale…

But never mind - quite happy with my ThinkPad E495 running Pop!_OS… I just wish I hadn't trashed the swap partition… I might re-install and choose a custom disk layout - 'cause I HATE being hamstrung by a swap partition (note - with an SSD it makes ZERO difference where on the disk swap sits)…
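For what it's worth, a swap partition isn't the only option: since on an SSD the on-disk location doesn't matter, a plain swap file works just as well and avoids a reinstall. A sketch (size and path are just examples):

```shell
#!/bin/sh
# Create and enable an 8 GB swap file (run as root).
fallocate -l 8G /swapfile
chmod 600 /swapfile     # swap must not be world-readable
mkswap /swapfile
swapon /swapfile

# Make it permanent by adding a line to /etc/fstab:
#   /swapfile none swap sw 0 0
# The old swap partition can then be retired with swapoff + an fstab edit.
```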

I think I might grab some diagnostics and screenshots of my issue (it takes a VERY long time - i.e. "too long" - after unlocking my LUKS "/") and kick off a new thread/topic…

This is not a beast, but a decent performer to me… not as powerful as my desktop, but it has the power to run DaVinci Resolve (thanks to CUDA…).
QSV hardware transcoding works here too, though less performant: "only" 280 fps with QSV, while the software-only transcode runs at approx 180 fps (which is not that bad either).
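Since I suspect disk read/write speed is behind the fps fluctuation, the disk can be taken out of the measurement entirely by discarding the output with ffmpeg's null muxer. A sketch (filenames are placeholders):

```shell
#!/bin/sh
# Software-only transcode, output discarded (no disk writes at all):
ffmpeg -i input.MTS -b:v 50000K -c:v mpeg2video -an -f null -

# Same measurement with QSV decode + encode:
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.MTS \
       -b:v 50000K -c:v mpeg2_qsv -an -f null -

# If the fps counter steadies here, the variance in the real runs was
# I/O-bound rather than GPU-bound.
```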

Sad to read… :frowning:

The tragedy doesn’t end like a Greek one, or Shakespeare… no stabbings or poisoned figs, or chalices…

Took a while to recover… it seemed my WHOLE CAREER was being converted into the JD (job description) for a uni graduate with 25 years of DevOps experience :smiley: and 55 years of Python development…

Eventually got my foot back in the door of a mob doing "old school infrastructure" that needed UNIX and Linux boffins… that was 2018… ain't looked back since…

I’ve seen a resurgence in demand for my skillset since Covid…


I am interested for an entirely different reason. I am thinking about using CUDA or ROCm to program a GPU for a particular numerical calculation that uses a lot of matrix arithmetic.
Can I use the GPU on my video card, or do I need a second uncommitted GPU?

I don't know how to program such things, but knowing that DaVinci Resolve heavily utilizes CUDA:

  • it runs well on my desktop, where the nvidia is the primary (so far the only) GPU.

  • It also runs OK on my laptop, where nvidia is “invisible”, just the drivers are present.

so I'd say you don't need an uncommitted GPU. To use CUDA, you need the proprietary drivers.
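To confirm such an "uncommitted" card is usable for compute, nvidia-smi is a quick check, assuming the proprietary driver is installed — it talks to the driver directly and needs no X display on the card. A sketch:

```shell
#!/bin/sh
# If this lists the GPU, CUDA programs can use it - no monitor or
# X display on the card required.
nvidia-smi --query-gpu=name,driver_version --format=csv

# Same story for NVENC through ffmpeg:
ffmpeg -hide_banner -encoders | grep nvenc
```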
