Faster H.265/HEVC Video Encoding with Nvidia GTX960 GPU and ffmpeg

H.265 promises the same video quality as H.264 at half the bitrate, so you may have thought about converting your H.264 videos to H.265/HEVC in order to reduce the space used by your videos. However, if you’ve ever tried transcoding videos with tools such as HandBrake, you’ll know the process can be painfully slow, and a single movie may take several hours even on a machine with a powerful processor. There’s a better and faster solution thanks to hardware-accelerated encoding available in some Intel and Nvidia graphics cards. For this purpose, GearBest sent me a Maxsun MS-GTX960 graphics card, based on a second-generation Maxwell GPU, which supports H.265 accelerated video encoding and promises up to 500 fps video encoding. So I’ve put the graphics card to the test in a computer running Ubuntu 14.04, and report some of my findings here. Similar instructions can also be followed in Windows.

In order to leverage the Nvidia Maxwell 2 GPU’s capabilities, you’ll need to download and install the Nvidia Video Codec SDK. The latest version (6.0.1) requires Nvidia drivers 358.xx or greater, and my system had version 352.xx, so I followed some instructions to install the latest drivers on Ubuntu 14.04.
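On Ubuntu 14.04 the usual route to newer Nvidia drivers was a PPA; the PPA and package names below are from memory rather than the exact commands I ran, so double-check them before installing:

```shell
# Add a graphics driver PPA and install the 358 series driver
# (PPA name is an assumption; verify it before use)
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-358
```

A reboot is needed afterwards so the new kernel module gets loaded.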

Upon restart I had the latest 358.16 drivers installed.


Somehow the fonts were very small right after installation, as xorg.conf was missing, so I recreated it with the command:
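If memory serves, the tool for this is nvidia-xconfig, which writes a fresh /etc/X11/xorg.conf for the Nvidia driver:

```shell
# Generate a new /etc/X11/xorg.conf (back up any existing file first)
sudo nvidia-xconfig
```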

Then I adjusted the font sizes further with Unity Tweak Tool.

The next step is to download the Nvidia Video Codec SDK and extract it into a working directory:
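Assuming the 6.0.1 zip downloaded from Nvidia’s developer site (the exact filename below is a guess), the steps look roughly like this:

```shell
mkdir -p ~/nvenc && cd ~/nvenc
# The zip filename is hypothetical; use whatever the developer site serves
unzip ~/Downloads/nvidia_video_sdk_6.0.1.zip
cd nvidia_video_sdk_6.0.1
```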

The instructions in the Readme simply tell you to go to the Samples directory and type make in order to build the samples, but I had to do a few more steps:
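From memory, and matching what a commenter reports further down, the extra steps boiled down to installing the X/OpenGL development packages and pointing the linker at the driver’s library directory; the package names are for Ubuntu 14.04 and may differ elsewhere:

```shell
# Development packages needed by the SDK samples (names may vary)
sudo apt-get install build-essential libgl1-mesa-dev libglu1-mesa-dev libxmu-dev
# Let the linker find libnvidia-encode.so shipped with the 358 driver
export LDFLAGS="-L/usr/lib/nvidia-358/"
```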

I also had to modify Samples/NvTranscoder/Makefile to replace := by += in front of LDFLAGS.

and finally I could successfully build the samples:

There are several samples in the SDK: NvEncoder, NvEncoderCudaInterop, NvEncoderD3DInterop, NvEncoderLowLatency, NvEncoderPerf, NvTranscoder, NvDecodeD3D9, and NvDecodeGL. For the purpose of this post I used NvTranscoder to convert H.264 video to H.265 using the GPU.

At first I had some issues with the error:

I followed a workaround found on a Blender forum, and it did not work at first, but after running NvTranscoder with sudo once, I could use the tool as a normal user thereafter.

Here’s the output when transcoding an H.264 1080p video with the High Quality preset.
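The invocation was along these lines; the flag names are from memory (run ./NvTranscoder without arguments to see the sample’s real option list):

```shell
cd Samples/NvTranscoder
# -codec 1 selects HEVC output, -preset hq = High Quality
# (both flag names are assumptions from memory)
./NvTranscoder -i input.h264 -o output.h265 -codec 1 -preset hq
```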

The video lasts 2 minutes 43 seconds (4901 frames in total), and encoding was done in about 32 seconds, meaning about 5 times faster than real-time, at 156.5 fps on average.
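Those figures are self-consistent: dividing the frame count by the reported average frame rate gets back to roughly the measured wall time.

```shell
# 4901 frames at 156.5 fps should take about 31.3 seconds,
# which matches the ~32 s measured above
awk 'BEGIN { printf "%.1f\n", 4901 / 156.5 }'
```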

I repeated the same test with the High Performance preset.

Encoding took around 24 seconds at 205 fps. It looked pretty good, but when I tried the same test with HandBrake using H.265 with RF quality set to 25, it took 4 minutes and 30 seconds to encode the video, or about 9 times slower than with the GPU. For reference, my computer is based on an AMD FX8350 octa-core processor clocked at 4.0 GHz.

But then I tried to play the video and could not find any tool able to play it, as NvTranscoder appears to generate raw H.265 video data. Since I did not want to write my own little program, I looked for alternatives and found that ffmpeg also supports nvenc, just not by default: you have to compile it yourself.

There are instructions to build ffmpeg with nvenc on Ubuntu 15.10, but they did not work on Ubuntu 14.04, so I mixed them with the ffmpeg Ubuntu compilation guide to build it for my computer.

First we’ll need to install some dependencies and create a working directory:

You’ll also need to download and install/compile some extra packages depending on the codecs you want to enable. I’ll skip H.264 and H.265, since these will be handled by the Nvidia GPU, and will enable the AAC and MP3 audio encoders, VP8/VP9 and Xvid video decoders and encoders, and the libopus decoder and encoder, as explained in the building guide:
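On Ubuntu 14.04 most of these come straight from the repositories; a rough package list follows (libfdk-aac typically has to be built from source, so it is left out here):

```shell
# Build tools plus codec development libraries (Ubuntu 14.04 names)
sudo apt-get update
sudo apt-get install autoconf automake build-essential libass-dev \
  libmp3lame-dev libopus-dev libvorbis-dev libvpx-dev libxvidcore-dev \
  libtool pkg-config texinfo yasm zlib1g-dev
```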

Now I’ll download and extract an ffmpeg snapshot (January 3, 2016), and copy the required NVENC 6.0 SDK header files into /usr/local/include:
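The directory layout below follows the ffmpeg compilation guide, and the SDK path is an example; adjust both to where you actually extracted things:

```shell
mkdir -p ~/ffmpeg_sources && cd ~/ffmpeg_sources
wget http://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
tar xjf ffmpeg-snapshot.tar.bz2
# The NVENC API headers live under the SDK's Samples tree
# (SDK path is hypothetical; adjust to where you unzipped it)
sudo cp ~/nvenc/nvidia_video_sdk_6.0.1/Samples/common/inc/*.h /usr/local/include/
```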

Then we can configure and build ffmpeg with nvenc enabled:
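A trimmed-down sketch of the configure-and-build step, with the flag set reduced to the codecs mentioned above plus --enable-nonfree --enable-nvenc (paths follow the Ubuntu guide’s conventions, so adapt as needed):

```shell
cd ~/ffmpeg_sources/ffmpeg
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
  --prefix="$HOME/ffmpeg_build" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --bindir="$HOME/bin" \
  --enable-gpl --enable-nonfree --enable-nvenc \
  --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libxvid
make -j"$(nproc)"
```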

You can also optionally install it (which I did):

This will install it as $HOME/bin/ffmpeg. Now we can verify that nvenc support for H.264 and H.265 is enabled:
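In snapshots from this period the NVENC encoders show up as nvenc/nvenc_hevc when grepping the encoder list (newer ffmpeg releases renamed them to h264_nvenc and hevc_nvenc):

```shell
# List only the NVENC-backed encoders from the freshly built binary
~/bin/ffmpeg -encoders 2>/dev/null | grep nvenc
```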

Perfect. Time for a test with our 1080p H.264 video sample, and encoding at 2000 kbps.
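The command looked like this; file names are placeholders, and nvenc_hevc is the HEVC encoder name in this snapshot:

```shell
# Copy the audio track as-is, re-encode video to HEVC at 2000 kbps on the GPU
~/bin/ffmpeg -i input.mp4 -c:a copy -c:v nvenc_hevc -b:v 2000k output.mp4
```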

It took 30 seconds, or about the same time as with NvTranscoder, but this time I had a watchable video with audio, and I could not notice any visual quality degradation.

I repeated the test with an H.264 1080p movie lasting 1 hour 57 minutes 29 seconds. The movie’s H.264 stream was encoded at 2150 kbps, so to halve the file size I encoded the movie at 1075 kbps (-b:v 1075k option). The encoding took only 13 minutes and 12 seconds, or about 9 times faster than real-time, at 218 fps.
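Halving the bitrate is simple arithmetic on the source rate; a small sketch with hypothetical file names (the ffmpeg line is commented out so the snippet only prints the computed target):

```shell
src_kbps=2150
target_kbps=$(( src_kbps / 2 ))
echo "target bitrate: ${target_kbps}k"
# ~/bin/ffmpeg -i movie.mkv -c:a copy -c:v nvenc_hevc -b:v "${target_kbps}k" movie_h265.mkv
```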

I also checked some GPU details during the transcoding:

This shows, for example, that it does not max out the GPU’s power consumption (P2 mode: 33 Watts). My processor load was however a bit higher than expected, although not at 100% all the time, as would have been the case for software video transcoding.

CPU Usage during Nvidia GPU Accelerated Video Transcoding

Besides saving time, transcoding videos with a GPU should also reduce your electricity bill. How much exactly will depend on your video library size, electricity rate, and overall computer power consumption.

While the original file size was 2.0 GB, the H.265 video was only 985 MB, and its video quality appeared very close to that of the original.

Finally, I transcoded a 4K H.264 video @ 30 fps (big_buck_bunny_4k_H264_30fps.mp4) at slightly less than half the bitrate (3500 kbps for H.265 vs 7480 kbps for H.264), and it took 6 minutes and 56 seconds to encode the 10 minutes 30 seconds video. While checking the quality, the main problem was that my computer struggled to cope with the H.265 4K video in the Totem and VLC video players, with lots of artifacts at times and sound cuts, but the video played just fine with ffplay and Kodi.

I’d like to thank GearBest for providing the Maxsun MS-GTX960 graphics card, selling for $240.04 on their website.


53 Comments on "Faster H.265/HEVC Video Encoding with Nvidia GTX960 GPU and ffmpeg"


A little bit disappointed with the HEVC encoding speed for a GPU encoder, but we can’t complain much for the moment; if we’d done it on the CPU, it would take ages.
Otherwise, I would like to see how the Nvidia encoder stands compared to Android hardware-accelerated encoding.


The Helio X10 (Xiaomi Redmi Note 3, LeTV S1…) supports it ([email protected] HEVC encoding) and maybe some Snapdragons, but I’m not sure; the real problems are the drivers and software support.


And.. for AMD????????????? 🙁

Andrew P

If the encoder generates raw video, ffmpeg or avconv should be able to insert that data stream into a container of your choice (probably MP4) without any trouble, and without having to recompile. Or there should be plenty of command-line tools to mux/demux data streams into a container.


H.265 might promise the same video quality as H.264 when using half the bitrate, but it doesn’t promise that transcoding from H.264 keeps the same quality, does it?

Bernd Peters

Looks like there are two small errors. I replaced libXlut with libxmu-dev and removed the space from export LDFLAGS="-L/usr/lib/nvidia-358/". I tried to convert an H.264 MKV to H.265, and at first it looked good, but after some time the audio is lost in the H.265 file. I used VLC for playback. Any ideas?


Hi, many thanks for this tutorial.
I have a GeForce 9400 GT, Ubuntu 15.10 32-bit, and an Intel dual-core E8600 @ 3.2 GHz.
Will this tutorial work for me?


Nvidia has upgraded the NVENC HEVC hardware encoder to support 10bit hardware encoding in Pascal GPU family.


Just dropping this bit of info here… A GTX960 can actually support two nvenc sessions simultaneously. This means you can run two ffmpeg processes in parallel and encode two videos at full speed, thereby doubling the theoretical encoding speed. An example:

$ find Videos/ -type f -name \*.avi -print | sed 's/\.avi$//' | xargs -n 1 -I@ -P 2 ffmpeg -i "@.avi" -c:a aac -c:v hevc_nvenc "@.mp4"

This will transcode an entire directory with .avi files into HEVC/H.265+AAC with the ending .mp4, using Nvidia nvenc and running two ffmpeg processes in parallel (i.e. 200fp/s+200fp/s).

I am not sure why they allow this. Maybe it was easier to get more encoding performance out of the silicon by implementing two encoding engines instead of a single, highly optimized one.

Nvidia Quadro cards are said to handle more than 2 sessions at a time. So those should be of most interest to anyone who wants to encode several video streams in parallel. Still, two sessions with a GTX960 is quite nice.


Some extra performance can be had with the “scale_npp” module, which uses Nvidia CUDA for scaling image sizes. It’s however a bit tricky to use:

ffmpeg -i input.avi \
-c:a aac -b:a 128k \
-filter "hwupload_cuda,scale_npp=w=852:h=480:format=nv12:interp_algo=lanczos,hwdownload,format=nv12" \
-c:v hevc_nvenc -b:v 1024k \
-y output.mp4

It actually requires three filter modules. The module "hwupload_cuda" sends frames to the CUDA engine, where "scale_npp" then applies scaling (here with the lanczos algorithm), and finally a third module, "hwdownload", brings the frame back. The image format also has to be specified in this process (yuv420p also works as format).

One can get more info about it with "ffmpeg -h encoder=hevc_nvenc" and "ffmpeg -h filter=scale_npp".

I was originally using just "-s hd480" instead of "-filter …" to scale the video and got about 200 fp/s for each encoding process. Now it’s around 360 fp/s for each using CUDA.

One can check on the GPU’s utilisation with:

$ nvidia-smi dmon
# gpu pwr temp sm mem enc dec mclk pclk
# Idx W C % % % % MHz MHz
0 53 45 57 7 76 0 3510 1404
0 51 46 55 6 76 0 3510 1404
0 51 45 55 6 73 0 3510 1404

The encoder is busy at about 75% with two ffmpeg processes. My CPU, an AMD FX 8350, is at around 60% load.

I am using this to scale videos down so I can upload them onto mobile phones and tablets. The version of ffmpeg is 3.1.1 and the nvidia video sdk is 7.0.1.


You’re welcome. When I started looking into HEVC/H.265 encoding, I was using my distribution’s default ffmpeg and encoding speed was around 10-20 fp/s. It took a while to figure out all the bits’n’pieces to make it work, so now I am at 720 fp/s (2×360 fp/s). I had to pick up all the info from various blogs and other places, but your blog provided some good insights, and I just want to give some of it back now. 🙂


Very nice article; it gives me the opportunity to follow this and get the most out of the GTX960.
As I am a broadcast engineer and not a developer, I am having a hard time testing all of this.

However, my question is: can I use the ffmpeg and NVENC solution above to encode live content (coming from e.g. a DeckLink card supported by ffmpeg) and stream it out as a UDP TS?


You definitely should be able to do this. The Nvidia encoder NVENC has two special settings for low-latency streaming.

I’ve been playing around with the Nvidia options and could improve both quality and performance. It turns out that while the lanczos filter is good at preserving details ("interp_algo=lanczos"), it lowers the quality of shaded surfaces, so it’s basically a tradeoff. I got the best results from the super sampling filter ("interp_algo=super"). Super sampling appears to keep the quality steady frame by frame, which allows the encoder to do its best job and results in noticeably higher compression (file sizes are reduced for the same quality setting). This in turn lets one increase the quality of the encoding itself and makes for a better picture when compared at equal bitrates.

Another improvement for me was to switch from the built-in aac audio encoder to libopus. Opus seems to have the best audio quality, but it is also rather new. It’s said to beat all others; this was confirmed with several blind tests, and ffmpeg also suggests it:

Libopus also makes heavy use of the latest processor features and has a very low latency and a high speed.

I could also figure out how NVENC performs 2-pass encoding. It works quite differently from what one is used to with ffmpeg, where it usually means making two full passes over the source data. NVENC does it differently and only partially: it looks ahead up to 32 frames and uses that information to decide how many bits to use for VBR. So it is technically far from true two-pass encoding, but it still makes for fast encoding while giving good results. My script currently looks like this:

# Scale down image using Nvidia Performance Primitives (CUDA NPP).
filter="-pix_fmt +nv12 -filter:v"
filter+=" hwupload_cuda,scale_npp=w=852:h=480:format=nv12:interp_algo=super,hwdownload,format=nv12"

# Encode with NVENC partial 2-pass encoding with variable bitrate
video="-qmin:v 0.0 -qmax:v 24.0 -b:v 896k"
video+=" -preset:v slow -profile:v main"
video+=" -level:v 6.2 -tier:v high -rc:v vbr_2pass -rc-lookahead:v 32"
video+=" -c:v hevc_nvenc"

# Encode audio with Opus (stereo 96k).
audio="-ac 2 -b:a 96k -c:a libopus"

meta="-map_metadata -1"
ofmt="-f matroska"

exec ffmpeg -v info $decode -i "$in" \
$filter $video $audio $subtl $meta $ofmt -y "$out"

For streaming you will just need to select the right output format. Here I use matroska and write it out to a file. You should however be able to use any streaming format, as long as you have compiled it into ffmpeg and it is capable of containing the video and audio formats. See "ffmpeg -formats" for a list of formats.

Instead of "-rc:v vbr_2pass" you might want to try "-rc:v ll_2pass_quality", which is NVENC’s setting for low-latency encoding.

Good luck!


@Coremans Regarding streaming… I used ffmpeg many years ago to stream to Justin.TV (I believe they now call themselves Twitch.TV). What worked for me back then with ffmpeg on Linux was to use “-f flv” as the format with a file name of “-y rtmp://…”

You will have to look into what your receiving end is exactly capable of supporting. If it can support different input formats then try them all. See which works best.


Thank you guys for all the valuable information. The idea is to find the optimal encoding solution for UHD.
If I see how hard it is for the GTX960 to decode HEVC UHD, I think I will start off with H264.

The goal is to get the live stream encoded out again via an Mpeg-TS stream in ASI or UDP so it can go into a satellite modulator. FFmpeg can do it out of the box but by implementing the NVenc, I expect this to be even better and to offload the CPU cores.

Nice project and this place is apparently a good start to get launched 🙂


One more question: we have seen encode and transcode in this article; is there anything to be found for NVdecode?
If I could implement this in ffplay, it could make my HEVC decoding more performant, no?

Today I got a satellite feed converted to UDP (56 Mbit/s) and had it decoded with VDPAU; the 4 cores went to 83% and nvidia-smi reported 41% GPU usage (the content was HEVC in an MPEG-2 TS at UHD resolution).
Not bad I think, but if the GPU is only at 41%, then there is some headroom #JustThinking


@Coremans The main problem you are facing isn’t so much with the GPU but with the CPU. Whenever the data has to be moved into main memory, this presents a bottleneck. Decoding of H.264 with VDPAU seems pretty widespread these days. However, VLC and ffmpeg do not seem to support H.265 decoding with VDPAU at this time, whereas the mpv video player already does: mpv plays a U-HD video with 3840×2160 pixels on my computer with the CPU at about 40%. I only have a 1920×1080 display, so it also needs to perform some scaling, which probably explains the CPU load. The CPU is an AMD FX 8350 (8 cores at 4 GHz).

Still, you could get lucky with ffmpeg, because it not only supports VDPAU but also the CUVID interface. The CUVID decoders are used exclusively for direct transcoding on the GPU and cannot have any filters applied, meaning you cannot scale the image or otherwise manipulate it currently (this might become possible in the future). Here is an example with ffmpeg:

ffmpeg -c:v vp9_cuvid -i U-HD-2160p.webm -c:v hevc_nvenc -y out-2160p.mkv

This will transcode a U-HD video in the WebM format, assuming it is VP9-encoded, into an H.265/HEVC-encoded video in the matroska format. I’ve just tested it, and it did this at a frame rate of 52 fp/s with the default settings of ffmpeg, just as the line above shows. The GPU’s encoder is running at 100%, the decoder at 32%, and the CPU is at an astonishing 5% load! So the transcoding happens purely in hardware, decoding and encoding, with no frames going to the CPU or main memory in between. The CPU only needs to pack the final data into the matroska format.

The CUVID decoders are called "h264_cuvid", "hevc_cuvid", "vc1_cuvid", "vp8_cuvid" and "vp9_cuvid". The NVENC encoders are called "h264_nvenc" and "hevc_nvenc".

With a bit of overclocking one might even get it to run at 60 fp/s, but for U-HD at 60 Hz you should probably just get a GeForce GTX 1060. These are pretty cheap, too, and the entire GTX 10×0 series supports not just the Main profile of H.265/HEVC but further profiles as well, plus 8- and 10-bit colour depth, not to mention a few extras for U-HD and simply being faster.


I’m only getting around 2x realtime doing a 1080p encode using nvenc_hevc on a 1060 (3 GB) with the latest snapshot of ffmpeg. nvidia-smi shows about 200 MB of memory use and about 3% gpu-util, and does show the ffmpeg process. The driver is 367.35. ffmpeg is hovering around 97% in top. This is on an i5 CPU (2500K I think). Any idea why it is so slow?


@Frank What does your ffmpeg line look like and what do you use as input?



~/ffmpeg/bin/ffmpeg -i Test.mkv -vcodec hevc_nvenc output.mp4

Some details about the source:

Format : Matroska
Format version : Version 2
File size : 20.9 GiB
Duration : 1h 42mn
Overall bit rate : 29.6 Mbps
Writing application : MakeMKV v1.10.0 win(x64-release)
Writing library : libmakemkv v1.10.0 (1.3.3/1.4.4) win(x64-release)
Format : VC-1
Format profile : [email protected]
Codec ID/Hint : Microsoft
Bit rate : 24.4 Mbps
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 23.976 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.491



It does seem that way… but then why does the task show up in the nvidia-smi utility? Driver issue? I noticed the driver notes said something like "added support for 1060 6GB"; could that mean the 3GB version maybe isn’t completely supported yet?


@Frank The problem may be with the decoder. Your test file is 1080p HD with a 24 Mbit/s bit rate, which is a lot for a software decoder to handle.

You could try the cuvid decoder, which is a hardware decoder only meant for transcoding. It won’t allow for any filters currently and is not yet fully stable. So don’t expect it to work, but be happy if it does:

$ ffmpeg -c:v vc1_cuvid -i Test.mkv -c:v hevc_nvenc output.mp4

This decodes the VC-1 source on your graphics card and sends it to the hardware encoder without going back over the CPU or your main memory. When it works, you should only see about 5% CPU load while your card is busy with decoder and encoder.



I tried building ffmpeg with cuvid support (–enable-cuvid), but I’m doing something wrong…

~/ffmpeg/ffmpeg# PATH="$HOME/ffmpeg/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg/build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg/build" --pkg-config-flags="--static" --extra-cflags="-I$HOME/ffmpeg/build/include" --extra-ldflags="-L$HOME/ffmpeg/build/lib" --extra-cflags="-I/usr/local/cuda-8.0/include" --extra-ldflags="-L/usr/local/cuda-8.0/lib64/stubs" --bindir="$HOME/ffmpeg/bin" --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-nvenc --enable-cuvid



This seems to work… had to add "--enable-cuda"

PATH="$HOME/ffmpeg/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg/build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg/build" --pkg-config-flags="--static" --extra-cflags="-I$HOME/ffmpeg/build/include" --extra-ldflags="-L$HOME/ffmpeg/build/lib" --extra-cflags="-I/usr/local/cuda-8.0/include" --extra-ldflags="-L/usr/local/cuda-8.0/lib64/stubs" --bindir="$HOME/ffmpeg/bin" --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-nvenc --enable-cuda --enable-cuvid



I’m getting a little over 5x realtime now! Cool!



Turns out I didn’t need the second "--extra-ldflags" and it seems faster now (getting 6x realtime). CPU is still around low-to-mid 90% (only 1 core) and nvidia-smi is showing around 17% on the GPU. So there might be extra potential here?


@Frank Judging only by the configure arguments, you may be doing something wrong there. Group all --extra-cflags="…" into one and do the same with --extra-ldflags="…". Here is how I’m doing this:

configure --prefix=$prefix --extra-cflags="-I$prefix/include -I/usr/local/cuda/include -I/usr/local/Video_Codec_SDK_7.0.1/Samples/common/inc" --extra-ldflags="-L$prefix/lib -L/usr/local/cuda/lib64" …

I don’t know if ffmpeg’s configure script will concatenate multiple --extra flags into one or if they might overwrite one another. So I’m putting them all into one argument each and it has worked for me so far. /usr/local/cuda is a symlink to cuda-8.0. I use the $prefix directory to install all the prerequisites (i.e. libx264, libx265, libopus, …) before I install ffmpeg itself in there. This makes it easier for ffmpeg’s configure script to find the parts it needs and also keeps all the relevant pieces together.


Didn’t seem to make much difference.. still getting about 5x encoding time


@Frank, how did you compile ffmpeg with cuvid? I managed to compile it with nvenc and nvresize, but when I try it with cuvid it does not work: it stops at --enable-cuda (Unknown option "--enable-cuda". See ./configure --help for available options.). Yet I have installed CUDA 8 and SDK 7. Do you know a good tutorial?
Thanks for helping


So has anyone found the optimal settings (parameters) to get the most out of realtime encoding (from a live source) using CBR? I was wondering if rc-lookahead and ME actually do anything when using CBR.
FYI I’m using UltraHD at 30 fps.


An H.265 hardware encoder is still a good concept though. This is especially true with HSA etc. (which will no doubt become more utilised in the future). Basically, with HSA you can use the best of both CPU and GPU for each part of the encoding without loss of quality (supposedly). Of course, the software would have to be cleverly written to make the best use of this.


@natsu It’s worth noting that this is 4K video, or 4x 1080p… so the equivalent of a 40-minute 1080p video in H.265 in 6-7 minutes… that’s very impressive. I’m seeing 260-290 fps for my 1080p re-encodes using nvenc.

For reference, I’m using StaxRip, setting nvenc to vbr2 with bitrate settings (23/20/26, 1200, adaptive); a 42-44 minute 1080p episode is generally around 800 MB.


Hi. When I compare the results with software/CPU encoding, there is a noticeable difference in the quality of the resulting video.
I’m transcoding x264 video files to HEVC/H.265.
I’m transcoding on Ubuntu 16.04 and I’m using this ffmpeg build:
My video card is a GTX 960, with nVidia drivers 378 installed.

CPU encoding: ffmpeg -i in_x264.mkv -c:s copy -c:v libx265 -preset medium -x265-params crf=19 out_hevc.mkv
nVidia HW encoding: ffmpeg -i in_x264.mkv -c:s copy -vcodec hevc_nvenc -preset medium -x265-params crf=19 out_hevc.mkv

I tried with different settings for the nVidia method to increase the quality, but no luck.
Like :
"-c:s copy -vcodec hevc_nvenc -preset slow -qmin 15 -qmax 51"
"-c:s copy -vcodec hevc_nvenc -b:v 2500k -profile main10 -preset slow -rc vbr_2pass -2pass 1 -rc-lookahead 32 -spatial_aq 1 -refs 5"

Any ideas? Or are the cards built for high-speed, medium-quality transcoding?
The speed is very high (7.6x for hevc_nvenc vs 0.81x for libx265), but the trade-off is too big for me.


It appears that the hardware is still somewhat “limited”. It might be better with the newer Pascal chips, but there are still some limitations there.

Maybe a silly question/thought: would it be possible to implement a software encoder (like libx265) on the CUDA cores of those nVidia cards to get the same output quality?


The quality will always degrade with every further compression. If it’s quality you want, then stick with the original.

Also, without knowing the exact parameters used to compress the original, it will be nearly impossible to get close to the original quality. You basically have to apply identical parameters for H.265 as were used for H.264 and hope it produces the least amount of artefacts in the subsequent compression. Without that knowledge, ffmpeg will simply decode the H.264 video frame by frame and feed it as new input to the H.265 compression, and whatever artefacts the decoder introduces will get carried over into the new compression. It’s not only the H.265 compression that introduces new artefacts; the H.264 decompression, too, can use different techniques, resulting in different playback quality even before it goes into the H.265 compression.

So you’ll either have to dig deeper or just avoid doing it.


Yes, but when I use software transcoding with ffmpeg and the CPU (libx265), the quality is nearly identical to the x264 source. I just want to achieve the same quality via hevc_nvenc as I get via the much slower libx265.
But the hardware isn’t yet able to do that. There is a good thread about this here:

One of their conclusions:
Posted by JohnLai
There is no perfect fixed-function encoder. Intel, AMD and Nvidia fixed-function encoders omit a lot of ‘features’ as a speed/quality tradeoff.
Stick with software encoders for the best quality per bitrate, plus flexibility.


Sorry, Gunter. My experience is a different one, and I have had good success with the hardware encoders. Neither hardware nor software encoders are perfect, by the way. Still, if the quality of the hardware encoder differs as much as you say it does, then you must be doing something wrong, and I cannot tell you what it is. Perhaps ask on the ffmpeg-users mailing list.


Those hardware encoders are still missing some quality features (like B-frame support, a max CU size of 32, …) needed to achieve high-quality results. Maybe the next generation of the hardware will add them (features are added with each generation).
Anyway, if you are happy with the results, good for you, but I’m comparing on a freeze-frame basis, and the difference in quality between software encoding and the nVidia HW encoder is quite noticeable.
And I don’t think it is because I’m using bad/wrong parameters (I might have tried them all 😉 ), because if you read the Doom9 thread, it’s clear that the hardware is currently just missing some important features.
But I have quite some CPU power available, which usually sits picking its nose during the day, and which I’ll use for transcoding until better hardware becomes available.


Maybe don’t watch movies frame by frame, because if that’s the only way you can spot differences then you’re proving my point. Good Luck!




@Sven: After the additional 6 months, did you have any additional insight or change to your recommendations?

Those were much appreciated, thanks.