NVENC Question
Just out of interest, how are you using the old CUDA method on new Nvidia drivers? Doesn't it conflict with the new NVENC? No other converter allows both Nvidia methods on new drivers (probably for good reason). Is this why NVENC is a lot slower in yours than in other converters?
Re: NVENC Question
Hello cribber,
"how are you using the old CUDA method on new Nvidia drivers?"
To use the old CUDA API (nvcuvenc) on the latest Nvidia drivers, we simply bundle the nvcuvenc library from the last known Nvidia driver that still included this API.
Yes, we know this method is not really clean, but it's the only one we have...
We know that in a future Nvidia driver update this API will be completely disabled, meaning the graphics card will refuse to create an encoder context through it.
So for now we assume that if a graphics card accepts the creation of an encoder context through this API, it will be able to encode as it should.
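The approach described above — shipping the legacy nvcuvenc library alongside the application and probing whether it can still be loaded — can be sketched roughly like this. This is a minimal illustration in Python/ctypes, not VSO's actual code; the library filenames and the fallback logic are assumptions:

```python
import ctypes

# Candidate filenames for the legacy CUDA encoder library, as bundled from
# the last driver release that still shipped it (hypothetical names).
LEGACY_LIBS = ("nvcuvenc.dll", "libnvcuvenc.so")

def probe_legacy_nvcuvenc():
    """Try to load the bundled legacy nvcuvenc library.

    Returns the loaded library handle, or None if the library is absent
    or refused by the driver -- in which case only NVENC remains usable.
    """
    for name in LEGACY_LIBS:
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue  # library missing or disabled by a newer driver
    return None

# Pick the encoding backend based on what the system actually accepts.
handle = probe_legacy_nvcuvenc()
backend = "nvcuvenc" if handle is not None else "nvenc"
```

In the same spirit as the post, the real check would go one step further and attempt to create an actual encoder context, not just load the library.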
"Is this why NVENC is a lot slower on yours than other converters?"
The NVENC API is totally independent from the old nvcuvenc, and we implement it the official way (we don't have a choice...).
Why is NVENC a lot slower in our program? I don't know, but are you sure we are using the same encoding settings as the other programs?
We can tune NVENC encoding for a higher encoding framerate, but we also need to check the final video quality (did you compare video quality/filesize between the others and VSO's NVENC encoding?).
The hardest part of a hardware encoding implementation is getting the same video quality whether software OR hardware encoding is used...
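One rough way to do the quality/filesize comparison mentioned above is to normalize file size by duration, resolution, and framerate into a bits-per-pixel figure. This is a simple sketch, not VSO's method, and it only measures how many bits an encoder spends; a real quality comparison would also need a metric such as PSNR or SSIM:

```python
def bits_per_pixel(file_bytes: int, duration_s: float,
                   width: int, height: int, fps: float) -> float:
    """Average encoded bits spent per pixel.

    Comparable across encoders only when resolution, framerate, and
    duration are identical -- a higher value means a bigger file for
    the same amount of video.
    """
    bitrate = file_bytes * 8 / duration_s      # average bits per second
    return bitrate / (width * height * fps)    # bits per pixel

# Example: a 10 MB, 10-second, 1080p30 clip
bpp = bits_per_pixel(10_000_000, 10, 1920, 1080, 30)
```

Two encodes with similar bits-per-pixel but visibly different quality would point at encoder settings (preset, rate control) rather than raw speed as the real difference.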
Regards,
Re: NVENC Question
Another question,
Encoding with NVCUVENC and CRF, MediaInfo says it's Variable Bitrate.
Encoding with NVENC and CRF, MediaInfo says it's NOT Variable Bitrate.
What's that all about?
Re: NVENC Question
Bump

cribber wrote:
Another question,
Encoding with NVCUVENC and CRF, MediaInfo says it's Variable Bitrate.
Encoding with NVENC and CRF, MediaInfo says it's NOT Variable Bitrate.
What's that all about?