Himechi | 43 points
For newcomers to encoding, the most frustrating things to be told about choosing settings are "it depends" and "one setting takes longer than another but looks better". When you have a wide range of settings to choose from, every choice seems like a bad one. So I thought I'd do a quick demonstration to help shed at least a little light.
For this demo I used an AMD FX-8320 8-core CPU at 3.5GHz. Mobo was a Gigabyte 990FXA-UD5. Not top-of-the-line but far from a piece of junk. The movie was encoded straight off the Blu-ray disc instead of from an HDD or SSD, which may have extended the time it took somewhat. I did not use GPU encoding.
I re-encoded Into The Blue: 1h50m long, AC3 passthru for the audio, and I kept one subtitle track. RF was set to 22. The only difference between the jobs was the Encoder Preset under the Video tab.
The first job was set to Fast and took 6 hours. File size: 3.27GB.
The second job was set to Slow and took 24 hours. File size: 2.69GB.
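For anyone who'd rather reproduce this from the command line, the two jobs would look roughly like this with HandBrakeCLI (a sketch only; I used the GUI, the paths are placeholders, and I'm assuming the default x264 encoder):

```
# Hypothetical HandBrakeCLI equivalents of the two GUI jobs (paths are placeholders).
# Job 1: Fast preset
HandBrakeCLI -i /path/to/bluray -o into_the_blue_fast.mkv \
  -e x264 -q 22 --encoder-preset fast \
  --aencoder copy:ac3 --subtitle 1

# Job 2: Slow preset -- the only change is --encoder-preset
HandBrakeCLI -i /path/to/bluray -o into_the_blue_slow.mkv \
  -e x264 -q 22 --encoder-preset slow \
  --aencoder copy:ac3 --subtitle 1
```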
MediaInfo for both files. Note that the Slow job reports as a Variable framerate. It was set to Constant and I don't know why it's being reported differently. (u/GoslingIchi has explained this below.) No biggie.
So there it is. Is maxing out my CPU for an extra 18 hours worth saving 700MB? I don't believe so. Your mileage may vary. Pardon me, I mean "it depends." ;) If someone out there would consider doing a similar project for GPU encoding and posting the results, I think that would be very helpful.
[edit] u/dingdaggity messaged me to point out that the file names in the picture were wrong. They've been corrected with a new image.
[-] GoslingIchi | 4 points
Constant versus Variable frame rate is a well-known issue with MediaInfo and Handbrake. Handbrake uses a constant clock, but it's not a standard clock, since Handbrake supports a much wider mix of audio and video frame rates than just about anyone else.
Also, the file name says fast, but you labeled it as slow, and the other file name says slow, but you labeled it as fast.
Finally, encoding logs are usually more helpful than MediaInfo reports.
Thank you for the Variable explanation. I've fixed the image labels. And I figured that file sizes and bitrates would be of the most interest to newbs; the rest of the MediaInfo content is there to confirm (some of) my settings.
[-] GoslingIchi | 2 points
I'm famous!
Here's a better link to describe what is going on with MediaInfo and Handbrake - https://forum.handbrake.fr/viewtopic.php?p=159947#p159947
Rodeo posted a much more detailed explanation a few years ago, but that post seems to sum it up.
[-] neckbeardgamers | 4 points
There is a big time discrepancy, 6 hours versus 24, but if hundreds or even thousands of people download the file, it arguably saves much more total time and disk space for the one encoder to spend the extra CPU cycles than for the much larger pool of downloaders to each deal with the bigger file.
[-] tiiiiimmmm | 6 points
I agree, 700 MB is nothing to scoff at
If I were King of the Internet watching every byte of traffic, sure. If I'm Joe Public on a low income who wants to use his resources to share certain files that he loves, I'd probably be okay with making other people wait the extra few minutes to get their download. 18 hours is a long time and a lot of electricity.
[-] tiiiiimmmm | 7 points
Has more to do with storage than bandwidth
[-] FlimtotheFlam | 3 points
I use StaxRip and a GTX 960 to do HEVC encodes. It is so much faster than any other way to encode videos.
Okay, but what does "so much faster" mean when it's apples to apples? Would you do a GPU encode of a similar-length movie using the same settings and post the results?
[-] FlimtotheFlam | 3 points
Here is an uncompressed Blu-ray snapshot of the movie 2 Guns, and here is the HEVC H.265 8-bit encoded version with a 5,000 kbps VBR bitrate.
I encoded the DTS down to AC3 at 448 kbps. Went from 27GB to 4.15GB and was finished in 30 mins.
Here is the media info for the two files. BluRay vs HEVC
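A rough ffmpeg equivalent of a job like that, for anyone without StaxRip (a sketch only, assuming NVENC and that the 5000/448 figures are kbps; StaxRip's actual pipeline may differ, and filenames are placeholders):

```
# Sketch of a comparable GPU encode using ffmpeg's NVENC encoder
# (assumes the 5000 / 448 figures are kbps; filenames are placeholders).
ffmpeg -i 2guns_remux.mkv \
  -c:v hevc_nvenc -rc vbr -b:v 5000k \
  -c:a ac3 -b:a 448k \
  2guns_hevc.mkv
```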
Wow.
[edit] I was expecting that a beefy card would be needed to get nice, low times with this. Something like starting with a 1060. But if a 960 is doing this work in 30 minutes? Sign me up.
[-] FlimtotheFlam | 3 points
I think the 1060 and up can do 10 bit but I am not entirely sure.
With an 8-bit colour source, is there a reason one would encode into 10-bit?
https://gist.github.com/l4n9th4n9/4459997
This brings us to the real advantage of higher bit depths: We can save bandwidth even if the source only uses 8 bits per channel.
That’s right: Not only do we no longer need to hardcode any dithering, but higher bit depth also means higher error tolerance. Losing one bit of information in an 8-bit color space is equivalent to losing three bits in a 10-bit color space, and thus the same quality can be achieved with less bitrate. Want an example? One of my first tests was encoding episode 13 of Shakugan no Shana from a DVD source, with dithering added to prevent banding. I used the exact same input and settings for both encodes.
The video track of the 8-bit encode has 275 MiB, while the 10-bit encode has no more than 152 MiB and doesn’t look worse at all -- in fact, it even looks better than the much larger 8-bit encode.
Now, if I hadn’t hardcoded the dithering for the 10-bit encode and instead passed a high-bit-depth picture to x264, it would’ve resulted in even better perceived quality and an even smaller file size!
http://x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf
So why does an AVC/H.264 10-bit encoder perform better than 8-bit? When encoding with the 10-bit tool, the compression process is performed with at least 10-bit accuracy, compared to only 8-bit otherwise. So there are fewer truncation errors, especially in the motion compensation stage, increasing the efficiency of the compression tools. As a consequence, there is less need to quantize to achieve a given bit-rate. The net result is better quality for the same bit-rate, or conversely less bit-rate for the same quality: between 5% and 20% on typical sources.
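In practice, going from an 8-bit source to a 10-bit encode is just a pixel-format switch. A minimal ffmpeg sketch (assuming a libx265 build with 10-bit support; filenames are placeholders):

```
# Encode an 8-bit source at 10-bit depth with x265
# (requires an ffmpeg/libx265 build with 10-bit support; filenames are placeholders).
ffmpeg -i source_8bit.mkv \
  -c:v libx265 -pix_fmt yuv420p10le -crf 22 -preset slow \
  -c:a copy \
  output_10bit.mkv
```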
[-] douchebanner | 4 points
You can also set up what is essentially a render farm to break up the encoding work amongst many systems and get it done faster, instead of having one PC maxed out. It only really works if you have access to multiple PCs on the same network.
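A crude way to sketch that chunked approach with stock ffmpeg (a sketch only; segment boundaries snap to keyframes, and a real farm would use a proper job scheduler):

```
# Split losslessly into ~10-minute chunks (boundaries snap to keyframes).
ffmpeg -i movie.mkv -c copy -f segment -segment_time 600 chunk%03d.mkv

# ...each machine encodes its own chunk(s)...
ffmpeg -i chunk000.mkv -c:v libx265 -crf 22 -preset slow -c:a copy enc000.mkv

# Rejoin with the concat demuxer (list.txt holds one "file 'encNNN.mkv'" line per chunk).
ffmpeg -f concat -safe 0 -i list.txt -c copy final.mkv
```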
I wonder how newer CPUs like Ryzen and Threadripper perform.
I found this
http://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700/20
It's really interesting: for HEVC the 1700X was 2 fps slower than the 6900K (mind you, there's a $690 difference between the two CPUs!)
Like this: http://images.anandtech.com/graphs/graph11170/85887.png
OP has an 8320 (like me), which is on par with an i5 or a low-end i7. So, say his first encode took 8 hours (he should have used RF 15-18 instead of 22, and ultrafast, but hey... whatever).
Example math, not actual:
8 hours at 970 fps = 970 × 3,600 × 8 = 27,936,000 frames. Ryzen at 1,345 fps = 4,842,000 frames per hour, so 27,936,000 / 4,842,000 ≈ 5.77 hours, or about 28% faster.
Neat. Wonder how much better Threadripper would be. Also, does Handbrake scale with cores/threads?
I was just about to post the Threadripper results (http://images.anandtech.com/graphs/graph11697/90034.png), but they tested at 4K so I had to do more math. Basically, at 4K, OP and I are looking at roughly 5 FPS (from personal experience); so:
27,936,000 / 5 = 5,587,200s = 1,552 hours
Ryzen 7: 24 FPS = 323 hours. Ryzen TR: 41.3 FPS = 187 hours.
So, Ryzen cuts the time by roughly 80% compared to mine and OP's chip, and Threadripper cuts it by another ~44% compared to Ryzen 7.
Edit: you do realize that "Ryzen" and "Threadripper" are the same architecture, right? Just different models. Intel's new stuff is the Skylake-X i9 line, and it outperforms all the new AMD stuff.
That's pretty decent. Yes, I know the difference. AMD did a great job with Threadripper: more cores/threads for your $, ECC support, more PCIe lanes. It's more of a productivity CPU but still no slouch in gaming, and all of that will get better as software is optimized to take advantage of those cores/threads. Also, frankly, I'm sick of Intel not having competition to light a fire under their ass. They're finally responding with the i7-8700K, which will help push those multi-core software optimizations along.
Is there any difference between using CPU vs GPU for encoding?
[-] tiiiiimmmm | 4 points
From another post, CPU=software encoding GPU=hardware encoding ( https://redd.it/6xee7f ):
That's the difference between hardware- and software-based encoding. Without getting too technical, the newer hardware has actual instruction blocks built into it, specially made to handle the operations required for x265 encoding/decoding. This makes the newer hardware significantly more efficient, and it's the primary reason hardware encoding is faster and why new CPUs with the same frequency (i.e. the same GHz) as old CPUs can decode x265 properly while the old ones cannot.
The downside is that while GPUs have a ton of resources available to throw at encoding, and therefore create files fast, the hardware architecture of a GPU is fundamentally different from a CPU's, and it's really bad at performing the type of math a CPU does. Because of this, the algorithm that encodes x265 in hardware is actually a bit different from what you see in software-based encoding (like Handbrake or FFmpeg), and GPUs just can't search the same probability space as a CPU, which is required to find the optimal compression.
Because of this, x265 hardware encodes tend to be about the size of an equivalent-quality software x264 encode, but with lightning-fast encoding time.
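In ffmpeg terms the difference is literally just which encoder you pick. A rough side-by-side sketch (filenames and quality values are placeholders, and NVENC's -cq scale isn't directly comparable to x265's CRF):

```
# Software (CPU) HEVC encode: slow, best compression per bit.
ffmpeg -i in.mkv -c:v libx265 -crf 22 -preset medium -c:a copy out_sw.mkv

# Hardware (GPU) HEVC encode via NVENC: much faster, bigger file at similar quality.
# (-cq is NVENC's constant-quality knob; its scale is not the same as CRF.)
ffmpeg -i in.mkv -c:v hevc_nvenc -rc vbr -cq 22 -b:v 0 -c:a copy out_hw.mkv
```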
Nice explanation. But what about playback of the output? Is there any significant difference? Say, with a GPU-encoded file, does the output require more processing power to play?
I've tested encoding a movie or two; the time taken using the CPU is way too long to justify the size reduction. But with the GPU, I'd say I'm pretty impressed.
[-] tiiiiimmmm | 2 points
You won't see a difference in playback performance; it's just that your quality-to-size ratio will be about the same as an x264 CPU encode, whereas an x265 CPU encode of the same file will be ~40%-60% smaller at the same quality, with a theoretical limit of 75% compression over x264.
I've tested encoding a movie or two; the time taken using the CPU is way too long to justify the size reduction.
Meanwhile, in another thread here, some feel that encoders are obliged to do lengthy encodes in order to shave file sizes by roughly 18% (going by the results in my OP). It seems that no matter what choices we make about our work, we're disappointing somebody.
[-] [deleted] | 2 points
I made my first 4K rip as a test: 17 hours of encoding at Slow, RF 21, constant framerate, on an Intel Xeon E3-1271 v3 with 64GB DDR3 ECC and an SSD. Input was a 70GB remux. Insane... Output is a 1080p ~6GB MKV (final version with Atmos, lossless TrueHD 7.1 passthru). I'm staring at a 42" TV (4" distance) trying to see the defects, and I don't see them. It looks the same as 4K, and it is not at all 4K. x265 10-bit in Handbrake creates the illusion of much higher precision and color accuracy from a 4K source than from an HDTV source. The best option is to start with MKVToolNix: break the remux source into video and audio separately (an .mkv with the MPEG video, and an .mka with the Atmos audio, subtitles, and metadata), then encode only the video in Handbrake, and finally remux the x265 10-bit video back together (.mkv + .mka) with MKVToolNix. As a.k.a. Tigole does. 😏
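That workflow in command-line form would look roughly like this (a sketch only, using mkvmerge from MKVToolNix; filenames are placeholders):

```
# Sketch of the split/encode/remux workflow described above (filenames are placeholders).
# 1. Split the remux: video-only MKV, plus everything else in an MKA.
mkvmerge -o video_only.mkv --no-audio --no-subtitles remux.mkv
mkvmerge -o audio_and_subs.mka --no-video remux.mkv

# 2. Encode only the video in HandBrake (x265 10-bit), audio and subs disabled.

# 3. Remux the encoded video back together with the untouched audio/subs.
mkvmerge -o final.mkv encoded_video.mkv audio_and_subs.mka
```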
[-] [deleted] | 1 point
A LOT has changed in the last year on the HEVC bandwagon. You need to make sure you've downloaded and installed a HandBrake released after December 2016, up to the latest version, 1.0.7 (HandBrake 1.0.7 uses x265 version 2.1; it must be this version).
https://github.com/HandBrake/HandBrake/releases
However, compatibility is going to be a problem for many. There still aren't many H.265 hardware decoders, and CPU (software) decoding is too intensive for most mobile devices.
[-] tiiiiimmmm | 10 points | Sep 01 2017 23:07:17
Use medium instead; it will give you almost the same file size as slow with half the encode time.
[-] paTEoriginal | 1 point | Sep 02 2017 16:23:01
"it depends"
[-] tiiiiimmmm | 1 point | Sep 02 2017 16:27:49
No, it doesn't. The differences between medium and slow are extremely minimal, and you should really only use fast, medium, or very slow. I might be able to find some documents about this; it has to do with the probability space of possible compression paths. If I do find an article about it, it will read like a math PhD thesis.