VideoHelp Forum
  1. Hello
    I'm going to build a new PC, mostly for encoding anime.
    I'm torn between the i7 8700K and the Ryzen 7 1800X.

    Which one is better for software encoding with StaxRip?
    I have used GPU encoding on a GTX 1050, but the quality is bad; software encoding looks so much better at the same file size.

    Right now I have an i7 3770. It is very slow and sits at almost 100% CPU usage.


    Please advise me.

    P.S. Sorry for my bad English.
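    For reference, here is a minimal sketch of how that quality gap can be checked objectively. It assumes an ffmpeg build with both libx265 and hevc_nvenc on the PATH, and "input.mkv" is just a placeholder for a sample clip; it encodes the same source both ways at the same bitrate and reports PSNR against the original:
    Code:
    # Sketch: encode the same clip with the software encoder (libx265) and the
    # GPU encoder (hevc_nvenc) at the same bitrate, then report PSNR vs. the source.
    # Assumes an ffmpeg build with both encoders on PATH; "input.mkv" is a placeholder.
    import re
    import subprocess

    SOURCE = "input.mkv"   # hypothetical sample clip
    BITRATE = "2000k"      # same target size for both encodes

    def encode(codec, out):
        subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", codec,
                        "-b:v", BITRATE, "-an", out], check=True)

    def psnr(encoded):
        # the psnr filter prints its summary (including "average:") on stderr
        r = subprocess.run(["ffmpeg", "-i", encoded, "-i", SOURCE,
                            "-lavfi", "psnr", "-f", "null", "-"],
                           capture_output=True, text=True)
        m = re.search(r"average:([\d.]+)", r.stderr)
        return m.group(1) if m else "n/a"

    encode("libx265", "cpu.mkv")      # software / CPU encode
    encode("hevc_nvenc", "gpu.mkv")   # GPU encode (GTX 1050 NVENC)
    print("software PSNR:", psnr("cpu.mkv"))
    print("GPU PSNR:     ", psnr("gpu.mkv"))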
  2. Member (Join Date: Dec 2012, Location: Germany)
    The i7 has 256-bit AVX2 execution, while Ryzen runs AVX2 as 128-bit operations, which makes encoding faster on the i7.
    AVX2 is the reason my Athlon 845 (a 40€ CPU) can keep up with my Xeon 1230v2 (an i7 3770 without the iGPU and clocked 100 MHz lower).

    https://www.anandtech.com/show/11859/the-anandtech-coffee-lake-review-8700k-and-8400-i...ial-numbers/10
    Scroll down for the x265 / HEVC benchmark.
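    If you want to double-check what your own CPU reports, here is a small sketch; it assumes the third-party py-cpuinfo package is installed. Note that both the 8700K and the 1800X report the AVX2 flag: on Zen 1 the 256-bit operations are simply split into two 128-bit halves internally, so the flag alone tells you availability, not throughput.
    Code:
    # Sketch: list the SIMD extensions the CPU exposes.
    # Assumes the third-party "py-cpuinfo" package (pip install py-cpuinfo).
    import cpuinfo

    info = cpuinfo.get_cpu_info()
    flags = set(info.get("flags", []))
    print(info.get("brand_raw", "unknown CPU"))
    for ext in ("sse4_2", "avx", "avx2", "avx512f"):
        print(f"{ext}: {'yes' if ext in flags else 'no'}")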
    Last edited by Zero-11; 17th Mar 2018 at 13:16.
  3. I don't see how it's even a decision to consider: the 8700K, with its superior IPC and AVX2, as well as its much higher clock speeds, walks all over the 1800X:

    https://us.hardware.info/reviews/7602/8/intel-core-i7-8700k--i5-8600k--i5-8400-coffee-...-x265-and-flac

    In x265, the 8700K is even capable of beating a Threadripper 1920X!

    I have been saying this for a while: AMD's "Ryzen" CPUs are fool's gold, snake oil, a scam. They look good on paper because of the higher core counts and lower prices relative to Intel's offerings, but in reality they are not all that great.

    Like a sucker I bought a 1600X when Microcenter had them for $170 plus $30 off any supporting motherboard, but I regret not getting a Coffee Lake (because the motherboards were/are so expensive) or just buying a Kaby Lake, as I had originally planned.
  4. Originally Posted by Zero-11 View Post
    The i7 has 256-bit AVX2 execution, while Ryzen runs AVX2 as 128-bit operations, which makes encoding faster on the i7.
    AVX2 is the reason my Athlon 845 (a 40€ CPU) can keep up with my Xeon 1230v2 (an i7 3770 without the iGPU and clocked 100 MHz lower).

    https://www.anandtech.com/show/11859/the-anandtech-coffee-lake-review-8700k-and-8400-i...ial-numbers/10
    Scroll down for the x265 / HEVC benchmark.
    Thank you so much! ^_^
  5. Originally Posted by sophisticles View Post
    I don't see how it's even a decision to consider: the 8700K, with its superior IPC and AVX2, as well as its much higher clock speeds, walks all over the 1800X:

    https://us.hardware.info/reviews/7602/8/intel-core-i7-8700k--i5-8600k--i5-8400-coffee-...-x265-and-flac

    In x265, the 8700K is even capable of beating a Threadripper 1920X!

    I have been saying this for a while: AMD's "Ryzen" CPUs are fool's gold, snake oil, a scam. They look good on paper because of the higher core counts and lower prices relative to Intel's offerings, but in reality they are not all that great.

    Like a sucker I bought a 1600X when Microcenter had them for $170 plus $30 off any supporting motherboard, but I regret not getting a Coffee Lake (because the motherboards were/are so expensive) or just buying a Kaby Lake, as I had originally planned.
    I thought the i7 was for gaming and Ryzen was for work, because it has more cores. ^_^
  6. Originally Posted by nattapol View Post
    I thought the i7 was for gaming and Ryzen was for work, because it has more cores. ^_^
    This is one of those ridiculous things that keeps getting repeated over and over again. CPUs with high core counts, both AMD's and Intel's, are good if you need to do many tasks simultaneously, far more than any regular desktop user will ever normally do; for instance, when you're running a server and need to handle numerous users accessing data at the same time.

    AMD did a great job of marketing Ryzen and getting people excited with their 3D rendering, video encoding, and compiling benchmarks, but people ignore a number of truths:

    1) With 3D rendering, no one does pure CPU-based rendering anymore; they use GPUs, which largely equalizes the performance differences between CPUs.

    2) AMD's encoding benchmarks focused primarily on Handbrake. HB is nowhere near a professional piece of software and it does not support filtering. If you look at benchmarks that also include filtering, the type of workload most people actually run when encoding content, you will find that filtering tends to scale poorly with CPU core count and is best implemented on the GPU, which again minimizes the performance impact of the CPU.

    Furthermore, in the types of workloads where high core counts help, because of their nature and how long they take, no one actually sits in front of the computer and waits for them to finish; they start the job and either walk away or push it into the background and keep working. If an encode takes 45 minutes on one CPU and 65 minutes on another, in practice you will never notice the difference, because you are not sitting there watching the frames get encoded; you're busy doing other things.
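    To make the scaling point in (2) concrete, here is a rough Amdahl's-law sketch; the 25% serial fraction below is purely illustrative, not a measured number for any real filter chain:
    Code:
    # Sketch of Amdahl's law: if part of the pipeline (e.g. filtering) barely
    # parallelizes, extra cores stop paying off quickly.
    def amdahl_speedup(serial_fraction, cores):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for cores in (4, 6, 8, 12, 16):
        s = amdahl_speedup(0.25, cores)   # assume 25% of the job is effectively serial
        print(f"{cores:2d} cores -> {s:.2f}x speedup")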
  7. Member (Join Date: Jan 2007, Location: Canada)
    Originally Posted by sophisticles View Post
    Originally Posted by nattapol View Post
    I thought the i7 was for gaming and Ryzen was for work, because it has more cores. ^_^
    This is one of those ridiculous things that keeps getting repeated over and over again. CPUs with high core counts, both AMD's and Intel's, are good if you need to do many tasks simultaneously, far more than any regular desktop user will ever normally do; for instance, when you're running a server and need to handle numerous users accessing data at the same time.

    AMD did a great job of marketing Ryzen and getting people excited with their 3D rendering, video encoding, and compiling benchmarks, but people ignore a number of truths:

    1) With 3D rendering, no one does pure CPU-based rendering anymore; they use GPUs, which largely equalizes the performance differences between CPUs.

    2) AMD's encoding benchmarks focused primarily on Handbrake. HB is nowhere near a professional piece of software and it does not support filtering. If you look at benchmarks that also include filtering, the type of workload most people actually run when encoding content, you will find that filtering tends to scale poorly with CPU core count and is best implemented on the GPU, which again minimizes the performance impact of the CPU.

    Furthermore, in the types of workloads where high core counts help, because of their nature and how long they take, no one actually sits in front of the computer and waits for them to finish; they start the job and either walk away or push it into the background and keep working. If an encode takes 45 minutes on one CPU and 65 minutes on another, in practice you will never notice the difference, because you are not sitting there watching the frames get encoded; you're busy doing other things.
    @sophisticles
    I'd really appreciate it (and others would too) if you supported your claims with ACTUAL data (i.e. test results, comparisons, sample clips, settings, software, etc.).
    I've seen many "claims" like this in your posts; I've yet to see one that actually has data to back it up.

    Your claims about AMD or Intel CPUs without any proof are nothing more than one person's opinion and, in essence, no different from the marketing claims AMD made for Ryzen.

    Do some actual testing on publicly available samples and provide your results, settings, software, system specs, etc., everything needed to reproduce the tests on other machines, so people can run them themselves and draw their own conclusions.

    I, for one, would love to be able to compare my Intel Core i7 4930K to the newer AMD and Intel CPU offerings.
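    Something like the sketch below would already be a start. It assumes an ffmpeg build with libx265 on PATH, the source filename is just a placeholder for a publicly available test clip, and it records the software version, system info, and wall-clock time so a run can be reproduced on another machine:
    Code:
    # Sketch of a reproducible encode benchmark: fixed source, fixed settings,
    # and a record of the software version, system info and wall-clock time.
    import json
    import platform
    import subprocess
    import time

    SOURCE = "crowd_run_1080p50.y4m"   # placeholder for a public test clip
    CMD = ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx265",
           "-preset", "medium", "-crf", "22", "-an", "out.mkv"]

    ffmpeg_banner = subprocess.run(["ffmpeg", "-version"], capture_output=True,
                                   text=True).stdout.splitlines()[0]
    start = time.perf_counter()
    subprocess.run(CMD, check=True)
    elapsed = time.perf_counter() - start

    print(json.dumps({
        "cpu": platform.processor(),
        "os": platform.platform(),
        "ffmpeg": ffmpeg_banner,
        "command": " ".join(CMD),
        "seconds": round(elapsed, 1),
    }, indent=2))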
  8. Member (Join Date: Aug 2008, Location: Australia)
    Get the 8700K; it will beat the Ryzen, but only if you have the Intel graphics turned on, which is a BIOS setting (the setting is named differently by different manufacturers and even different motherboards). I don't know about StaxRip, but in Handbrake a lot of the choices won't show up without the Intel graphics turned on, and it is turned off by default. Just check in Device Manager: under Display Adapters you will see two entries if you also have a discrete graphics card (Intel plus NVIDIA or AMD), and only one if the Intel graphics is turned off.
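    If you prefer the command line to Device Manager, here is a quick sketch for Windows; it shells out to the wmic tool, which is deprecated on recent builds but still present on most systems:
    Code:
    # Sketch: list display adapters on Windows to see whether the Intel iGPU
    # shows up next to the discrete card. Uses the legacy "wmic" tool.
    import subprocess

    out = subprocess.run(["wmic", "path", "win32_VideoController", "get", "name"],
                         capture_output=True, text=True).stdout
    adapters = [l.strip() for l in out.splitlines() if l.strip() and l.strip() != "Name"]
    print("Display adapters:")
    for name in adapters:
        print(" -", name)   # expect Intel + NVIDIA/AMD entries if the iGPU is enabled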
    Last edited by isapc; 10th Jun 2018 at 04:05.
  9. Originally Posted by sophisticles View Post
    AMD did a great job of marketing Ryzen and getting people excited with their 3D rendering, video encoding, and compiling benchmarks, but people ignore a number of truths:

    1) With 3D rendering, no one does pure CPU-based rendering anymore; they use GPUs, which largely equalizes the performance differences between CPUs.
    There is no commercially available GPU that natively supports the common realistic 3D rendering algorithms such as ray tracing and radiosity. Some GPU acceleration APIs for ray tracing do exist, but they are at an early stage of development. Real photorealistic 3D rendering is still done on CPUs, and due to the nature of the task, well-written code should give a processing gain that scales almost linearly with the number of cores...

    https://developer.nvidia.com/optix
    https://arstechnica.com/gaming/2018/03/star-wars-demo-shows-off-just-how-great-real-ti...cing-can-look/
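    As a toy illustration of that near-linear scaling (not a real renderer, just an embarrassingly parallel per-row workload split across processes):
    Code:
    # Sketch: a render-like, embarrassingly parallel workload (independent rows)
    # split across processes; throughput scales close to linearly with core count.
    import math
    import time
    from multiprocessing import Pool, cpu_count

    def shade_row(y):
        # stand-in for per-pixel work; each row is independent of the others
        return sum(math.sin(x * 0.001) * math.cos(y * 0.001) for x in range(50_000))

    if __name__ == "__main__":
        rows = list(range(256))
        for workers in (1, cpu_count()):
            start = time.perf_counter()
            with Pool(workers) as pool:
                pool.map(shade_row, rows)
            print(f"{workers:2d} workers: {time.perf_counter() - start:.1f}s")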

    I bet that for 6000 USD you can build a decent home PC capable of performing acceptably well at a wide variety of tasks.


    Originally Posted by sophisticles View Post

    2) AMD's encoding benchmarks focused primarily on Handbrake. HB is nowhere near a professional piece of software and it does not support filtering. If you look at benchmarks that also include filtering, the type of workload most people actually run when encoding content, you will find that filtering tends to scale poorly with CPU core count and is best implemented on the GPU, which again minimizes the performance impact of the CPU.
    I'm not sure what kind of filtering you are referring to. Please be more specific: filtering where the GPU needs to perform frequent conditional branching will certainly be slower on a GPU than on a CPU (GPUs are more sensitive to context switching, and efficient GPU processing requires local, non-branching algorithms with a constant data flow, here and now, without too much decision making).
    If filtering doesn't scale with CPU cores, then it won't benefit from a GPU at all, since GPU cores are less flexible than CPU cores: if you can't deal efficiently with 32 cores, why would you suddenly be able to deal with 256?

    Originally Posted by sophisticles View Post
    Furthermore, in the types of workloads where high core counts help, because of their nature and how long they take, no one actually sits in front of the computer and waits for them to finish; they start the job and either walk away or push it into the background and keep working. If an encode takes 45 minutes on one CPU and 65 minutes on another, in practice you will never notice the difference, because you are not sitting there watching the frames get encoded; you're busy doing other things.
    True, but for home usage the CPU is used for a very wide variety of tasks; and conversely, for professional and specialized use you can always easily outperform GPUs by using ASIC and/or FPGA technology...


    And I strongly advise being very cautious with all benchmarks comparing Intel vs AMD when you don't know how the code was compiled: it is well known that the Intel compilers frequently used to produce the fastest code for intensive tasks are biased toward Intel and may produce less efficient code on AMD hardware. I would rather rely on less heavily optimized builds like these: https://www.phoronix.com/scan.php?page=article&item=ryzen-1800x-linux&num=5 - yes, today Intel may outperform AMD in Handbrake, but tomorrow we could see the opposite. I'm not sure how well x265 is tuned to the Intel architecture; there is plenty of hand-tuned assembly code in it, and in the future someone may publish a patch with code better optimized for AMD. (Privately I use Intel in all my home computers, so I'm not biased toward AMD.)

    Also, the situation may be completely different once the latest security patches are installed on the system: it is well known that AMD seems to be less affected performance-wise than Intel (though some performance loss seems to be unavoidable either way) https://www.phoronix.com/scan.php?page=article&item=spectre4-amd-initial&num=2

    I mean, don't trust benchmarks older than a month... unless you have no intention of installing the patches for the hardware vulnerabilities.
    Last edited by pandy; 10th Jun 2018 at 04:55.


