Question: Where should we do NR? In the original camera color space (first node), in DWG (middle), or in Rec709 (end)?

TLDR: Inconclusive, but I spent 3 hrs doing the tests, so I figured I would at least post my results.

VMAF was developed by Netflix as an improvement on the PSNR/SSIM video quality metrics. You feed it the original file and the 'distorted' one, and it calculates a quality score between the two, 100 being perfect, 0 being horrible. The closer the NR method can get this score to 100, the closer VMAF thinks the 'corrected' footage matches the original.

To use it I needed to generate an 'original' and a 'distorted' clip that are completely identical other than the noise. Shooting a test chart multiple times at different ISOs would be boring, and I'm not sure VMAF would consider the shots identical. So I took a real 15 s clip from my R5, shot at base ISO (800) in Clog3, and applied the Grain effect in Fusion over it for the 'noisy' sample. I tried to add a moderate amount of noise, not something you would call unusable. Everything was exported as DNxHR 444 10-bit (same codec throughout).

Running VMAF on the noisy clip (converted to 709) vs the original (converted to 709) gave a score of 35.9:

ffmpeg -i distorted.mov -i Reference.mov -lavfi libvmaf=n_threads=12:model_path="./vmaf_4k_v0.6.1.json" -f null -

I then built a simple node tree in Resolve: two CSTs, one Clog to DWG and a second DWG to 709, plus a third node for the NR, which is the one being tested. I used the following settings for the NR node, with the 35% value also swapped for 15% and 100%, for a total of 9 data points. (And a reminder: original vs noisy scored 35.9.)

As expected, with very strong NR (100%) the image is worse than with reasonable settings. The results are subtly different depending on which color space the NR is performed in. In two of the tests the best result was in DWG space; with the too-strong NR, Rec709 space was best. I believe the cause is that the NR is simply 'stronger' in DWG space and weakest in Rec709. I will continue to do my NR in DWG, as I find no evidence it is 'bad' in any of the 3 options.

Sven H wrote: "In RCM you can set up the timeline to whatever you want. Also, you can create a CST sandwich (or even simpler: change the node color space) if you want to denoise in another color space."

Interesting. When using RCM you can't really place NR in the camera space or the output space; you are restricted to putting it in the timeline space. Creating a CST sandwich to fake it into output space would defeat the purpose of using RCM, but I get your point. I wonder if there is any advantage to using NR on a node set to a different color space or gamma?

With regard to using a splitter-combiner structure for NR, I suppose you are correct, unless you wanted to apply NR to more than one color channel with different settings.

A CST sandwich is practically the same thing as setting the node color space, just more complicated but more flexible. I haven't tried this myself; I think it was Marc that mentioned it in some earlier discussion. Also, I disagree that it defeats the purpose of color management, since you can go into the native camera color space.
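For repeating this test across many renders, it can be handy to have libvmaf write a JSON log (log_fmt=json:log_path=...) and average the per-frame scores yourself rather than scraping ffmpeg's console output. A minimal sketch, assuming the v2-style libvmaf log layout with a "frames" list and per-frame "metrics" (the exact schema varies between libvmaf versions, and the three-frame sample log here is made up for illustration):

```python
import json
from statistics import mean

def mean_vmaf(log_text: str) -> float:
    """Average the per-frame VMAF values from a libvmaf JSON log."""
    log = json.loads(log_text)
    return mean(f["metrics"]["vmaf"] for f in log["frames"])

# Tiny stand-in log (three frames) so the sketch runs without ffmpeg:
sample = json.dumps({
    "frames": [
        {"frameNum": 0, "metrics": {"vmaf": 35.2}},
        {"frameNum": 1, "metrics": {"vmaf": 36.1}},
        {"frameNum": 2, "metrics": {"vmaf": 36.4}},
    ]
})
print(round(mean_vmaf(sample), 1))  # arithmetic mean of the three frames
```

Note that newer libvmaf builds also report a pooled harmonic mean, which penalizes clips with a few very bad frames more than the plain average does.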