We’ve made several improvements to Topaz Video AI since the January roadmap update, primarily focused on frame interpolation (FPS conversion / slow-mo) and Stabilization.

Frame interpolation can replace duplicate frames

Topaz Video AI can now detect and replace duplicate frames before running any frame interpolation model. Duplicate frames confuse the AI model and often lead to choppy output, so you can now instruct Topaz Video AI to replace them with model output. You can see the difference between running Chronos Fast on footage in v3.1.1 vs v3.1.5 with Replace Duplicate Frames enabled. Previously, you would have to remove these duplicate frames outside the app; this change makes it more convenient to get smooth results from videos with duplicate frames.

The Apollo model also now creates sharper interpolated frames, particularly on footage with lots of motion. While the difference can be subtle, here’s a screenshot from a clip comparing the updated Apollo in v3.1.5 with v3.1.1. We’ll keep improving the sharpness of interpolated frames, and we have several early candidate models that show promising results.

Stabilization handles longer videos with scene changes

Previously, the Stabilization model occasionally created completely blurred frames for longer videos with scene changes or heavy forward/backward motion. We now handle many of these edge cases, so you should see more consistent results with Stabilization on longer videos. Here is an example of the same frame from a tricky video in v3.1.1 vs v3.1.5.

We recognize the impact faster processing times have on your workflow, so we’ll continue to improve performance for various backends in future releases. We’ve also fixed various other issues like slow playback on Mac, parameter issues on various models, and exporting to network drives. You can see a full list of improvements by reading the release notes for Topaz Video AI v3.1.5, v3.1.4, v3.1.2, and v3.1.1.

Our two core focus areas remain unchanged this month:

- Video quality models for upscaling and denoising. While true face restoration for video is currently infeasible, we’re exploring options to improve how natural faces appear when upscaled.
- Smoother in-app preview and player experience, particularly with scrub performance and before/after sync.

We’re also continuing to work on several other areas:

- Frame interpolation (FPS conversion and slow-mo) models and experience.
- Processing speeds for Nvidia and AMD GPUs.
- Exploring the feasibility of crash recovery and pause/resume functionality.
- Exploring reduced file sizes on some encoders and containers.
- Other quality-of-life improvements like the trim experience and batch-downloading models.

You may have noticed that some features take longer than others, particularly model development and major app structural changes. There might not be release activity every week on every feature, but please rest assured that we are working on our priorities as stated in the roadmap!

As always, we’re looking forward to hearing your feedback.

---

So it seems like there is some real-world evidence for this argument, although I feel like 12GB is probably closer to the real answer, and I thought as much when that story came out. While I wouldn’t recommend 8GB even to my grandfather, it’s still interesting that there is something real and tangible to this whole story, as evidenced above. Memory compression goes a long way, and having unified memory can really help out apps like this that can be so GPU-intensive doing AI work.
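To make the duplicate-frame idea concrete, here is a minimal sketch of what "detect and replace duplicate frames" could look like in principle. This is not Topaz Video AI’s actual algorithm (which replaces duplicates with frame-interpolation model output); the detection threshold and the simple neighbor-blend stand-in for the model are assumptions for illustration only.

```python
# Hypothetical sketch: detect near-identical consecutive frames and replace
# the duplicates with a blend of the surrounding frames. A real pipeline
# would substitute model-generated interpolated frames for the blend.

def mean_abs_diff(a, b):
    """Average absolute per-pixel difference between two frames (flat lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def replace_duplicate_frames(frames, threshold=1.0):
    """Return a copy of `frames` where any frame that nearly duplicates its
    predecessor is replaced by the average of its distinct neighbors.
    `threshold` is an assumed tunable sensitivity, not a real product setting."""
    out = [list(f) for f in frames]
    for i in range(1, len(frames) - 1):
        if mean_abs_diff(frames[i], frames[i - 1]) < threshold:
            prev, nxt = out[i - 1], frames[i + 1]
            out[i] = [(p + n) / 2 for p, n in zip(prev, nxt)]
    return out

# Tiny example with 1-pixel "frames": frame 1 duplicates frame 0,
# so it gets replaced with a blend of frames 0 and 2.
frames = [[0.0], [0.0], [10.0], [20.0]]
print(replace_duplicate_frames(frames))  # -> [[0.0], [5.0], [10.0], [20.0]]
```

Replacing (rather than simply dropping) the duplicates keeps the frame count and timing intact, which is why choppy output from duplicated frames smooths out after interpolation.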