* Fix various issues with texture sync
A variable called _actionRegistered is used to keep track of whether a tracking action has been registered for a given texture group handle. This variable is set when the action is registered, and should be unset when it is consumed. This is used to skip registering the tracking action if it's already registered, saving some time for render targets that are modified very often.
There were two issues with this. The worst was that the tracking action handler exits early if the handle's modified flag is false... which means it never reset _actionRegistered, as that was done within the Sync() method called later. The second issue was that this variable was only set to true after the sync action was registered, so it was technically possible for the action to run immediately and set the flag to false, only for the registration code to then set it back to true.
Both situations would lead to the action never being registered again, as the texture group handle would be sure the action is already registered. This breaks the texture for the remaining runtime, or until it is disposed.
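For reference, a minimal sketch of the flag lifecycle the fix aims for, using a simplified hypothetical type (the real logic lives in the texture group handle). The key points are consuming the flag even on the early-exit path, and setting it before registration using an atomic exchange (see the Exchange commit below):

```csharp
using System;
using System.Threading;

// Simplified sketch, not the actual TextureGroupHandle implementation.
class TrackingFlagSketch
{
    private int _actionRegistered; // 1 = a tracking action is pending, 0 = none

    public void RegisterAction(Action register)
    {
        // Set the flag *before* registering, so an action that fires
        // immediately cannot have its reset overwritten back to "registered".
        if (Interlocked.Exchange(ref _actionRegistered, 1) == 0)
        {
            register();
        }
    }

    public void OnTrackingAction(bool modified)
    {
        // Consume the flag unconditionally, even when taking the early-exit
        // path for an unmodified handle; otherwise the action is never
        // registered again and the texture stops syncing.
        Interlocked.Exchange(ref _actionRegistered, 0);

        if (!modified)
        {
            return;
        }

        // Sync() work would go here.
    }
}
```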
It was also possible for a texture to register sync once, then on future frames the last modified sync number did not update. This may have caused some more minor issues.
Seems to fix the Xenoblade flashing bug. Obviously this needs a lot of testing, since it was random chance. I typically had the most luck getting it to happen by switching time of day on the event theatre screen for a while, then entering the equipment screen by pressing X on an event.
May also fix weird things like random chance air swimming in BOTW, maybe a few texture streaming bugs.
* Exchange rather than CompareExchange
* New shader cache implementation
* Remove some debug code
* Take transform feedback varying count into account
* Create shader cache directory if it does not exist + fragment output map related fixes
* Remove debug code
* Only check texture descriptors if the constant buffer is bound
* Also check CPU VA on GetSpanMapped
* Remove more unused code and move cache related code
* XML docs + remove more unused methods
* Better codegen for TransformFeedbackDescriptor.AsSpan
* Support migration from old cache format, remove more unused code
Shader cache rebuild now also rewrites the shared toc and data files
* Fix migration error with BRX shaders
* Add a limit to the async translation queue
Avoid async translation threads not being able to keep up and the queue growing very large
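Something along these lines (hypothetical names; whether the caller blocks or translates synchronously when the queue is full is an assumption here):

```csharp
using System.Collections.Concurrent;

// Sketch of a bounded work queue for async shader translation.
class BoundedTranslationQueueSketch<T>
{
    private readonly ConcurrentQueue<T> _queue = new();
    private readonly int _maxLength;

    public BoundedTranslationQueueSketch(int maxLength) => _maxLength = maxLength;

    public bool TryEnqueue(T workItem)
    {
        // Refuse the enqueue once the backlog reaches the limit, so the queue
        // cannot grow without bound; the caller handles the item itself.
        if (_queue.Count >= _maxLength)
        {
            return false;
        }

        _queue.Enqueue(workItem);
        return true;
    }

    public bool TryDequeue(out T workItem) => _queue.TryDequeue(out workItem);
}
```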
* Re-create specialization state on recompile
This might be required if a new version of the shader translator requires more or less state, or if there is a bug related to the GPU state access
* Make shader cache more error resilient
* Add some missing XML docs and move GpuAccessor docs to the interface/use inheritdoc
* Address early PR feedback
* Fix rebase
* Remove IRenderer.CompileShader and IShader interface, replace with new ShaderSource struct passed to CreateProgram directly
* Handle some missing exceptions
* Make shader cache purge delete both old and new shader caches
* Register textures on new specialization state
* Translate and compile shaders in forward order (eliminates diffs due to different binding numbers)
* Limit in-flight shader compilation to the maximum number of compilation threads
* Replace ParallelDiskCacheLoader state changed event with a callback function
* Better handling for invalid constant buffer 1 data length
* Do not create the old cache directory structure if the old cache does not exist
* Constant buffer use should be per-stage. This change will invalidate existing new caches (file format version was incremented)
* Replace rectangle texture with just coordinate normalization
* Skip incompatible shaders that are missing texture information, instead of crashing
This is required if we, for example, add support for a new texture instruction in the shader translator, which then allows access to textures that were not accessed before. In this scenario, the old cache entry is no longer usable
* Fix coordinates normalization on cubemap textures
* Check if title ID is null before combining shader cache path
* More robust constant buffer address validation on spec state
* More robust constant buffer address validation on spec state (2)
* Regenerate shader cache with one stream, rather than one per shader.
* Only create shader cache directory during initialization
* Logging improvements
* Proper shader program disposal
* PR feedback, and add a comment on serialized structs
* XML docs for RegisterTexture
Co-authored-by: riperiperi <rhy3756547@hotmail.com>
* De-tile GOB when DMA copying from block linear to pitch kind memory regions
* XML docs + nits
* Remove using
* No flush for regular buffer copies
* Add back ulong casts, fix regression due to oversight
* Allow textures to have their data partially mapped
* Explicitly check for invalid memory ranges on the MultiRangeList
* Update GetWritableRegion to also support unmapped ranges
* Collapse AsSpan().Slice(..) calls into AsSpan(..)
Less code and a bit faster
* Collapse an Array.Clear(array, 0, array.Length) call to Array.Clear(array)
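For illustration, the two call shapes collapsed by the last two commits (generic example, not the actual call sites):

```csharp
using System;

class SpanSimplificationExample
{
    static void Example(byte[] data, int offset, int length)
    {
        // Before: chained call that creates an intermediate full-length span.
        Span<byte> before = data.AsSpan().Slice(offset, length);

        // After: equivalent, shorter, and a bit faster.
        Span<byte> after = data.AsSpan(offset, length);

        // Before: explicit range covering the whole array.
        Array.Clear(data, 0, data.Length);

        // After (.NET 6+): whole-array overload.
        Array.Clear(data);
    }
}
```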
* Do not allow render targets not explicitly written by the fragment shader to be modified
* Shader cache version bump
* Remove blank lines
* Avoid redundant color mask updates
* HostShaderCacheEntry can be null
* Avoid more redundant glColorMask calls
* nit: Mask -> Masks
* Fix currentComponentMask
* More efficient way to update _currentComponentMasks
* Add timestamp to 16-byte semaphore releases.
BOTW was reading a ulong 8 bytes after a semaphore return. It turns out this is the timestamp it was trying to use for performance calculations, so it is now written when necessary.
This mode was also added to the DMA semaphore I added recently, as it is required by a few games (I think Quake?).
The timestamp code has been moved to GPU context. Check other games with an unusually low framerate cap or dynamic resolution to see if they have improved.
* Cast dma semaphore payload to ulong to fill the space
* Write timestamp first
Might just be worrying too much, but we don't want the application reading the timestamp if it sees the payload before the timestamp is written.
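A minimal sketch of the intended layout and write order, assuming a hypothetical helper (the real code writes through the GPU memory manager):

```csharp
using System;
using System.Buffers.Binary;

// Illustrative sketch of the 16-byte semaphore release.
static class SemaphoreReleaseSketch
{
    public static void Release(Span<byte> target, uint payload, ulong gpuTimestamp)
    {
        // Timestamp goes at +8 and is written first, so the guest can never
        // observe the payload without a valid timestamp next to it.
        BinaryPrimitives.WriteUInt64LittleEndian(target.Slice(8, 8), gpuTimestamp);

        // The 32-bit payload is widened to a ulong to fill the full structure.
        BinaryPrimitives.WriteUInt64LittleEndian(target.Slice(0, 8), payload);
    }
}
```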
This fixes an issue where the render scale array would not be updated when technically the scales on the flat array were the same, but the start index for the vertex scales was different.
* Add support for BC1/2/3 decompression (for 3D textures)
* Optimize and clean up
* Unsafe not needed here
* Fix alpha value interpolation when a0 <= a1
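For reference, a sketch of the BC3-style alpha palette rules from the BCn format spec; the a0 <= a1 branch is the one being fixed here. This is not the decoder's exact code:

```csharp
static class Bc3AlphaSketch
{
    public static byte[] BuildPalette(byte a0, byte a1)
    {
        byte[] alpha = new byte[8];
        alpha[0] = a0;
        alpha[1] = a1;

        if (a0 > a1)
        {
            // Six interpolated values between a0 and a1.
            for (int i = 1; i <= 6; i++)
            {
                alpha[1 + i] = (byte)(((7 - i) * a0 + i * a1) / 7);
            }
        }
        else
        {
            // Four interpolated values, then the constants 0 and 255.
            for (int i = 1; i <= 4; i++)
            {
                alpha[1 + i] = (byte)(((5 - i) * a0 + i * a1) / 5);
            }

            alpha[6] = 0;
            alpha[7] = 255;
        }

        return alpha;
    }
}
```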
* Stop using glTransformFeedbackVarying and use explicit layout on the shader
* This is no longer needed
* Shader cache version bump
* Fix gl_PerVertex output for tessellation control shaders
This fixes some regressions from #2971, which caused rendered 3D texture data to be lost for most slices. Fixes issues with Xenoblade 2's colour grading, and probably a ton of other games.
This also removes the check from TextureCache, making it the tiniest bit smaller (any win is a win here).
* Implement IMUL shader instruction
* Implement PCNT/CONT instruction and fix FFMA32I
* Add HFMA232I to the table
* Shader cache version bump
* No Rc on Ffma32i
* Initial test for texture sync
* WIP new texture flushing setup
* Improve rules for incompatible overlaps
Fixes a lot of issues with Unreal Engine games. There are still a few minor issues (some caused by the DMA fast path?). Needs docs and cleanup.
* Cleanup, improvements
Improve rules for fast DMA
* Small tweak to group together flushes of overlapping handles.
* Fixes, flush overlapping texture data for ASTC and BC4/5 compressed textures.
Fixes the new Life is Strange game.
* Flush overlaps before init data, fix 3d texture size/overlap stuff
* Fix 3D Textures, faster single layer flush
Note: nosy people can no longer merge this with Vulkan. (unless they are nosy enough to implement the new backend methods)
* Remove unused method
* Minor cleanup
* More cleanup
* Use the More Fun and Hopefully No Driver Bugs method for getting compressed tex too
This one's for Metro
* Address feedback, ASTC+ETC to FormatClass
* Change offset to use Span slice rather than IntPtr Add
* Fix this too
* Add support for render scale to vertex stage.
Occasionally games read off textureSize on the vertex stage to inform the fragment shader what size a texture is without querying in there. Scales were not present in the vertex shader to correct the sizes, so games were providing the raw upscaled texture size to the fragment shader, which was incorrect.
One downside is that the fragment and vertex support buffer description must be identical, so the full size scales array must be defined when used. I don't think this will have an impact though. Another is that the fragment texture count must be updated when vertex shader textures are used. I'd like to correct this so that the update is folded into the update for the scales.
Also cleans up a bunch of things, like it making no sense to call CommitRenderScale for each stage.
Fixes render scale causing a weird offset bloom in Super Mario Party and Clubhouse Games. Clubhouse Games still has a pixelated look in a number of its games due to something else it does in the shader.
* Split out support buffer update, lazy updates.
* Commit support buffer before compute dispatch
* Remove unnecessary qualifier.
* Address Feedback
* Flip scissor box when the YNegate bit is set
* Flip scissor based on screen scissor state, account for negative scissor Y
* No need for abs when we already know the value is negative
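Roughly, the transform being applied (hypothetical helper, not the actual backend code):

```csharp
// With YNegate set, the screen scissor is specified top-down and must be
// flipped into GL's bottom-up convention using the render target height.
// Once the Y value is known to be negative, no Math.Abs call is needed.
static class ScissorFlipSketch
{
    public static (int Y, int Height) Flip(int y, int height, int renderTargetHeight, bool yNegate)
    {
        if (!yNegate)
        {
            return (y, height);
        }

        // Flip: the box's bottom edge measured from the bottom of the target.
        return (renderTargetHeight - (y + height), height);
    }
}
```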
Rather than calculating this for every sampler, this PR calculates whether a texture can force anisotropy when its info is set, and exposes the value via a public boolean.
This should help texture/sampler-heavy games when anisotropic filtering is not Auto, like UE4 ones (or so I hear?). There is another cost where samplers are created twice when anisotropic filtering is enabled, but I'm not sure how relevant this one is.
* infra: Migrate to .NET 6
* Rollback version naming change
* Workaround .NET 6 ZipArchive API issues
* ci: Switch to VS 2022 for AppVeyor
CI is now ready for .NET 6
* Suppress WebClient warning in DoUpdateWithMultipleThreads
* Attempt to workaround System.Drawing.Common changes on 6.0.0
* Change keyboard rendering from System.Drawing to ImageSharp
* Make the software keyboard renderer multithreaded
* Bump ImageSharp version to 1.0.4 to fix a bug in Image.Load
* Add fallback fonts to the keyboard renderer
* Fix warnings
* Address caian's comment
* Clean up Linux workaround as it's unneeded now
* Update readme
Co-authored-by: Caian Benedicto <caianbene@gmail.com>
* Limit Custom Anisotropic Filtering to only fully mipmapped textures
There's a major flaw with the anisotropic filtering setting that causes @GamerzHell9137 to report graphical bugs that otherwise wouldn't be there, because he just won't set it to Auto. This should fix those issues, hopefully.
These bugs are generally because anisotropic filtering is enabled on something that it shouldn't be, such as a post process filter or some data texture. This PR maintains two host samplers when custom AF is enabled, and only uses the forced AF one when the texture is 2D and fully mipmapped (goes down to 1x1). This is because game textures are the ideal target for this filtering, and they are typically fully mipmapped, unlike screen render targets, which usually have one or just a few levels.
This also only enables AF on mipmapped samplers where the filtering is bilinear or trilinear. This should be self-explanatory.
This PR also allows the changing of Anisotropic Filtering at runtime, and you can immediately see the changes. All samplers are flushed from the cache if the setting changes, causing them to be recreated with the new custom AF value. This brings it in line with our resolution scale. 😌
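A sketch of the eligibility check described above, with assumed names; the next commit relaxes the fully-mipmapped requirement for large textures:

```csharp
using System;

// Illustrative predicate for when the forced-AF sampler variant is used:
// a 2D texture mipmapped down to 1x1, sampled with a mipmapped
// bilinear/trilinear filter. Names are assumptions, not the real code.
static class ForcedAnisotropySketch
{
    public static bool CanForceAnisotropy(
        bool isTexture2D,
        int levels,
        int width,
        int height,
        bool samplerUsesMipmappedLinearFilter)
    {
        // Number of levels needed to reach 1x1 from the largest dimension.
        int fullLevels = 1 + (int)Math.Floor(Math.Log2(Math.Max(width, height)));

        return isTexture2D && samplerUsesMipmappedLinearFilter && levels >= fullLevels;
    }
}
```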
* Expected minimum mip count for large textures rather than all, address feedback
* Use Target rather than Info.Target
* Retrigger build?
* Fix rebase
* Implement DrawTexture functionality
* Non-NVIDIA support
* Disable some features that should not affect draw texture (slow path)
* Remove space from shader source
* Match 2D engine names
* Fix resolution scale and add missing XML docs
* Disable transform feedback for draw texture fallback
* Support shader gl_Color, gl_SecondaryColor and gl_TexCoord built-ins
* Shader cache version bump
* Fix back color value on fragment shader
* Disable IPA multiplication for fixed function attributes and back color selection
* Support coherent images
* Add support for fragment shader interlock
* Change to tree based match approach
* Refactor + check for branch targets and external registers
* Make detection more robust
* Use Intel fragment shader ordering if interlock is not available, use nothing if both are not available
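The fallback order, sketched with hypothetical capability flags (the real backend checks the GL extension strings):

```csharp
enum FragmentOrderingMode
{
    Interlock,       // GL_ARB_fragment_shader_interlock
    IntelOrdering,   // GL_INTEL_fragment_shader_ordering
    None
}

static class FragmentOrderingSketch
{
    public static FragmentOrderingMode Select(bool hasArbInterlock, bool hasIntelOrdering)
    {
        if (hasArbInterlock)
        {
            return FragmentOrderingMode.Interlock;
        }

        if (hasIntelOrdering)
        {
            return FragmentOrderingMode.IntelOrdering;
        }

        // Neither extension is available: emit no ordering at all.
        return FragmentOrderingMode.None;
    }
}
```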
* Remove unused field
* Fix race when EventWait is called and a wait is done on the CPU
* This is useless now
* Fix EventSignal
* Ensure the signal belongs to the current fence, to avoid stale signals
* Another workaround for NVIDIA driver 496.13 shader bug
This might work better than the other one. Give this a test to see if it fixes/doesn't fix issues with the other one.
The problem seems to occur when any variable assignment happens with a negation. `temp_1 = -temp_0;` seems to trigger weird behaviour, but `temp_1 = 0.0 - temp_0;` does not. This might also extend to integer types?
* Update cache version
* Add disclaimer comment
* Wording
Some games (GameMaker Studio) build texture atlases out of sprites during initialization, using the 2D copy method. These copies are done from textures loaded into memory, not rendered, so they are not scaled to begin with.
I had set srcTexture in these copies to force scaling, but really it only needs to scale if the texture already exists and was scaled by rendering or something else. I just set that to false, so it doesn't change if the texture is scaled or not. This will also avoid the destination being scaled if the source wasn't. The copy can handle mismatching scales just fine.
This prevents scaling artifacts in GMS games, and maybe others (not Super Mario Maker 2, that has another issue).
Fixes a regression from #2663 where buffer flush would not happen after a resize. Specifically caused the world map in Yoshi's Crafted World to flash.
I have other planned changes to this class so this might change soon, but this regression could affect a lot so it couldn't wait.
This fixes a potential regression with the new range list changes, where the cost of creating new ones would be rather large due to creating a 1024-element array. It also reduces the cost of range list inheritance by using the first existing range list as a base, rather than creating a new one and then adding both lists to it.
The growth size for the RangeList is now identical to its initial size. Every 32 elements was probably a little too common - now it is 1024 for most things and 8 for the buffer modified range list.
The Unmapped and SyncMethod methods have been changed to ensure that they behave properly if the range list is null. A few calls were cleaned up to use the null-conditional operator.
* Replace CacheResourceWrite with more general "precise" write
The goal of CacheResourceWrite was to notify GPU resources when they were modified directly, by looking up the modified address/size in a structure and calling a method on each resource. The downside of this is that each resource cache has to be queried individually, they all have to implement their own way to do this, and it can only signal to resources using the same PhysicalMemory instance.
This PR adds the ability to signal a write as "precise" on the tracking, which signals a special handler (if present) which can be used to avoid unnecessary flush actions, or maybe even more. For buffers, precise writes specifically do not flush, and instead punch a hole in the modified range list to indicate that the data on GPU has been replaced.
The downside is that precise actions must ignore the page protection bits and always signal - as they need to notify the target resource to ignore the sequence number optimization.
I had to reintroduce the sequence number increment after I2M, as removing it was causing issues in Rabbids Kingdom Battle. However - all resources modified by I2M are notified directly to lower their sequence number, so the problem is likely that another unrelated resource is not being properly updated. Thankfully, doing this does not affect performance in the games I tested.
This should fix regressions from #2624. Test any games that were broken by that (RF4, Rabbids Kingdom Battle).
I've also added a sequence number increment to ThreedClass.IncrementSyncpoint, as it seems to fix buffer corruption in OpenGL homebrew. (this was a regression from removing sequence number increment from constant buffer update - another unrelated resource thing)
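Conceptually, the precise handler looks something like this (hypothetical types, not the actual tracking code):

```csharp
using System;

// A region handle can carry an extra "precise" action that is always invoked
// for precise writes, regardless of protection state or sequence number
// optimizations.
class RegionHandleSketch
{
    public Action RegularAction { get; set; }
    public Action PreciseAction { get; set; }

    public void Signal(bool precise)
    {
        if (precise && PreciseAction != null)
        {
            // Precise writes bypass the protection/sequence-number fast path;
            // for buffers this punches a hole in the modified range list
            // instead of flushing.
            PreciseAction();
            return;
        }

        RegularAction?.Invoke();
    }
}
```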
* Add tests.
* Add XML docs for GpuRegionHandle
* Skip UpdateProtection if only precise actions were called
This allows precise actions to skip reprotection costs.
When a texture is deleted by falling to the bottom of the AutoDeleteCache, its data is flushed to preserve any GPU writes that occurred. This ensures that the data appears in any textures recreated in the future, but didn't account for a texture that already existed with a copy dependency.
This change forces copy dependencies to complete if a texture falls out of the AutoDeleteCache (not when it is removed via overlap, as that would be wasted effort).
Fixes broken lighting caused by pausing in SMO's Metro Kingdom. May fix some other issues.
* Fast path for Inline2Memory buffer write
This PR adds a method to PhysicalMemory that attempts to write all cached resources directly, so that memory tracking can be avoided. The goal of this is both to avoid flushing buffer data, and to avoid raising the sequence number when data is written, which causes buffer and texture handles to be re-checked.
This currently only targets buffers, with a side check on textures that falls back to a tracked write if any exist within the target range. It's not expected to write textures from here - this is just a mechanism to protect us if someone does decide to do that. It's possible to add a fast path for this in future (and for ShaderCache, once that starts using tracking)
The forced read before inline2memory begins has been skipped, as the data is fully written when the transfer is completed anyways. This allows us to flush on read in emergency situations, but still write the new data over the flushed data.
Improves performance on Xenoblade 2 and DE, which was flushing buffer data on the GPU thread when trying to write compute data. May improve performance in other games that write SSBOs from compute, and update data in the same/nearby pages often.
Super Smash Bros Ultimate should probably be tested to make sure the vertex explosions haven't returned, as I think that's what this AdvanceSequence was for.
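A rough sketch of the fast path's shape, with delegates standing in for the real cache lookups on PhysicalMemory:

```csharp
using System;

// Sketch only: try to write every cached buffer directly, bypassing tracking;
// fall back to a tracked write if any texture overlaps the range.
class UntrackedWriteSketch
{
    public Func<ulong, ulong, bool> AnyTextureOverlaps;     // address, size
    public Action<ulong, byte[]> WriteCachedBuffersDirect;  // address, data
    public Action<ulong, byte[]> WriteTracked;              // address, data

    public void Write(ulong address, byte[] data)
    {
        // If a texture overlaps the range, take the tracked path so its
        // handles are dirtied; this path is not meant to write texture data.
        if (AnyTextureOverlaps(address, (ulong)data.Length))
        {
            WriteTracked(address, data);
            return;
        }

        // Otherwise write the cached buffers directly, skipping tracking so
        // no flush is triggered and the sequence number is left untouched.
        WriteCachedBuffersDirect(address, data);
    }
}
```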
* ForceDirty before write, to make sure data does not flush over the new write
* Array based RangeList that caches Address/EndAddress
In isolation, this was more than 2x faster than the RangeList that checks using the interface. In practice I'm seeing much better results than I expected. The array is used because checking it is slightly faster than using a list, which loses time to struct copies, but I still want that data locality.
A method has been added to the list to update the cached end address, as some users of the RangeList currently modify it dynamically.
Greatly improves performance in Super Mario Odyssey, Xenoblade and any other GPU limited games.
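A simplified sketch of the idea (not the actual RangeList code):

```csharp
// Keep Address/EndAddress cached in a flat array next to each item, so overlap
// checks never go through the item's interface. A cached EndAddress can be
// refreshed when a user of the list modifies an item dynamically.
struct RangeItemSketch<T>
{
    public ulong Address;
    public ulong EndAddress;
    public T Value;

    public readonly bool OverlapsWith(ulong address, ulong endAddress)
    {
        return Address < endAddress && address < EndAddress;
    }
}

static class RangeListSketch
{
    // Linear scan over the packed array; the cached fields keep it cache-friendly.
    public static int FindOverlaps<T>(
        RangeItemSketch<T>[] items, int count, ulong address, ulong size, RangeItemSketch<T>[] output)
    {
        ulong endAddress = address + size;
        int found = 0;

        for (int i = 0; i < count; i++)
        {
            if (items[i].OverlapsWith(address, endAddress))
            {
                output[found++] = items[i];
            }
        }

        return found;
    }
}
```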
* Address Feedback
* Lift textures in the AutoDeleteCache for all modifications.
Before, this would only apply to render targets and texture blit. Now it applies to image stores, the fast dma copy path and any other type of modification.
Textures used by image stores always have at least one reference in the texture pool, so the AutoDeleteCache's role of keeping textures _alive_ is not useful for them, but an important function of the cache for a while now has been flushing textures in order of modification when they are dereferenced, so that their data is not lost.
Before, textures populated using image stores were being dereferenced and reloaded as garbage. Now, when these textures are dereferenced, their data will be put back into memory, and everything stays intact.
Fixes lighting breaking when switching levels in THPS1+2, and potentially some more UE4 games. I've tested a bunch more games for regressions and performance impact, but they all seem fine.
* Lift copy srcTexture so that it doesn't remain referenceless
* Perform lift before reference count change on unbind.
It's important to lift on unbind as that is the moment the texture was truly last modified, but definitely not after releasing every single reference.
* Fix TXQ for 3D textures.
Assumes the texture is 3D if the component mask contains Z.
This fixes a bug in UE4 games where parts of the map had garbage pointers to lighting voxels, as the lookup 3D texture was not being initialized. Most notable game is THPS1+2.
May need another PR to keep image store data alive and properly flush it in order using the AutoDeleteCache.
* Get sampler type for TextureSize from bound textures.
* Initial Implementation
* Further improvements (no support for float/64-bit types)
* Merge atomic and reduce instructions, add missing format switch
* Fix rebase issues.
* Not used.
* Whoops. Fixed.
* Partial implementation of inc/dec, cleanup and TODOs
* Remove testing path
* Address Feedback
* Avoid deleting textures when their data does not overlap.
It's possible that, while two textures' start and end addresses indicate an overlap, the actual data contained within them is sparse due to a layer stride. One such possibility is array slices of a cubemap at different mip levels - they overlap as a whole, but the actual texture data fills the gaps between each other's layers rather than actually overlapping.
This fixes issues with UE4 games having incorrect lighting (solid white screen or really dark shadows). There are still remaining issues with games that use the 3D texture prebaked lighting, such as THPS1+2.
This PR also fixes a bug with TexturePool's resized texture handling where the base level in the descriptor was not considered.
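Illustratively, the kind of check involved (hypothetical helper, simplified to a single layer stride):

```csharp
// Two textures whose address ranges overlap can still have disjoint data when
// one texture's layers are strided such that the other fits entirely in the
// gaps between them.
static class SparseOverlapSketch
{
    public static bool LayersActuallyOverlap(
        ulong baseAddress, ulong layerSize, ulong layerStride, int layers,
        ulong otherAddress, ulong otherSize)
    {
        ulong otherEnd = otherAddress + otherSize;

        for (int layer = 0; layer < layers; layer++)
        {
            ulong layerStart = baseAddress + (ulong)layer * layerStride;
            ulong layerEnd = layerStart + layerSize;

            // Standard interval intersection against each populated layer.
            if (layerStart < otherEnd && otherAddress < layerEnd)
            {
                return true;
            }
        }

        // Every populated layer falls into the other texture's gaps
        // (or outside it entirely), so nothing actually overlaps.
        return false;
    }
}
```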
* AllRegions granularity for 3d textures is now by level rather than by slice.
* Address feedback
* Only reupload the texture scale array if it changes.
Before, this would be called all the time if any shader needed a scale value. The cost of doing this has increased with the threaded GAL, as the scale array is copied to a span pool, and it was sometimes called on pretty much every draw.
This improves GPU performance in games, scaled or not. Most affected game seems to be Xenoblade Chronicles: Definitive Edition.
* Just use = instead of |=
* Initial Implementation
About as fast as NVIDIA's GL multithreading; can be improved with faster command queuing.
* Struct based command list
Speeds up a bit. Still a lot of time lost to resource copy.
* Do shader init while the render thread is active.
* Introduce circular span pool V1
Ideally should be able to use structs instead of references for storing these spans on commands. Will try that next.
* Refactor SpanRef some more
Use a struct to represent SpanRef, rather than a reference.
* Flush buffers on background thread
* Use a span for UpdateRenderScale.
Much faster than copying the array.
* Calculate command size using reflection
* WIP parallel shaders
* Some minor optimisation
* Only 2 max refs per command now.
The command with 3 refs is gone. 😌
* Don't cast on the GPU side
* Remove redundant casts, force sync on window present
* Fix Shader Cache
* Fix host shader save.
* Fixup to work with new renderer stuff
* Make command Run static, use array of delegates as lookup
Profile says this takes less time than the previous way.
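A very rough sketch of the dispatch model with hypothetical names; the real queue reads commands back out of a typed buffer rather than boxing them:

```csharp
using System;

// Each command is a plain struct, and a static array of delegates, indexed by
// a command id, invokes that command's static Run method, avoiding per-command
// virtual dispatch and allocation.
struct SetViewportCommand
{
    public int X, Y, Width, Height;

    public static void Run(in SetViewportCommand cmd)
    {
        // Backend (renderer) call would happen here, on the render thread.
    }
}

static class CommandDispatchSketch
{
    // Boxed here only to keep the sketch short.
    private static readonly Action<object>[] _runners =
    {
        cmd => SetViewportCommand.Run((SetViewportCommand)cmd), // index 0
    };

    public static void Run(int commandIndex, object command)
    {
        _runners[commandIndex](command);
    }
}
```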
* Bring up to date
* Add settings toggle. Fix Multithreading Off mode.
* Fix warning.
* Release tracking lock for flushes
* Fix Conditional Render fast path with threaded gal
* Make handle iteration safe when releasing the lock
This is mostly temporary.
* Attempt to set backend threading on driver
Only really works on NVIDIA before launching a game.
* Fix race condition with BufferModifiedRangeList, exceptions in tracking actions
* Update buffer set commands
* Some cleanup
* Only use stutter workaround when using the OpenGL renderer non-threaded
* Add host-conditional reservation of counter events
There has always been the possibility that conditional rendering could use a query object just as it is disposed by the counter queue. This change makes it so that when the host decides to use host conditional rendering, the query object is reserved so that it cannot be deleted. Counter events can optionally start reserved, as the threaded implementation can reserve them before the backend creates them, and there would otherwise be a short amount of time where the counter queue could dispose the event before a call to reserve it could be made.
* Address Feedback
* Make counter flush tracked again.
Hopefully does not cause any issues this time.
* Wait for FlushTo on the main queue thread.
Currently assumes only one thread will want to FlushTo (in this case, the GPU thread)
* Add SDL2 headless integration
* Add HLE macro commands.
Co-authored-by: Mary <mary@mary.zone>
* Add support for HLE macros and accelerate MultiDrawElementsIndirectCount
* Add missing barrier
* Fix index buffer count
* Add support check for each macro hle before use
* Add missing xml doc
Co-authored-by: gdkchan <gab.dark.100@gmail.com>
This greatly reduces memory usage in games that aggressively reuse memory without removing dead textures from the pool, such as the Xenoblade games, UE3 games, and to a lesser extent, UE4/Unity games.
This change stops memory usage from ballooning in Xenoblade and some other games. It will also reduce texture view/dependency complexity in some games - for example, in MK8D it will reduce the number of surface copies between lighting cubemaps generated for actors.
There shouldn't be any performance impact from doing this, though the deletion and creation of textures could be improved by improving the OpenGL texture storage cache, which is very simple and limited right now. This will be improved in future.
Another potential error has been fixed with the texture cache, which could prevent data loss when data is interchangeably written to textures from both the GPU and CPU. It was possible that the dirty flag for a texture would be consumed without the data being synchronized on next use, due to the old overlap check. This check no longer consumes the dirty flag.
Please test a bunch of games to make sure they still work, and there are no performance regressions.
Got this the wrong way round - it was causing games to try to synchronize mipmap levels of around 52 on a 3D texture with 6 levels. Also corrected the variable name in the method that _was_ working.
* Use "Undesired" scale mode for certain textures rather than blacklisting
* Nit
Co-authored-by: gdkchan <gab.dark.100@gmail.com>
* Replace BGRA and scale uniforms with a uniform block
* Setting the data again on program change is no longer needed
* Optimize and resolve some warnings
* Avoid redundant support buffer updates
* Some optimizations to BindBuffers (now inlined)
* Unify render scale arrays
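A hedged sketch of the block's contents; field names and sizes here are assumptions, not the actual SupportBuffer layout:

```csharp
// The helper values previously set as individual uniforms are grouped into one
// uniform block / constant buffer, re-uploaded lazily and only when dirty.
class SupportBufferDataSketch
{
    public const int RenderScaleEntries = 64; // assumed size, shared by all stages

    // Per-render-target "is BGRA" flags, one int each, padded as vec4s in std140.
    public int[] FragmentIsBgra = new int[8 * 4];

    // Unified scale array shared by the vertex and fragment stages
    // (entry 0 for render targets, followed by texture scales).
    public float[] RenderScale = new float[RenderScaleEntries * 4];

    // Set when any field changes; the buffer is re-uploaded before the next draw.
    public bool Dirty;
}
```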
* Use a new approach for shader BRX targets
* Make shader cache actually work
* Improve the shader pattern matching a bit
* Extend LDC search to predecessor blocks, catches more cases
* Nit
* Only save the amount of constant buffer data actually used. Avoids crashes on partially mapped buffers
* Ignore Rd on predicate instructions, as they do not have a Rd register (catches more cases)