* Vulkan: Add Render Pass / Framebuffer Cache
Cache is owned by each texture view.
- Window's way of getting a framebuffer cache for swapchain images is really messy - it creates a TextureView out of just a Vulkan image view, with invalid info and no storage.
* Clear up limited use of alternate TextureView constructor
* Formatting and messages
* More formatting and messages
I apologize for `_colorsCanonical[index]?.Storage?.InsertReadToWriteBarrier`, the compiler made me do it
* Self review, change GetFramebuffer to GetPassAndFramebuffer
* Avoid allocations on Remove for HashTableSlim
* Member can be readonly
* Generate texture create info for swapchain images
* Improve hashcode
* Remove format, samples, size and isDepthStencil when possible
Tested in a number of games, seems fine.
* Removed load op barriers
These can be introduced later.
* Reintroduce UpdateModifications
Technically meant to be replaced by load op stuff.
* Allow skipping draws with broken pipeline variants on Vulkan
* Move IsLinked check to CreatePipeline
* Restore throw on error behaviour for background compile
* Can't remove SetAlphaTest pragmas yet
* Double new line
* Replace vendor id lookup with driver name
* Create separate field for driver name, handle OpenGL
* Document changes in VulkanPhysicalDevice.cs
* Always display driver over vendor
* Replace Vulkan 1.2 requirement with VK_KHR_driver_properties
* Remove empty line
* Remove redundant unsafe block
* Apply suggestions from code review
---------
Co-authored-by: Ac_K <Acoustik666@gmail.com>
* Vulkan: Use staging buffer for temporary constants
Helper shaders and post processing effects typically need some parameters to tell them what to do, which we pass via constant buffers that are created and destroyed each time.
This can vary in cost between different Vulkan drivers. It shows up in profiles on Mesa and MoltenVK, so it's worth avoiding. Some games only do it once (BlitColor for present), others multiple times. It's also done for post processing filters and FSR upscaling, which creates two buffers.
For mirrors, I added the ability to reserve a range on the staging buffer for use as any type of binding. This PR allows these constant buffers to instead be temporarily allocated on the staging buffer, skipping allocation and buffer management costs entirely.
Two temporary allocations do remain:
- DrawTexture, because it doesn't have access to the command buffer scope
- Index buffer indirect conversion, because one of them is a storage buffer and thus is a little more complicated.
There's a small cost in that the uniform buffer takes up more space due to alignment requirements. At worst that's 256 bytes (on a GTX 1070) but more modern GPUs should have a better time.
Worth testing across different games and post effects to make sure they still work.
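To illustrate the idea, here's a minimal sketch of reserving a uniform-buffer-sized range on the staging buffer. The names (`StagingBuffer`, `TryReserve`, `StagingBufferReserved`) and the bump-allocation logic are made up for the example; the real staging buffer is a ring whose ranges are reclaimed once their fences signal.
```csharp
using System;

// Hypothetical sketch only; not the actual Ryujinx types.
public readonly struct StagingBufferReserved
{
    public readonly int Offset; // Offset of the reserved range within the staging buffer
    public readonly int Size;   // Size of the reserved range in bytes

    public StagingBufferReserved(int offset, int size)
    {
        Offset = offset;
        Size = size;
    }
}

public class StagingBuffer
{
    private readonly byte[] _data = new byte[16 * 1024 * 1024];
    private int _freeOffset;

    // Reserve a range that can be bound as a uniform buffer. The offset has to respect
    // minUniformBufferOffsetAlignment (a power of two, up to 256 bytes on some GPUs),
    // which is where the small space overhead mentioned above comes from.
    public StagingBufferReserved? TryReserve(int size, int uniformAlignment)
    {
        int alignedOffset = (_freeOffset + uniformAlignment - 1) & ~(uniformAlignment - 1);

        if (alignedOffset + size > _data.Length)
        {
            return null; // Caller falls back to creating a dedicated temporary buffer.
        }

        _freeOffset = alignedOffset + size;

        return new StagingBufferReserved(alignedOffset, size);
    }
}
```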
* Use temporary buffer for ConvertIndexBufferIndirect
* Simplify alignment passing for now
* Fix shader params length for CopyIncompatibleFormats
* Set data for helper shaders without overlap checks
The data is in the staging buffer, so its usage range is guarded using that.
Turns out that `ElementAt` for `Queue<T>` runs the default LINQ implementation, since `Queue<T>` doesn't implement `IList<T>`, so it enumerates the queue up to the given index. This code was creating `count` enumerators and iterating far more queue items than it needed to at higher counts. The solution is to use a single enumerator and break out of the loop once we've reached the count that we need.
3.5% of backend time was being spent _just_ enumerating at the usual spot in SMO.
This prevents a small allocation each time this method is called. This is a top 3 SOH allocation during gameplay in most games, and eliminating it is pretty free.
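A before/after sketch of the enumeration fix; the queue contents and the `Process` helper are placeholders rather than the actual backend code.
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class QueueScanExample
{
    // Old pattern: Enumerable.ElementAt falls back to plain enumeration for Queue<T>
    // (it doesn't implement IList<T>), so this creates 'count' enumerators and
    // re-walks the queue from the start every iteration - O(count^2) work.
    static void ScanSlow(Queue<int> queue, int count)
    {
        for (int i = 0; i < count; i++)
        {
            Process(queue.ElementAt(i));
        }
    }

    // Fixed pattern: a single enumerator, one pass, break once 'count' items were seen.
    static void ScanFast(Queue<int> queue, int count)
    {
        int visited = 0;

        foreach (int item in queue)
        {
            if (visited++ >= count)
            {
                break;
            }

            Process(item);
        }
    }

    static void Process(int item)
    {
        Console.WriteLine(item);
    }
}
```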
* Pass MultiRange to BufferManager
* Implement support for multi-range buffers using Vulkan sparse mappings
* Use multi-range for remaining buffers, delete old methods
* Assume that more buffers are contiguous
* Dispose multi-range buffers after they are removed from the list
* Properly init BufferBounds for constant and storage buffers
* Do not try reading zero bytes data from an unmapped address on the shader cache + PR feedback
* Fix misaligned sparse buffer offsets
* Null check can be simplified
* PR feedback
* Change TargetFramework to net8.0
* Disable info messages
* Fix warnings
* Disable additional analyzer messages
* Fix typo
* Add whitespace
* Fix ref vs in warnings
* Use explicit [In] on array parameters
* No need to guard Remove with Contains
* Use 'ArgumentOutOfRangeException.ThrowIf...' instead of explicitly throwing a new exception instance
* Bump .NET SDK version
* Enable JsonSerializerIsReflectionEnabledByDefault
* Use 8.0.100 GA release
* Bump System package versions
---------
Co-authored-by: Zoltan Csizmadia <Zoltan.Csizmadia@vericast.com>
* GPU: Add fallback when textureGatherOffsets is not supported.
This PR adds a fallback for GPUs or APIs that don't support an equivalent to the method `textureGatherOffsets`, where each of the 4 gathered texels has an individual offset. This is done by reusing the existing code to handle non-const offsets for texture instructions, though it has also been corrected as there were a few implementation issues.
MoltenVK reports support for this capability, and it didn't error when we initially released the macOS build, but that has since changed. MVK still reports support, but spirv-cross has been fixed so that it now _attempts_ to use this capability, and the Metal compiler errors since it doesn't exist.
Some other fixes:
- textureGatherOffsets emulation has been changed significantly. It now uses 4 texture sample instructions (not gather), calculates a base texel (i=0, j=0) and adds the offsets onto it before converting back into a texture coordinate (see the sketch after this list). The final result lands on a texel center, so it shouldn't be subject to interpolation, though this isn't perfect and could have some error with floating point formats and linear sampling. It is subject to texture wrap mode as it should be, which is why texelFetch was not used.
- Maybe gather should be used here with component `w` (i=0, j=0), though this multiplies the number of texels fetched by 4... The way it was doing this before _was_ wrong, but doing it right would avoid issues with texel center precision.
- textureGatherOffset (singular) now performs textureGather with the offset applied to the coords, rather than the slower fallback where each texel is fetched individually.
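As a rough CPU-side illustration of the coordinate math described above (the real change emits equivalent shader IR; the method name, parameters, and 2D/base-level assumptions are just for this example):
```csharp
using System;

static class GatherOffsetsExample
{
    // For one of the 4 offsets: find the base texel (i=0, j=0) of the gather footprint,
    // add the per-texel offset, then convert back to a normalized coordinate that lands
    // on the texel center, so linear filtering shouldn't blend neighbouring texels.
    // A regular sample (or gather) is still used rather than texelFetch, so wrap modes apply.
    static (float U, float V) OffsetGatherCoord(
        float u, float v,         // Original normalized coordinates
        int width, int height,    // Base level size (implicit LOD isn't supported for gather)
        int offsetX, int offsetY) // Offset for this texel, in texels
    {
        float texelX = MathF.Floor(u * width - 0.5f) + offsetX;
        float texelY = MathF.Floor(v * height - 0.5f) + offsetY;

        return ((texelX + 0.5f) / width, (texelY + 0.5f) / height);
    }
}
```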
* Increment shader cache version, remove unused arg
* Use base texture size for gather coord offset.
Implicit LOD for gather is not supported.
* Use 4 texture gathers for offsets emulation
Avoids issues with interpolation at cost of performance
(not sure how bad this is)
* Address Feedback
* Reduce the amount of descriptor pool allocations on Vulkan
* Formatting
* Slice can be simplified
* Make GetDescriptorPoolSizes static
* Adjust CanFit calculation so that TryAllocateDescriptorSets never fails
* Remove unused field
* Replace image barriers inside render pass with more generic memory barrier
* Remove forceStorage since it was creating images with storage bit for formats that are not StorageImage compatible
* Add missing flags on subpass dependency
* Don't call vkCmdSetScissor with a scissor count of 0
* One semaphore per swapchain image
* Remove compute stage from read to write barriers
* Try to improve Pipeline.Barrier nonsense
* Set PipelineStateFlags based on supported stages
Just some simple changes to the buffer conversion shaders (stride conversion, D32S8 to D24S8).
The first change is using a device local buffer for converted vertex buffers, since they're only read/written on the GPU. These paths don't trigger on NVIDIA, but forcing them to demonstrates just how badly writing to host-owned memory from compute destroys them. AMD GPUs are less heavily affected by this issue, but since the game in question was writing 230MB from compute, I imagine it should have some effect.
The second change is allowing the buffer conversion shaders to scale their work group count. While dividing the work between 32 invocations works OK for M1 Macs, it's not so great for anything with more cores like AMD GPUs, which should be able to do a lot more parallel copies. Now, it scales so that each invocation handles roughly 100 elements.
Some stride change cases could be improved further by either limiting vertex buffer size somehow (reading the index buffer could help, but is always risky) or only updating regions that changed, rather than invalidating the whole thing.
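A rough sketch of the dispatch scaling, assuming a local size of 32 and a target of about 100 elements per invocation; the constants and the helper name are illustrative rather than the exact implementation.
```csharp
using System;

static class ConversionDispatchExample
{
    // Instead of always dispatching a single group of 32 invocations, scale the group
    // count with the element count so each invocation copies roughly 100 elements.
    static int GetGroupCount(int elementCount)
    {
        const int LocalSize = 32;                    // Invocations per work group
        const int TargetElementsPerInvocation = 100; // Rough amount of work per invocation
        const int ElementsPerGroup = LocalSize * TargetElementsPerInvocation;

        // Round up so every element is covered; the shader derives each invocation's
        // element range from its global invocation ID and the total element count.
        return Math.Max(1, (elementCount + ElementsPerGroup - 1) / ElementsPerGroup);
    }
}
```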
* Implement vertex and geometry shader conversion to compute
* Call InitializeReservedCounts for compute too
* PR feedback
* Set clip distance mask for geometry and tessellation shaders too
* Transform feedback emulation only for vertex
* Move shuffle handling out of the backend to a transform pass
* Handle subgroup sizes higher than 32
* Stop using the subgroup size control extension
* Make GenerateShuffleFunction static
* Shader cache version bump
* Vulkan: Periodically free regions of the staging buffer
There was an edge case where a game could submit tens of thousands of small copies over the course of over half a minute to unique fences. This could result in a large stutter when the staging buffer became full and it tried to check and free thousands of completed fences.
This became visible with some games and mirrors on Windows, as they don't submit any buffer data via the staging buffer, but may submit copies of the support buffer.
This change makes the Vulkan backend check for staging buffer completion on each command buffer submit, so it can't get backed up with 1000s of copies to check.
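Conceptually it boils down to draining completed staging reservations a little at a time on each submit, instead of walking a huge backlog only once the buffer runs out of space. A simplified sketch with placeholder types (not the actual Ryujinx classes):
```csharp
using System.Collections.Generic;

interface IFence
{
    bool IsSignaled(); // True once the GPU work using the range has completed
}

class StagingRange
{
    public IFence Fence { get; }
    public int Size { get; }

    public StagingRange(IFence fence, int size)
    {
        Fence = fence;
        Size = size;
    }
}

class StagingFreeList
{
    private readonly Queue<StagingRange> _pending = new();
    private int _freeSpace;

    public int FreeSpace => _freeSpace;

    public void Add(StagingRange range) => _pending.Enqueue(range);

    // Called on every command buffer submit, so the pending list stays short instead of
    // thousands of fences piling up and being checked all at once when space runs out.
    public void FreeCompleted()
    {
        while (_pending.TryPeek(out StagingRange range) && range.Fence.IsSignaled())
        {
            _freeSpace += range.Size;
            _pending.Dequeue();
        }
    }
}
```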
* Add comment
* Initial implementation of buffer mirrors
Generally slower right now; the goal is to reduce render passes in games that do inline updates
Fix support buffer mirrors
Reintroduce vertex buffer mirror
Add storage buffer support
Optimisation part 1
More optimisation
Avoid useless data copies.
Remove unused cbIndex stuff
Properly set write flag for storage buffers.
Fix minor issues
Not sure why this was here.
Fix BufferRangeList
Fix some big issues
Align storage buffers rather than getting full buffer as a range
Improves mirrorability of read-only storage buffers
Increase staging buffer size, as it now contains mirrors
Fix some issues with buffers not updating
Fix buffer SetDataUnchecked offset for one of the paths when using mirrors
Fix buffer mirrors interaction with buffer textures
Fix mirror rebinding
Move GetBuffer calls on indirect draws before BeginRenderPass to avoid draws without render pass
Fix mirrors rebase
Fix rebase 2023
* Fix crash when using stale vertex buffer
Similar to `Get` with a size that's too large, just treat it as a clamp.
* Explicitly set support buffer as mirrorable
* Address feedback
* Remove unused fragment of MVK workaround
* Replace logging for staging buffer OOM
* Address format issues
* Address more format issues
* Mini cleanup
* Address more things
* Rename BufferRangeList
* Support bounding range for ClearMirrors and UploadPendingData
* Add maximum size for vertex buffer mirrors
* Enable index buffer mirrors
Enabled on all platforms for the IbStreamer.
* Feedback
* Remove mystery BufferCache change
Probably macos related?
* Fix mirrors not creating when staging buffer is empty.
* Change log level to debug
* Add workflow to perform automated checks for PRs
* Downgrade Microsoft.CodeAnalysis to 4.4.0
This is a workaround to fix issues with dotnet-format.
See:
- https://github.com/dotnet/format/issues/1805
- https://github.com/dotnet/format/issues/1800
* Adjust editorconfig to be more compatible with Ryujinx code-style
* Adjust .editorconfig line endings to match .gitattributes
* Disable 'prefer switch expression' rule
* Remove naming styles
These are the default rules, so we don't need to override them.
* Silence IDE0060 in .editorconfig
* Slightly adjust .editorconfig
* Add lost workflow changes
* Move .editorconfig comment to the top
* .editorconfig: private static readonly fields should be _lowerCamelCase
* .editorconfig: Remove alignment for declarations as well
* editorconfig: Add rule for local constants
* Disable CA1822 for HLE services
* Disable CA1822 for ViewModels
Bindings won't work with static members, but this issue is silently ignored.
* Run dotnet format for the whole solution
* Check result code of SDL_GetDisplayBounds
* Fix dotnet format style issues
* Add missing trailing commas
* Update Microsoft.CodeAnalysis.CSharp to 4.6.0
Skipping 4.5.0 since it breaks dotnet format
* Restore old default naming rules for dotnet format
* Add naming rule exception for CPU tests
* checks: Include all files before excluding paths
* Fix dotnet format issues
* Check dotnet format version
* checks: Run dotnet format with severity info again
* checks: Disable naming style rules until they no longer crash the process
* Remove unread private member
* checks: Attempt to run analyzers 3 times before giving up
* checks: Enable naming style rules again with the new retry logic
* Fix some validation errors and silence the annoying pipeline barrier error
* Remove bogus decref/incref on index buffer state
* Make unsafe blit opt-in rather than opt-out
* Remove Vulkan debugger messages blacklist
* Adjust GetImageUsage to not set the storage bit for multisample textures if not supported
* Move support buffer update out of the backends
* Fix render scale init and remove redundant state from SupportBufferUpdater
* Stop passing texture scale to the backends
* XML docs for SupportBufferUpdater
* dotnet format style --severity info
Some changes were manually reverted.
* dotnet format analyzers --severity info
Some changes have been minimally adapted.
* Restore a few unused methods and variables
* Silence dotnet format IDE0060 warnings
* Silence dotnet format IDE0059 warnings
* Address dotnet format CA1816 warnings
* Fix new dotnet-format issues after rebase
* Address most dotnet format whitespace warnings
* Apply dotnet format whitespace formatting
A few of them have been manually reverted and the corresponding warning was silenced
* Format if-blocks correctly
* Another rebase, another dotnet format run
* Run dotnet format whitespace after rebase
* Run dotnet format style after rebase
* Run dotnet format analyzers after rebase
* Run dotnet format style after rebase
* Run dotnet format after rebase and remove unused usings
- analyzers
- style
- whitespace
* Disable 'prefer switch expression' rule
* Add comments to disabled warnings
* Simplify properties and array initialization, Use const when possible, Remove trailing commas
* Run dotnet format after rebase
* Address IDE0251 warnings
* Address a few disabled IDE0060 warnings
* Silence IDE0060 in .editorconfig
* Revert "Simplify properties and array initialization, Use const when possible, Remove trailing commas"
This reverts commit 9462e4136c0a2100dc28b20cf9542e06790aa67e.
* dotnet format whitespace after rebase
* First dotnet format pass
* Fix naming rule violations
* Remove redundant code
* Rename generics
* Address review feedback
* Remove SetOrigin
* Implement transform feedback emulation for hardware without native support
* Stop doing some useless buffer updates and account for non-zero base instance
* Reduce redundant updates even more
* Update descriptor init logic to account for ResourceLayout
* Fix transform feedback and storage buffers not being updated in some cases
* Shader cache version bump
* PR feedback
* SetInstancedDrawVertexCount must be always called after UpdateState
* Minor typo
* Implement storage buffer operations using new Load/Store instruction
* Extend GenerateMultiTargetStorageOp to also match access with constant offset, plus logging and comments
* Remove now unused code
* Catch more complex cases of global memory usage
* Shader cache version bump
* Extend global access elimination to work with more shared memory cases
* Change alignment requirement from 16 bytes to 8 bytes, handle cases where we need more than 16 storage buffers
* Tweak preferencing to catch more cases
* Enable CB0 elimination even when host storage buffer alignment is > 16 (for Intel)
* Fix storage buffer bindings
* Simplify some code
* Shader cache version bump
* Fix typo
* Extend global memory elimination to handle shared memory with multiple possible offsets and local memory