* common: Move settings to common from core.
- Removes a dependency on core and input_common from common.
* code: Wrap settings values
* Port from yuzu to allow per game settings
* citra_qt: Initial per-game settings dialog
* citra_qt: Use new API for read/save of config values
* citra_qt: Per game audio settings
* citra_qt: Per game graphics settings
* citra_qt: Per game system settings
* citra_qt: Per game general settings
* citra_qt: Document and run clang format
* citra_qt: Make icon smaller and centered
* citra_qt: Remove version number
* Not sure how to extract that, can always add it back later
* citra_qt: Wrap UISettings
* citra_qt: Fix unthrottled fps setting
* citra_qt: Remove margin in emulation tab
* citra_qt: Implement some suggestions
* Bring back speed switch hotkey
* Allow configuration when game is running
* Rename/adjust UI stuff
* citra_qt: Fix build with separate windows
* citra_qt: Address feedback
* citra_qt: Log per-game settings before launching games
* citra_qt: Add shader cache options
* Also fix android build
* citra_qt: Add DLC menu option
* citra_qt: Run clang-format
* citra_qt: Adjust for time offset
* citra_qt: Implement suggestions
* Run clang-format
Co-authored-by: bunnei <bunneidev@gmail.com>
The most important one is adding a mutex to protect the format_context. Apparently it isn't thread safe (as one would expect), but I didn't think about that.
Should fix some of the strange issues happening with MP4 muxers, etc.
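As a rough sketch of the fix (the class and member names here are illustrative, not the actual ones in the video dumper), every write into the shared AVFormatContext is now serialized behind a mutex:

```cpp
// Sketch only: serialize access to the shared AVFormatContext.
// FFmpegMuxer, format_context and format_context_mutex are illustrative names.
#include <mutex>
extern "C" {
#include <libavformat/avformat.h>
}

class FFmpegMuxer {
public:
    void WritePacket(AVPacket* packet) {
        // av_interleaved_write_frame is not safe to call concurrently on the
        // same AVFormatContext, so every writer takes the mutex first.
        std::lock_guard lock{format_context_mutex};
        av_interleaved_write_frame(format_context, packet);
    }

private:
    AVFormatContext* format_context = nullptr;
    std::mutex format_context_mutex;
};
```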
These two functions allow the frontend to get a list of encoders/formats and their specific options.
Retrieving the options is harder than it sounds due to FFmpeg's strange AVClass and AVOption system. For example, for integer and flags options, 'named constants' can be set. They are of type `AV_OPT_TYPE_CONST` and are grouped according to the `unit` field. An option recognizes all constants of the same `unit`.
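A minimal sketch of how such an enumeration can be done with the AVOption API; `OptionInfo` and `GetEncoderOptions` are hypothetical names, not the actual frontend code:

```cpp
// Sketch: enumerate an encoder's private options and attach the named
// constants (AV_OPT_TYPE_CONST) to every option sharing their `unit`.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}
#include <map>
#include <string>
#include <vector>

struct OptionInfo {
    std::string name;
    std::vector<std::string> named_constants; // values this option recognizes
};

std::vector<OptionInfo> GetEncoderOptions(const AVCodec* codec) {
    const AVClass* av_class = codec->priv_class;
    if (!av_class) {
        return {};
    }

    // First pass: collect the named constants, grouped by their `unit`.
    std::map<std::string, std::vector<std::string>> constants;
    const AVOption* current = nullptr;
    while ((current = av_opt_next(&av_class, current))) {
        if (current->type == AV_OPT_TYPE_CONST && current->unit) {
            constants[current->unit].emplace_back(current->name);
        }
    }

    // Second pass: collect the real options; each one recognizes all
    // constants that share its `unit`.
    std::vector<OptionInfo> options;
    current = nullptr;
    while ((current = av_opt_next(&av_class, current))) {
        if (current->type == AV_OPT_TYPE_CONST) {
            continue;
        }
        OptionInfo info{current->name, {}};
        if (current->unit) {
            info.named_constants = constants[current->unit];
        }
        options.push_back(std::move(info));
    }
    return options;
}
```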
Previously, we just used the native sample rate for encoding. However, some encoders, such as libmp3lame, don't support it. Therefore, we now use a supported sample rate (preferring the native one when possible).
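A hedged sketch of the selection logic (the exact tie-breaking in the real code may differ); `supported_samplerates` is FFmpeg's 0-terminated list, which is null when the encoder accepts any rate:

```cpp
// Sketch: pick a sample rate supported by the encoder, preferring the native one.
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdlib>

int SelectSampleRate(const AVCodec* codec, int native_sample_rate) {
    if (!codec->supported_samplerates) {
        return native_sample_rate; // encoder does not restrict the sample rate
    }
    int best = codec->supported_samplerates[0];
    for (const int* rate = codec->supported_samplerates; *rate; ++rate) {
        if (*rate == native_sample_rate) {
            return native_sample_rate; // native rate is supported, keep it
        }
        // Otherwise remember the supported rate closest to the native one.
        if (std::abs(*rate - native_sample_rate) < std::abs(best - native_sample_rate)) {
            best = *rate;
        }
    }
    return best;
}
```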
FFmpeg requires audio data to be sent in a sequence of frames, each containing the same specific number of samples. Previously, we buffered input samples in FFmpegBackend. However, as the source and destination sample rates can now differ, we should buffer resampled data instead. swresample has an internal input buffer, so we now just forward all data to it and 'gradually' receive resampled data, at most one frame_size at a time. When there is not enough resampled data to form a frame, we record the current offset and request less data on the next call.
This commit also fixes a flaw: when an encoder supports variable frame sizes, its frame size is reported as 0, which breaks our buffering system. We now treat variable frame size encoders as having a frame size of 160 (the size of an HLE audio frame).
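The following is a simplified sketch of that buffering scheme; the member names (`swr_context`, `resampled_data`, `bytes_per_sample`, `offset`, `frame_size`), the `EncodeFrame` helper, and the signature are illustrative, not the actual implementation:

```cpp
// Sketch of the swresample-based buffering described above.
extern "C" {
#include <libswresample/swresample.h>
}
#include <array>
#include <cstdint>

void FFmpegAudioStream::ProcessFrame(const std::uint8_t** input, int input_samples) {
    // Variable-frame-size encoders report frame_size == 0; treat them as one
    // HLE audio frame (160 samples) so the buffering logic still works.
    const int samples_per_frame = frame_size ? frame_size : 160;

    // Only request enough samples to complete the current frame. Input that
    // cannot be converted yet stays in swresample's internal buffer and will
    // be returned by a later swr_convert call.
    std::array<std::uint8_t*, 2> dst{
        resampled_data[0] + bytes_per_sample * offset,
        resampled_data[1] + bytes_per_sample * offset,
    };
    const int converted =
        swr_convert(swr_context, dst.data(), samples_per_frame - offset, input, input_samples);
    if (converted < 0) {
        return; // error handling omitted in this sketch
    }
    offset += converted;

    if (offset == samples_per_frame) {
        // A full frame of resampled audio is ready: hand it to the encoder and
        // start filling the next frame from the beginning.
        EncodeFrame(resampled_data, samples_per_frame);
        offset = 0;
    }
}
```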
We previously assumed that the first preferred sample format is planar, but that may not be true for all codecs. Instead we should find a supported sample format that is planar.
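As an illustration (the helper name is hypothetical), a planar format can be picked from the encoder's `sample_fmts` list:

```cpp
// Sketch: choose the first planar entry from the encoder's preference list.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/samplefmt.h>
}

AVSampleFormat SelectPlanarSampleFormat(const AVCodec* codec) {
    for (const AVSampleFormat* fmt = codec->sample_fmts;
         fmt && *fmt != AV_SAMPLE_FMT_NONE; ++fmt) {
        if (av_sample_fmt_is_planar(*fmt)) {
            return *fmt;
        }
    }
    // No planar format advertised; fall back to the encoder's first choice,
    // or to a planar default when no list is provided at all.
    return codec->sample_fmts ? codec->sample_fmts[0] : AV_SAMPLE_FMT_S16P;
}
```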
While YUV420P is widely used, not all encoders accept it (e.g. Intel QSV only accepts NV12). We should use the codec's preferred pixel format instead as we need to rescale the frame anyway.
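A correspondingly small sketch for the pixel format (again with a hypothetical helper name):

```cpp
// Sketch: use the encoder's first (preferred) pixel format when it advertises one.
extern "C" {
#include <libavcodec/avcodec.h>
}

AVPixelFormat SelectPixelFormat(const AVCodec* codec) {
    if (codec->pix_fmts && codec->pix_fmts[0] != AV_PIX_FMT_NONE) {
        return codec->pix_fmts[0]; // e.g. NV12 for Intel QSV encoders
    }
    return AV_PIX_FMT_YUV420P; // encoder does not restrict the pixel format
}
```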
This uses the mailbox model to move pixel downloading to its own thread, eliminating Nvidia's warnings and (possibly) making use of the GPU copy engine.
To achieve this, we created a new mailbox type that is different from the presentation mailbox in that it never discards a rendered frame.
Also, I tweaked the projection matrix so that the frame is simply drawn upside down instead of having the CPU flip it.
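A rough sketch of the 'never discard a frame' mailbox idea, not the actual renderer class:

```cpp
// Sketch: a mailbox where the render thread pushes finished frames and the
// downloading thread pops them in order, blocking when none are ready.
#include <condition_variable>
#include <deque>
#include <mutex>

template <typename Frame>
class FrameDumperMailbox {
public:
    void Push(Frame frame) {
        {
            std::lock_guard lock{mutex};
            frames.push_back(std::move(frame));
        }
        // Unlike the presentation mailbox, nothing is ever dropped: a slow
        // consumer makes the queue grow instead of losing frames.
        frames_available.notify_one();
    }

    Frame Pop() {
        std::unique_lock lock{mutex};
        frames_available.wait(lock, [this] { return !frames.empty(); });
        Frame frame = std::move(frames.front());
        frames.pop_front();
        return frame;
    }

private:
    std::mutex mutex;
    std::condition_variable frames_available;
    std::deque<Frame> frames;
};
```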