src/video_core/renderer_opengl/texture_filters/bicubic/bicubic.cpp:51:86: error: cannot initialize a parameter of type 'GLuint' (aka 'unsigned int') with an rvalue of type 'nullptr_t'
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, NULL, 0);
^~~~
src/video_core/renderer_opengl/texture_filters/xbrz/xbrz_freescale.cpp:95:86: error: cannot initialize a parameter of type 'GLuint' (aka 'unsigned int') with an rvalue of type 'nullptr_t'
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, NULL, 0);
^~~~
/usr/include/sys/_null.h:37:14: note: expanded from macro 'NULL'
#define NULL nullptr
^~~~~~~
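
Since the `texture` parameter of glFramebufferTexture2D is a GLuint object name rather than a pointer, the presumable fix is to pass 0 ("no texture") instead of NULL:

```cpp
// Detach the color attachment by passing the object name 0, not NULL:
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
```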
The main problem is the loss of compatibility with some controllers, but there are also
unwanted changes to the behaviour of PS4 controllers (hardcoded lightbar color).
The file's size is stored in FileSessionSlot and retrieved when the game calls GetSize. However, it is not updated when the file is written to, even though a write can change the file size, so GetSize can return stale results.
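
A minimal, self-contained sketch of the idea with hypothetical names (the real FileSessionSlot and write path look different):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical, simplified representation of the per-session state.
struct FileSessionSlot {
    std::uint64_t offset = 0;
    std::uint64_t size = 0; // value returned by GetSize
};

// After a successful write, extend the cached size if the write went past
// the previous end of the file, so GetSize stays consistent.
void OnWriteCompleted(FileSessionSlot& slot, std::uint64_t write_offset,
                      std::uint64_t bytes_written) {
    slot.size = std::max(slot.size, write_offset + bytes_written);
}
```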
According to HW tests, this vsync event is signaled for activated cameras at about the same frequency as the frame rate. The last 5 vsync timings are recorded (in microseconds) and can be retrieved with the service function.
Also corrected the default frame_rate to 15, according to HW tests.
This should fix the missing camera images in certain games.
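
Illustrative sketch only (the class and function names are invented, not the actual service code): keep a small ring buffer of the last five signal times.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Records the last 5 vsync timings (in microseconds) each time the vsync
// event of an activated camera is signaled.
class VsyncTimingHistory {
public:
    void OnVsync(std::int64_t now_us) {
        timings[next] = now_us;
        next = (next + 1) % timings.size();
        count = std::min(count + 1, timings.size());
    }

    // Returns the recorded timings, most recent first, as the service
    // function would report them to the game.
    std::vector<std::int64_t> Get() const {
        std::vector<std::int64_t> result;
        for (std::size_t i = 0; i < count; ++i)
            result.push_back(timings[(next + timings.size() - 1 - i) % timings.size()]);
        return result;
    }

private:
    std::array<std::int64_t, 5> timings{};
    std::size_t next = 0;
    std::size_t count = 0;
};
```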
The default is discrete_interval, which has dynamic openness: the bound types are stored at run time. We only use right_open intervals anyway, so fixing the interval type could in theory allow some compile-time optimizations.
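
For illustration, assuming a boost::icl interval_set over 32-bit addresses (the actual container and domain type in the code may differ):

```cpp
#include <cstdint>
#include <functional>
#include <boost/icl/interval_set.hpp>
#include <boost/icl/right_open_interval.hpp>

// discrete_interval stores whether each bound is open or closed at run time;
// right_open_interval fixes the shape to [lower, upper) at compile time,
// which is the only shape we use.
using AddressInterval = boost::icl::right_open_interval<std::uint32_t>;
using AddressSet = boost::icl::interval_set<std::uint32_t, std::less, AddressInterval>;

int main() {
    AddressSet marked;
    marked += AddressInterval{0x1000, 0x2000}; // marks [0x1000, 0x2000)
    return boost::icl::contains(marked, std::uint32_t{0x1FFF}) ? 0 : 1;
}
```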
You can now directly place ExeFS overrides/patches inside the mod folder (instead of the exefs subfolder). This allows us to have drop-in compatibility with Luma3DS mods.
This is the main dialog of video dumping. It allows the user to set the output format, output path, video/audio encoders and video/audio bitrates.
When a format is selected, the lists of video and audio encoders are updated. Only encoders for codecs that the selected format can contain are shown.
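
A sketch of how the filtering could be done with FFmpeg's av_codec_iterate and avformat_query_codec (the dialog code itself is not shown):

```cpp
#include <vector>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

// Collect the encoders whose codec can be muxed into the selected format.
std::vector<const AVCodec*> GetEncodersForFormat(const AVOutputFormat* format, AVMediaType type) {
    std::vector<const AVCodec*> result;
    void* opaque = nullptr;
    while (const AVCodec* codec = av_codec_iterate(&opaque)) {
        if (!av_codec_is_encoder(codec) || codec->type != type)
            continue;
        // avformat_query_codec() > 0 means the format can contain this codec.
        if (avformat_query_codec(format, codec->id, FF_COMPLIANCE_NORMAL) > 0)
            result.push_back(codec);
    }
    return result;
}
```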
This dialog allows changing or unsetting the value of one option. There are three possible variants of this dialog (a layout-selection sketch follows the list):
1. The LineEdit layout. This is used for normal options such as string and duration, and simply provides a text box where the user types the value they want to set.
2. The ComboBox layout. This is used when there are named constants for an option, or when the option accepts an enum value like sample_format or pixel_format. A description will be displayed for the currently selected named constant. The user can also select 'custom' and type in their own value.
3. The CheckBoxes layout. This is used for flags options. A checkbox is displayed for each named constant, and the user can tick the flags they want to set.
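
An illustrative sketch of how a layout could be chosen from the FFmpeg option metadata (the Qt widgets themselves are omitted; has_named_constants is a hypothetical flag computed by scanning the AVClass for AV_OPT_TYPE_CONST entries sharing this option's unit):

```cpp
extern "C" {
#include <libavutil/opt.h>
}

enum class OptionLayout { LineEdit, ComboBox, CheckBoxes };

// Pick one of the three dialog variants for a single option.
OptionLayout SelectLayout(const AVOption& option, bool has_named_constants) {
    if (option.type == AV_OPT_TYPE_FLAGS)
        return OptionLayout::CheckBoxes;
    if (has_named_constants || option.type == AV_OPT_TYPE_SAMPLE_FMT ||
        option.type == AV_OPT_TYPE_PIXEL_FMT)
        return OptionLayout::ComboBox;
    return OptionLayout::LineEdit;
}
```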
These two functions allow the frontend to get a list of encoders/formats and their specific options.
Retrieving the options is harder than it sounds because of FFmpeg's peculiar AVClass and AVOption system. For example, integer and flags options can have 'named constants'. These are of type `AV_OPT_TYPE_CONST` and are grouped by the `unit` field; an option recognizes all constants that share its `unit`.
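
A sketch of how the enumeration could work, relying on av_opt_next and the codec's priv_class (error handling omitted):

```cpp
#include <map>
#include <string>
#include <vector>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

// Walk the codec's private options once, splitting real options from the
// AV_OPT_TYPE_CONST entries, which are grouped by their `unit` string.
void CollectOptions(const AVCodec* codec, std::vector<const AVOption*>& options,
                    std::map<std::string, std::vector<const AVOption*>>& constants_by_unit) {
    const AVClass* av_class = codec->priv_class;
    if (!av_class)
        return;
    const AVOption* option = nullptr;
    // av_opt_next only needs a pointer whose first member points to the AVClass,
    // so passing &av_class lets us iterate without creating a codec context.
    while ((option = av_opt_next(&av_class, option))) {
        if (option->type == AV_OPT_TYPE_CONST) {
            if (option->unit)
                constants_by_unit[option->unit].push_back(option);
        } else {
            options.push_back(option);
        }
    }
}
```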
Previously, we just used the native sample rate for encoding. However, some encoders, such as libmp3lame, do not support it. Therefore, we now use a supported sample rate, preferring the native one when possible.
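
Roughly, the selection could look like this, assuming the classic supported_samplerates array on AVCodec (the real logic may weigh candidates differently):

```cpp
#include <cstdlib>
extern "C" {
#include <libavcodec/avcodec.h>
}

// Pick an encoder-supported sample rate, preferring the native one; if the
// encoder lists no restrictions (nullptr), any rate is fine.
int SelectSampleRate(const AVCodec* codec, int native_sample_rate) {
    if (!codec->supported_samplerates)
        return native_sample_rate;
    int best = codec->supported_samplerates[0];
    for (const int* rate = codec->supported_samplerates; *rate != 0; ++rate) {
        if (*rate == native_sample_rate)
            return native_sample_rate;
        // Otherwise remember the closest supported rate as a fallback.
        if (std::abs(*rate - native_sample_rate) < std::abs(best - native_sample_rate))
            best = *rate;
    }
    return best;
}
```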
FFmpeg requires audio data to be sent as a sequence of frames, each containing the same specific number of samples. Previously, we buffered input samples in FFmpegBackend. However, as the source and destination sample rates can now differ, we should buffer resampled data instead. swresample has an internal input buffer, so we now forward all input to it and 'gradually' receive resampled data, at most one frame_size at a time. When there is not enough resampled data to form a frame, we record the current offset and request less data on the next call.
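
A simplified sketch of that flow, assuming interleaved 16-bit stereo output for brevity (the actual code also handles planar formats, and send_frame is a hypothetical callback):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>
extern "C" {
#include <libswresample/swresample.h>
}

// Push all incoming samples into swresample's internal buffer, then pull
// resampled data out at most `frame_size` samples at a time. `frame` and
// `offset` persist across calls so a partially filled frame can be completed
// later.
void ProcessAudio(SwrContext* swr, const std::uint8_t** input, int in_samples,
                  std::vector<std::int16_t>& frame, int frame_size, int& offset,
                  void (*send_frame)(const std::int16_t* samples, int count)) {
    frame.resize(static_cast<std::size_t>(frame_size) * 2); // 2 channels, interleaved
    while (true) {
        std::uint8_t* out_plane =
            reinterpret_cast<std::uint8_t*>(frame.data() + static_cast<std::size_t>(offset) * 2);
        // Feed the input once; afterwards pass nullptr/0 and keep draining
        // swresample's buffer, requesting only what the current frame still needs.
        const int received = swr_convert(swr, &out_plane, frame_size - offset, input, in_samples);
        input = nullptr;
        in_samples = 0;
        if (received <= 0)
            return; // 0: not enough buffered data yet; <0: error (omitted here)
        offset += received;
        if (offset < frame_size)
            return; // record the offset and wait for more input on the next call
        send_frame(frame.data(), frame_size); // hand a complete frame to the encoder
        offset = 0;
    }
}
```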
Additionally, this commit fixes a flaw: when an encoder supports variable frame sizes, its frame size is reported as 0, which breaks our buffering system. We now treat variable-frame-size encoders as having a frame size of 160 (the size of an HLE audio frame).
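
The workaround could be as small as the following helper (names are illustrative):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Variable-frame-size encoders report frame_size == 0, which the fixed-size
// buffering above cannot handle; fall back to one HLE audio frame (160 samples).
int GetEncoderFrameSize(const AVCodecContext* context) {
    constexpr int hle_audio_frame_samples = 160;
    if (context->frame_size == 0 ||
        (context->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE) != 0)
        return hle_audio_frame_samples;
    return context->frame_size;
}
```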
We previously assumed that the first preferred sample format is planar, but that may not be true for all codecs. Instead we should find a supported sample format that is planar.
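
A possible selection helper using the codec's sample_fmts list and av_sample_fmt_is_planar (the fallback behaviour here is an assumption):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/samplefmt.h>
}

// Pick a planar sample format the encoder actually supports, rather than
// assuming the first entry of sample_fmts is planar.
AVSampleFormat SelectPlanarSampleFormat(const AVCodec* codec) {
    if (!codec->sample_fmts)
        return AV_SAMPLE_FMT_S16P; // no restrictions advertised; pick a planar default
    for (const AVSampleFormat* fmt = codec->sample_fmts; *fmt != AV_SAMPLE_FMT_NONE; ++fmt) {
        if (av_sample_fmt_is_planar(*fmt))
            return *fmt;
    }
    // No planar format available: fall back to the first supported one.
    return codec->sample_fmts[0];
}
```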
While YUV420P is widely used, not all encoders accept it (e.g. Intel QSV only accepts NV12). We should use the codec's preferred pixel format instead as we need to rescale the frame anyway.
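
For example, picking the destination format could reduce to taking the first entry of the encoder's pix_fmts list (a sketch; the fallback default is an assumption):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Use the encoder's preferred pixel format instead of hardcoding YUV420P;
// e.g. Intel QSV encoders list NV12 first. The frame is rescaled with
// swscale anyway, so any supported destination format works.
AVPixelFormat SelectPixelFormat(const AVCodec* codec) {
    if (codec->pix_fmts && codec->pix_fmts[0] != AV_PIX_FMT_NONE)
        return codec->pix_fmts[0];
    return AV_PIX_FMT_YUV420P; // encoder advertises no list; keep the old default
}
```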