- Add missing virtual destructor on `SSLBackend`.
- On Windows, filter out `POLLWRBAND` (one of the new flags added) when
calling `WSAPoll`, because despite the constant being defined on
Windows, passing it causes `WSAPoll` to fail with `EINVAL` (see the sketch
after this list).
- Reduce OpenSSL version requirement to satisfy CI; I haven't tested
whether it actually builds (or runs) against 1.1.1, but if not, I'll
figure it out.
- Change an instance of `memcpy` to `memmove`, even though the arguments
cannot overlap, to avoid a [strange GCC
error](https://github.com/yuzu-emu/yuzu/pull/10912#issuecomment-1606283351).
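As a rough illustration of the `WSAPoll` workaround in the second bullet (a sketch only; the function and variable names here are illustrative, not the PR's actual code):

```cpp
#ifdef _WIN32
#include <winsock2.h>

#include <vector>

// Strip POLLWRBAND from the requested events before calling WSAPoll;
// Windows defines the constant, but passing it makes WSAPoll fail with EINVAL.
int PollWithoutWrband(std::vector<WSAPOLLFD>& fds, int timeout_ms) {
    for (WSAPOLLFD& fd : fds) {
        fd.events = static_cast<SHORT>(fd.events & ~POLLWRBAND);
    }
    return WSAPoll(fds.data(), static_cast<ULONG>(fds.size()), timeout_ms);
}
#endif
```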
This implements some missing network APIs, including a large chunk of the SSL
service, enough for Mario Maker (with an appropriate mod applied) to connect to
the fan server [Open Course World](https://opencourse.world/).
Connecting to first-party servers is out of scope for this PR and is a
minefield I'd rather not step into.
## TLS
TLS is implemented with multiple backends depending on the system's 'native'
TLS library. Currently there are two backends: Schannel for Windows, and
OpenSSL for Linux. (In reality Linux is a bit of a free-for-all where there's
no one 'native' library, but OpenSSL is the closest it gets.) On macOS the
'native' library is SecureTransport but that isn't implemented in this PR.
(Instead, all non-Windows OSes will use OpenSSL unless disabled with
`-DENABLE_OPENSSL=OFF`.)
Why have multiple backends instead of just using a single library, especially
given that Yuzu already embeds mbedtls for cryptographic algorithms? Well, I
tried implementing this on mbedtls first, but the problem is TLS policies -
mainly trusted certificate policies, and to a lesser extent trusted algorithms,
SSL versions, etc.
...In practice, the chance that someone is going to conduct a man-in-the-middle
attack on a third-party game server is pretty low, but I'm a security nerd so I
like to do the right security things.
My base assumption is that we want to use the host system's TLS policies. An
alternative would be to more closely emulate the Switch's TLS implementation
(which is based on NSS). But for one thing, I don't feel like reverse
engineering it. And I'd argue that for third-party servers such as Open Course
World, it's theoretically preferable to use the system's policies rather than
the Switch's, for two reasons:
1. Someday the Switch will stop being updated, and the trusted cert list,
algorithms, etc. will start to go stale, but users will still want to
connect to third-party servers, and there's no reason they shouldn't have
up-to-date security when doing so. At that point, homebrew users on actual
hardware may patch the TLS implementation, but for emulators it's simpler to
just use the host's stack.
2. Also, it's good to respect any custom certificate policies the user may have
added systemwide. For example, they may have added custom trusted CAs in
order to use TLS debugging tools or pass through corporate MitM middleboxes.
Or they may have removed some CAs that are normally trusted out of paranoia.
Note that this policy wouldn't work as-is for connecting to first-party
servers, because some of them serve certificates based on Nintendo's own CA
rather than a publicly trusted one. However, this could probably be solved
easily by using appropriate APIs to add Nintendo's CA as an alternate
trusted cert for Yuzu's connections. That is not implemented in this PR
because, again, first-party servers are out of scope.
(If anything I'd rather have an option to _block_ connections to Nintendo
servers, but that's not implemented here.)
To use the host's TLS policies, there are three theoretical options:
a) Import the host's trusted certificate list into a cross-platform TLS
library (presumably mbedtls).
b) Use the native TLS library to verify certificates but use a cross-platform
TLS library for everything else.
c) Use the native TLS library for everything.
Two problems with option a). First, importing the trusted certificate list at
minimum requires a bunch of platform-specific code, which mbedtls does not have
built in. Interestingly, OpenSSL recently gained the ability to import the
Windows certificate trust store... but that leads to the second problem, which
is that a list of trusted certificates is [not expressive
enough](https://bugs.archlinux.org/task/41909) to capture a modern certificate
trust policy. For example, Windows has the concept of [explicitly distrusted
certificates](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn265983(v=ws.11)),
and macOS requires Certificate Transparency validation for some certificates
with complex rules for when it's required.
Option b) (using the native library just to verify certs) is probably feasible, but
it would miss aspects of TLS policy other than trusted certs (like allowed
algorithms), and in any case it might well require writing more code, not less,
compared to using the native library for everything.
So I ended up at option c), using the native library for everything.
What I'd *really* prefer would be to use a third-party library that does option
c) for me. Rust has a good library for this,
[native-tls](https://docs.rs/native-tls/latest/native_tls/). I did search, but
I couldn't find a good option in the C or C++ ecosystem, at least not any that
wasn't part of some much larger framework. I was surprised - isn't this a
pretty common use case? Well, many applications only need TLS for HTTPS, and they can
use libcurl, which has a TLS abstraction layer internally but doesn't expose
it. Other applications only support a single TLS library, or use one of the
aforementioned larger frameworks, or are platform-specific to begin with, or of
course are written in a non-C/C++ language, most of which have some canonical
choice for TLS. But there are also many applications that have a set of TLS
backends just like this; it's just that nobody has gone ahead and abstracted
the pattern into a library, at least not a widespread one.
Amusingly, there is one TLS abstraction layer that Yuzu already bundles: the
one in ffmpeg. But it is missing some features that would be needed to use it
here (like reusing an existing socket rather than managing the socket itself).
That does mean, though, that the wiki's build instructions for Linux (and
macOS for some reason?) already recommend installing OpenSSL, so there's no
need to update those.
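Concretely, the backend split described above can be thought of as a small abstract interface that the Schannel and OpenSSL code each implement. A rough sketch follows; the class name `SSLBackend` comes from the changelog above, but the method names and signatures are my own assumption, not the PR's actual interface:

```cpp
#include <cstdint>
#include <span>

using u8 = std::uint8_t; // yuzu-style alias, assumed here for brevity

// Hypothetical shape of a per-platform TLS backend; Schannel (Windows) and
// OpenSSL (everything else) would each provide an implementation.
class SSLBackend {
public:
    virtual ~SSLBackend() = default; // the virtual destructor mentioned earlier

    // In this sketch, negative return values indicate an error.
    virtual int DoHandshake() = 0;
    virtual int Read(std::span<u8> buffer) = 0;         // returns bytes read
    virtual int Write(std::span<const u8> buffer) = 0;  // returns bytes written
};
```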
## Other APIs implemented
- Sockets:
- GetSockOpt(`SO_ERROR`)
- SetSockOpt(`SO_NOSIGPIPE`) (stub, I have no idea what this does on Switch)
- `DuplicateSocket` (because the SSL sysmodule calls it internally)
- More `PollEvents` values
- NSD:
- `Resolve` and `ResolveEx` (stub, good enough for Open Course World and
probably most third-party servers, but not first-party)
- SFDNSRES:
- `GetHostByNameRequest` and `GetHostByNameRequestWithOptions`
- `ResolverSetOptionRequest` (stub)
## Fixes
- Parts of the socket code were previously allocating a `sockaddr` object on
the stack when calling functions that take a `sockaddr*` (e.g. `accept`).
This might seem like the right thing to do to avoid illegal aliasing, but in
fact `sockaddr` is not guaranteed to be large enough to hold any particular
type of address, only the header. This worked in practice because in
practice `sockaddr` is the same size as `sockaddr_in`, but it's not how the
API is meant to be used. I changed this to allocate a `sockaddr_in` on the
stack and `reinterpret_cast` it (see the sketch after this list). I could try
to do something cleverer with `aligned_storage`, but casting is the idiomatic
way to use these particular APIs, so it's really the system's responsibility
to avoid any aliasing issues.
- I rewrote most of the `GetAddrInfoRequest[WithOptions]` implementation. The
old implementation invoked the host's `getaddrinfo` directly from `sfdnsres.cpp`,
and directly passed through the host's socket type, protocol, etc. values
rather than looking up the corresponding constants on the Switch. To be
fair, these constants don't tend to actually vary across systems, but
still... I added a wrapper for `getaddrinfo` in
`internal_network/network.cpp` similar to the ones for other socket APIs, and
changed the `GetAddrInfoRequest` implementation to use it. While I was at
it, I rewrote the serialization to use the same approach I used to implement
`GetHostByNameRequest`, because it reduces the number of size calculations.
While doing so I removed `AF_INET6` support because the Switch doesn't
support IPv6; it might be nice to support IPv6 anyway, but that would have to
apply to all of the socket APIs.
I also corrected the IPC wrappers for `GetAddrInfoRequest` and
`GetAddrInfoRequestWithOptions` based on reverse engineering and hardware
testing. Every call to `GetAddrInfoRequestWithOptions` returns *four*
different error codes (IPC status, getaddrinfo error code, netdb error code,
and errno), and `GetAddrInfoRequest` returns three of those but in a
different order. It doesn't matter much in practice, but the existing
implementation was a bit off, as I discovered while testing `GetHostByNameRequest`.
- The new serialization code is based on two simple helper functions:
```cpp
template <typename T> static void Append(std::vector<u8>& vec, T t);
void AppendNulTerminated(std::vector<u8>& vec, std::string_view str);
```
I was thinking there must be existing functions somewhere that assist with
serialization/deserialization of binary data, but all I could find were the
helper methods in `IOFile` and `HLERequestContext`, not anything that could
be used with a generic byte buffer. If I'm not missing something, then
maybe I should move the above functions to a new header in `common`...
right now they're just sitting in `sfdnsres.cpp` where they're used.
(A possible implementation is sketched after this list.)
- Not a fix, but `SocketBase::Recv`/`Send` are changed to use `std::span<u8>`
rather than `std::vector<u8>&` to avoid needing to copy the data to/from a
vector when those methods are called from the TLS implementation.
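For the first fix above (allocating a concrete `sockaddr_in` and casting), the pattern looks roughly like this; POSIX spelling, and the function and variable names are mine, not the PR's:

```cpp
#include <netinet/in.h>
#include <sys/socket.h>

int AcceptOne(int listen_fd) {
    sockaddr_in addr_in{}; // concrete IPv4 address type, guaranteed large enough
    socklen_t addr_len = sizeof(addr_in);
    // Casting to sockaddr* is the idiomatic way these APIs are meant to be called.
    return accept(listen_fd, reinterpret_cast<sockaddr*>(&addr_in), &addr_len);
}
```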
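And for the `Append`/`AppendNulTerminated` helpers, one possible implementation (a sketch; the PR's actual definitions in `sfdnsres.cpp` may differ):

```cpp
#include <cstdint>
#include <string_view>
#include <type_traits>
#include <vector>

using u8 = std::uint8_t; // yuzu-style alias, assumed here

// Append the raw object representation of a trivially copyable value.
template <typename T>
static void Append(std::vector<u8>& vec, T t) {
    static_assert(std::is_trivially_copyable_v<T>);
    const auto* bytes = reinterpret_cast<const u8*>(&t);
    vec.insert(vec.end(), bytes, bytes + sizeof(t));
}

// Append a string followed by a terminating NUL byte.
static void AppendNulTerminated(std::vector<u8>& vec, std::string_view str) {
    vec.insert(vec.end(), str.begin(), str.end());
    vec.push_back(0);
}
```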
The latest version of the MSVC STL brings C++23 standard library modules, which conflict with precompiled headers.
Disabling them with /experimental:module- has no effect, so force C++20 in the meantime while we wait for module support in other compilers.
Currently the exported version of lz4 provided by vcpkg is malformed and
reported as "unknown", which breaks querying for a specific version.
Fixes configuring CMake when using vcpkg.
Uses find_package_handle_standard_args to handle the find_package call
from the root CMakeLists. Removes all the unnecessary logic after the
find_package and just sets it to REQUIRED.
This PR rearranges things in the CMake system to make compiling with Qt6 possible:
1. Camera API has changed in Qt6, so the camera feature is disabled
2. A previous fix involving QLocale is now version gated.
3. QRegExp replaced with QRegularExpression, see #5343
4. Qt6_LOCATION option added to specify a location to search for Qt6
(see examples below)
5. windeployqt is used to copy Qt6 files into the build directory on Windows
Notes for Arch Linux:
The Arch install happened to have qt6-base, qt6-declarative, and qt6-translations installed.
mkdir build && cd build
cmake .. -GNinja -DYUZU_USE_BUNDLED_VCPKG=ON -DYUZU_TESTS=OFF -DENABLE_QT6=YES -DYUZU_USE_BUNDLED_QT=NO
Windows (MSVC)
Qt wants users to download precompiled libraries via an online installer;
it is worth noting that the GPL/LGPL takes precedence over any ...
In the Qt Maintenance Tool, under a version such as 6.3.1,
select "MSVC 2019 64-bit".
Under Additional Libraries, Qt Multimedia may be of use for Camera support.
For the Web Applet I had to select the following:
PDF, Positioning, WebChannel, WebEngine
mkdir build && cd build
cmake -G "Visual Studio 16 2019" -DQt6_LOCATION=C:/Qt/6.4.0/msvc2019_64/ \
-DENABLE_COMPATIBILITY_LIST_DOWNLOAD=YES -DYUZU_USE_BUNDLED_QT=NO \
-DENABLE_QT_TRANSLATION=YES -DENABLE_QT6=YES ..
Some numbers for reference (msvc2019_64)
Qt5 (slimmed down) 508 MB
Qt5.15.2 all in 929 MB
Qt6.3.1 1.71 GB
Qt6.3.2 1.73 GB
Qt6.4.0-beta3 1.83 GB
Qt6.4.0 1.67 GB
- Prevent sleep via xdg-desktop-portal after fa7abafa5f
- Pause on suspend after b7642cff36
- Exit on SIGINT/SIGTERM after 9479940a1f
- Improve dark themes after b51db12567
vcpkg: Add Catch2 2.13.9
Catch2 >= 3.0 is not compatible with earlier versions, and for now we
must override the desired version in our vcpkg manifest. We can do this
programmatically by using VCPKG_MANIFEST_FEATURES.
CMakeLists: Search for lz4 CONFIG mode first
vcpkg's lz4 CONFIG cmake script works in Release mode but not in Debug
mode, failing to copy the correct DLLs at compile time.
We still need to search for the regular mode for system-installed
versions.
CMakeLists: Clean up boost exports
Remove some Conan-specific workarounds.
CMakeLists: Use vcpkg for MSVC by default
Not enabling it generally since it's much easier to have system
dependencies installed for Linux and MinGW.
[REUSE] is a specification that aims at making file copyright
information consistent, so that it can be both human and machine
readable. It basically requires that all files have a header containing
copyright and licensing information. When this isn't possible, like
when dealing with binary assets, generated files or embedded third-party
dependencies, it is permitted to insert copyright information in the
`.reuse/dep5` file.
Oh, and it also requires that all the licenses used in the project are
present in the `LICENSES` folder, that's why the diff is so huge.
This can be done automatically with `reuse download --all`.
The `reuse` tool also contains a handy subcommand that analyzes the
project and tells whether or not the project is (still) compliant,
`reuse lint`.
Following REUSE has a few advantages over the current approach:
- Copyright information is easy to access for users / downstream
- Files like `dist/license.md` do not need to exist anymore, as
`.reuse/dep5` is used instead
- `reuse lint` makes it easy to ensure that copyright information of
files like binary assets / images is always accurate and up to date
To add copyright information to files that didn't have it, I looked up
who committed what and when for each file. As yuzu contributors do not
have to sign a CLA or similar, I couldn't assume that copyright belonged
to the "yuzu Emulator Project", so I used the name and/or email of
the commit author instead.
[REUSE]: https://reuse.software
Follow-up to 01cf05bc75
Between packages breaking, Conan always being a moving target for
minimum required CMake support, and now their moves to Conan 2.0 causing
existing packages to break, I suppose this was a long time coming. vcpkg
isn't without its drawbacks, but at the moment it seems easier on the
project to use for external packages.
Mostly removes the logic for Conan from the root CMakeLists file,
leaving basic find_package() calls in its place; only the calls that
require CONFIG mode specify it. clang and linux
CI now use the vcpkg toolchain file configured in the Docker container
when possible.
mingw CI turns off YUZU_TESTS because there's no way on the container to
run Windows executables on a Linux host anyway, and it's not easy to get
Catch2 there.
The AppStream file is mostly copied from the one already used by the
Flatpak yuzu build:
62fc225acf/org.yuzu_emu.yuzu.metainfo.xml
As it already defines the application id as org.yuzu_emu.yuzu I renamed
the yuzu.desktop and yuzu.xml files so that they match.
I've also made some minor tweaks to it, like fixing the capitalization
of "yuzu", adding a few keys and sorting them as presented in the
documentation.
Lastly, I added PrefersNonDefaultGPU=true to the .desktop file so that
yuzu is launched with the dedicated graphics card on Linux.
The premise behind ad55faaa3 was an issue between Conan's
libiconv package and compiling SDL2 from our externals. Since none of
our Conan externals require libiconv any longer, though, we can remove
downloading our own Boost package and just rely on Conan again.
Additionally, removing CONFIG from the find_package(boost) call fixes
issues with finding Boost on Fedora and MSYS2, which was the main
motivation for this.
Also, remove QUIET, since it makes it harder to tell what went wrong
if finding Boost fails.
* this resolves the todo items in the CMakeLists.txt
* a version requirement check for ffmpeg is added to catch issues early
* for future-proofing, nasm/yasm is now only required when building on
x86/AMD64 systems