Summary: Ref T7149. We can't compute hashes of large files efficiently, but we can resume a partially completed upload when the same author uploads a file with the same name and file size. This seems like a reasonable heuristic that is unlikely to ever misfire, even if it's a little magical.
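A minimal sketch of that matching rule, in TypeScript for illustration only (the real implementation is server-side PHP; every name here is hypothetical):

```typescript
// Hypothetical model of the resume heuristic: only resume a partially
// completed upload when author, file name, and byte size all match.
interface FileRecord {
  authorPHID: string;
  name: string;
  byteSize: number;
  isPartial: boolean;
}

function findResumableUpload(
  partials: FileRecord[],
  authorPHID: string,
  name: string,
  byteSize: number,
): FileRecord | null {
  const match = partials.find(
    (f) =>
      f.isPartial &&
      f.authorPHID === authorPHID &&
      f.name === name &&
      f.byteSize === byteSize,
  );
  return match ?? null;
}
```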
Test Plan:
- Forced chunking on.
- Started uploading a chunked file.
- Closed the browser window.
- Dropped the same file into a new window.
- Upload resumed //(!!!)//
- Did this again.
- Downloaded the final file, which successfully reconstructed the original file.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: joshuaspence, chad, epriestley
Maniphest Tasks: T7149
Differential Revision: https://secure.phabricator.com/D12070
Summary:
Ref T7149. This adds chunking support to drag-and-drop uploads. Right now it never activates unless you hack things up, since the chunk engine is still hard-coded as disabled.
The overall approach is the same as `arc upload` in D12061, with some slight changes to the API return values to avoid a few extra HTTP calls.
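As a rough sketch of the browser-side piece (the real JS client differs; the names and chunk size here are made up), a dropped `File` can be cut into chunks with the standard `Blob.slice()` API:

```typescript
// Illustrative only: slice a dropped File into fixed-size chunks using
// the standard Blob.slice() API. The chunk size here is arbitrary.
const CHUNK_SIZE = 4 * 1024 * 1024; // 4 MB for this sketch

function sliceIntoChunks(file: File): Blob[] {
  const chunks: Blob[] = [];
  for (let start = 0; start < file.size; start += CHUNK_SIZE) {
    // slice() clamps the end offset, so the final chunk may be shorter.
    chunks.push(file.slice(start, start + CHUNK_SIZE));
  }
  return chunks;
}
```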
Test Plan:
- Enabled chunk engine.
- Uploaded some READMEs in a bunch of tiny 32-byte chunks.
- Worked out of the box in Safari, Chrome, and Firefox.
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: epriestley
Maniphest Tasks: T7149
Differential Revision: https://secure.phabricator.com/D12066
Summary:
Ref T7149. This isn't complete and isn't active yet, but does basically work. I'll shore it up in the next few diffs.
The new workflow goes like this:
> Client, file.allocate(): I'd like to upload a file with length L, metadata M, and hash H.
Then the server returns `upload` (a boolean) and `filePHID` (a PHID, or null). The combinations mean:
| upload | filePHID | means |
|---|---|---|
| false | false | Server can't accept file. |
| false | true | File data already known; file created from hash. |
| true | false | Just upload normally. |
| true | true | Query chunks to start or resume a chunked upload. |
All but the last case are uninteresting and work like existing uploads with `file.uploadhash` (which we can eventually deprecate); a client-side dispatch over these cases is sketched below.
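For illustration, a client might dispatch over these four combinations like so (TypeScript sketch; the helper names are made up):

```typescript
// Hypothetical client-side dispatch over the file.allocate() result.
interface AllocateResult {
  upload: boolean;
  filePHID: string | null;
}

// Stubs for the two upload paths; names are invented for this sketch.
async function uploadNormally(): Promise<void> {}
async function resumeChunkedUpload(filePHID: string): Promise<void> {}

async function handleAllocate(result: AllocateResult): Promise<void> {
  if (!result.upload && result.filePHID === null) {
    throw new Error('Server cannot accept this file.');
  } else if (!result.upload) {
    // File data already known; the server created the file from the hash.
    return;
  } else if (result.filePHID === null) {
    await uploadNormally(); // single-request upload, as today
  } else {
    await resumeChunkedUpload(result.filePHID); // start or resume chunks
  }
}
```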
In the last case:
> Client, file.querychunks(): Give me a list of chunks that I should upload.
This returns all the chunks for the file. Each chunk has a start byte, an end byte, and a "complete" flag indicating whether the server already has that chunk's data.
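Modeled in TypeScript for illustration (field names are assumptions based on the description above, not a verified response schema):

```typescript
// One chunk record, as described above. Field names are assumptions.
interface ChunkInfo {
  byteStart: number; // offset of the chunk's first byte
  byteEnd: number;   // offset just past the chunk's last byte
  complete: boolean; // true if the server already holds this data
}

// The client only needs to send the chunks the server is missing.
function missingChunks(chunks: ChunkInfo[]): ChunkInfo[] {
  return chunks.filter((chunk) => !chunk.complete);
}
```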
Then, the client fills in chunks by sending them:
> Client, file.uploadchunk(): Here is the data for one chunk.
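Putting the two calls together, a resume loop might look like this (a sketch only; the Conduit wire format and parameter names here are simplified assumptions):

```typescript
// Sketch of the fill-in loop: upload only the chunks the server does not
// already have. ChunkInfo matches the previous sketch; the wire format
// and parameter names are assumptions, not a verified API.
interface ChunkInfo { byteStart: number; byteEnd: number; complete: boolean; }

// Simplified Conduit helper; the real API encodes requests differently.
async function conduit(
  method: string,
  params: Record<string, unknown>,
): Promise<any> {
  const response = await fetch(`/api/${method}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(params),
  });
  return (await response.json()).result;
}

// Base64-encode bytes without hitting argument limits on large arrays.
function toBase64(bytes: Uint8Array): string {
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

async function fillChunks(filePHID: string, data: Uint8Array): Promise<void> {
  const chunks: ChunkInfo[] = await conduit('file.querychunks', { filePHID });
  for (const chunk of chunks.filter((c) => !c.complete)) {
    const body = data.slice(chunk.byteStart, chunk.byteEnd);
    await conduit('file.uploadchunk', {
      filePHID,
      byteStart: chunk.byteStart,
      dataEncoding: 'base64',
      data: toBase64(body),
    });
  }
}
```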
Some of this doesn't work yet, or has caveats:
- I haven't tested resume much.
- Files need an "isPartial()" flag for partial uploads, and the UI needs to respect it.
- The JS client needs to become chunk-aware.
- Chunk size is set crazy low to make testing easier.
- There are still some debugging flags, which I'll remove soon-ish.
- Downloading works, but still streams the whole file into memory.
- This storage engine is disabled by default (hardcoded as a unit test engine) because it's still sketchy.
- Need some code to remove the "isPartial" flag when the last chunk is uploaded.
- Maybe do checksumming on chunks.
Test Plan:
- Hacked up `arc upload` (see next diff) to be chunk-aware and uploaded a README in eighteen 32-byte chunks. Then downloaded it; got the same file back that I uploaded.
- File UI now shows some basic chunk info for chunked files:
{F336434}
Reviewers: btrahan
Reviewed By: btrahan
Subscribers: joshuaspence, epriestley
Maniphest Tasks: T7149
Differential Revision: https://secure.phabricator.com/D12060