phorge-phorge/support/aphlict/server/lib/AphlictAdminServer.js

'use strict';

var JX = require('./javelin').JX;

require('./AphlictListenerList');

var http = require('http');
var url = require('url');
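
// Admin HTTP endpoint for the Aphlict notification server: the web
// application (and cluster peers) POST notification messages to '/', and
// '/status/' reports runtime statistics as JSON.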

JX.install('AphlictAdminServer', {

  construct: function(server) {
    this._startTime = new Date().getTime();
    this._messagesIn = 0;
    this._messagesOut = 0;

    server.on('request', JX.bind(this, this._onrequest));
    this._server = server;
    this._clientServers = [];
  },
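
  // Javelin's install() generates getter/setter accessors (for example,
  // getClientServers() and setPeerList()) for each entry in 'properties';
  // the launcher presumably uses these setters to wire in collaborators.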
  properties: {
    clientServers: null,
    logger: null,
    peerList: null
  },

  members: {
    _messagesIn: null,
    _messagesOut: null,
    _server: null,
    _startTime: null,
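
    // Collect the per-instance listener lists from every client server, so
    // a message for one instance only reaches that instance's clients.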
    getListenerLists: function(instance) {
      var clients = this.getClientServers();

      var lists = [];
      for (var ii = 0; ii < clients.length; ii++) {
        lists.push(clients[ii].getListenerList(instance));
      }

      return lists;
    },

    log: function() {
      var logger = this.getLogger();
      if (!logger) {
        return;
      }

      logger.log.apply(logger, arguments);

      return this;
    },

    listen: function() {
      return this._server.listen.apply(this._server, arguments);
    },
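
    // Route admin requests: POST '/' publishes a notification and GET
    // '/status/' returns statistics as JSON; anything else is a 404.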
    _onrequest: function(request, response) {
      var self = this;
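      // Clients are partitioned by the '?instance=' query parameter;
      // non-instanced installs omit it and fall back to 'default'.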
      var u = url.parse(request.url, true);
      var instance = u.query.instance || 'default';

      // Publishing a notification.
      if (u.pathname == '/') {
        if (request.method == 'POST') {
          var body = '';

          request.on('data', function(data) {
            body += data;
          });

          request.on('end', function() {
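            // The outer try rejects malformed JSON with a 400; the nested
            // try (below) turns transmission failures into a 500. Either
            // way, 'finally' ends the response exactly once.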
            try {
              var msg = JSON.parse(body);

              self.log(
                'Received notification (' + instance + '): ' +
                JSON.stringify(msg));
              ++self._messagesIn;

              try {
                self._transmit(instance, msg, response);
              } catch (err) {
                self.log(
                  '<%s> Internal Server Error! %s',
                  request.socket.remoteAddress,
                  err);
                response.writeHead(500, 'Internal Server Error');
              }
            } catch (err) {
              self.log(
                '<%s> Bad Request! %s',
                request.socket.remoteAddress,
                err);
              response.writeHead(400, 'Bad Request');
            } finally {
              response.end();
            }
          });
        } else {
          response.writeHead(405, 'Method Not Allowed');
          response.end();
        }
      } else if (u.pathname == '/status/') {
        this._handleStatusRequest(request, response, instance);
      } else {
        response.writeHead(404, 'Not Found');
        response.end();
      }
    },
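
    // Sum listener counts across every client server for one instance and
    // report them (plus uptime and message counters) as JSON.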
    _handleStatusRequest: function(request, response, instance) {
      var active_count = 0;
      var total_count = 0;

      var lists = this.getListenerLists(instance);
      for (var ii = 0; ii < lists.length; ii++) {
        var list = lists[ii];
        active_count += list.getActiveListenerCount();
        total_count += list.getTotalListenerCount();
      }

      var server_status = {
        'instance': instance,
        'uptime': (new Date().getTime() - this._startTime),
        'clients.active': active_count,
        'clients.total': total_count,
        'messages.in': this._messagesIn,
        'messages.out': this._messagesOut,
        'version': 7
      };

      response.writeHead(200, {'Content-Type': 'application/json'});
      response.write(JSON.stringify(server_status));
      response.end();
    },

    /**
     * Transmits a message to all subscribed listeners.
     */
    _transmit: function(instance, message, response) {
      var peer_list = this.getPeerList();
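
      // Fingerprinting deduplicates cluster traffic: each server stamps
      // messages with its own fingerprint and drops messages it has already
      // stamped, so a message is delivered locally and rebroadcast to peers
      // at most once. (addFingerprint() is expected to return a falsey
      // value for a message this server has already seen.)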
      message = peer_list.addFingerprint(message);
      if (message) {
        var lists = this.getListenerLists(instance);

        for (var ii = 0; ii < lists.length; ii++) {
          var list = lists[ii];
          var listeners = list.getListeners();
          this._transmitToListeners(list, listeners, message);
        }

        peer_list.broadcastMessage(instance, message);
      }

      // Respond to the caller with our fingerprint so it can stop sending
      // us traffic we don't need to know about if it's a peer. In
      // particular, this stops us from broadcasting messages to ourselves
      // if we appear in the cluster list.
      var receipt = {
        fingerprint: this.getPeerList().getFingerprint()
      };

      response.writeHead(200, {'Content-Type': 'application/json'});
      response.write(JSON.stringify(receipt));
    },
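
    // Deliver a message to each listener subscribed to at least one of its
    // subscribers; a listener whose socket write fails is pruned from the
    // list rather than retried.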
    _transmitToListeners: function(list, listeners, message) {
      for (var ii = 0; ii < listeners.length; ii++) {
        var listener = listeners[ii];

        if (!listener.isSubscribedToAny(message.subscribers)) {
          continue;
        }

        try {
          listener.writeMessage(message);

          ++this._messagesOut;
          this.log(
            '<%s> Wrote Message',
            listener.getDescription());
        } catch (error) {
          list.removeListener(listener);

          this.log(
            '<%s> Write Error: %s',
            listener.getDescription(),
            error);
        }
      }
    }
  }
});
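
// Usage sketch (not part of this file; the wiring below is an assumption
// based on the generated accessors above, with hypothetical collaborators
// and a hypothetical port/interface):
//
//   var http = require('http');
//   var JX = require('./javelin').JX;
//   require('./AphlictAdminServer');
//
//   var admin = new JX.AphlictAdminServer(http.createServer());
//   admin.setClientServers(clientServers);  // AphlictClientServer instances
//   admin.setPeerList(peerList);            // cluster peers + fingerprint
//   admin.setLogger(logger);
//   admin.listen(22281, '127.0.0.1');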