jdev - 2020-08-11


  1. Lance has left
  2. Lance has joined
  3. Beherit has left
  4. Beherit has joined
  5. Wojtek has left
  6. test2 has joined
  7. test2 has left
  8. test2 has joined
  9. moparisthebest has left
  10. kikuchiyo has left
  11. Lance has left
  12. kikuchiyo has joined
  13. moparisthebest has joined
  14. kikuchiyo has left
  15. kikuchiyo has joined
  16. test2 has left
  17. test2 has joined
  18. kikuchiyo has left
  19. adiaholic_ has left
  20. adiaholic_ has joined
  21. kikuchiyo has joined
  22. adrien has left
  23. Yagizа has joined
  24. Yagizа has left
  25. Yagizа has joined
  26. SouL has joined
  27. Vaulor has joined
  28. lovetox has joined
  29. marc0s has left
  30. marc0s has joined
  31. waqas has left
  32. kikuchiyo has left
  33. paul has joined
  34. lovetox has left
  35. lovetox has joined
  36. test2 has left
  37. test2 has joined
  38. marc0s has left
  39. marc0s has joined
  40. adrien has joined
  41. marc0s has left
  42. marc0s has joined
  43. Beherit has left
  44. tsk has left
  45. tsk has joined
  46. goffi has joined
  47. Beherit has joined
  48. Beherit has left
  49. lovetox has left
  50. Beherit has joined
  51. wurstsalat has joined
  52. adiaholic_ has left
  53. adiaholic_ has joined
  54. debacle has joined
  55. kikuchiyo has joined
  56. kikuchiyo has left
  57. kikuchiyo has joined
  58. kikuchiyo has left
  59. Zash has left
  60. paul has left
  61. kikuchiyo has joined
  62. kikuchiyo has left
  63. Zash has joined
  64. kikuchiyo has joined
  65. kikuchiyo has left
  66. kikuchiyo has joined
  67. kikuchiyo has left
  68. kikuchiyo has joined
  69. kikuchiyo has left
  70. kikuchiyo has joined
  71. kikuchiyo has left
  72. kikuchiyo has joined
  73. Beherit has left
  74. kikuchiyo has left
  75. Beherit has joined
  76. kikuchiyo has joined
  77. kikuchiyo has left
  78. kikuchiyo has joined
  79. kikuchiyo has left
  80. kikuchiyo has joined
  81. kikuchiyo has left
  82. kikuchiyo has joined
  83. kikuchiyo has left
  84. kikuchiyo has joined
  85. kikuchiyo has left
  86. kikuchiyo has joined
  87. floretta has left
  88. floretta has joined
  89. goffi has left
  90. Beherit has left
  91. adrien has left
  92. lovetox has joined
  93. Beherit has joined
  94. marc0s has left
  95. pulkomandy has left
  96. pulkomandy has joined
  97. pulkomandy has left
  98. pulkomandy has joined
  99. pulkomandy has left
  100. pulkomandy has joined
  101. pulkomandy has left
  102. pulkomandy has joined
  103. marc0s has joined
  104. pulkomandy has left
  105. pulkomandy has joined
  106. lovetox has left
  107. test2 has left
  108. test2 has joined
  109. Beherit has left
  110. lovetox has joined
  111. esil has left
  112. paul has joined
  113. lovetox has left
  114. lovetox has joined
  115. Beherit has joined
  116. esil has joined
  117. Zash mod_smacks did have such limits for a short time, but they caused exactly this problem and were then removed until someone can come up with a better way to deal with it
  118. Ge0rG was that the thing that made my server explode due to an unbounded smacks queue? ;)
  119. Beherit has left
  120. Zash I got the impression that yesterday's discussion was about the opposite problem, killing the session once the queue is too large
  121. Zash So, no, not that issue.
  122. lovetox it turned out the user who reported that problem had a queue size of 1000
  123. Ge0rG what lovetox describes sounds like a case of too much burst traffic, not of socket synchronization issues
  124. lovetox the current ejabberd default is much higher
  125. lovetox but a few versions ago it was 1000
  126. Ge0rG if you join a dozen MUCs at the same time, you might well run into a 1000 stanza limit
  127. lovetox 1000 is like nothing
  128. lovetox you can't even join one IRC room like #ubuntu or #python
  129. Ge0rG lovetox: only join MUCs one at a time ;)
  130. lovetox you get instantly disconnected
  131. Ge0rG Matrix HQ
  132. Ge0rG unless the bridge is down ;)
  133. lovetox the current ejabberd default is 5000
  134. lovetox which until now works ok
  135. Ge0rG I'm sure the Matrix HQ has more users. But maybe it's slow enough in pushing them over the bridge that you can fetch them from your server before you are killed
  136. Kev Ge0rG: Oh, you're suggesting that it's a kill based on timing out an ack because the server is ignoring that it's reading stuff from the socket that's acking old stanzas, rather than timing out on data not being received?
  137. Kev That seems plausible.
  138. Ge0rG Kev: I'm not sure it has to do with the server's reading side of the c2s socket at all
  139. Holger <- still trying to parse that sentence :-)
  140. Zash Dunno how ejabberd works but that old mod_smacks version just killed the session once it hit n queued stanzas.
  141. Ge0rG Holger: I'm not sure I understood it either
  142. adiaholic_ has left
  143. adiaholic_ has joined
  144. Ge0rG Kev: I think it's about N stanzas suddenly arriving for a client, with N being larger than the maximum queue size
  145. Holger Zash: That's how ejabberd works. Yes that's problematic, but doing nothing is even more so, and so far I see no better solution.
  146. Kev Ah. I understand now, yes.
  147. Ge0rG Holger: you could add a time-based component to that, i.e. allow short bursts to exceed the limit
  148. test2 has left
  149. Ge0rG give the client a chance to consume the burst and to ack it
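The policy being argued over here (Holger's hard cap that kills the session versus Ge0rG's suggestion to tolerate short bursts) can be reduced to a small decision function over the XEP-0198 unacked-stanza queue. A minimal sketch in Python, with hypothetical names and limits rather than any real mod_smacks or ejabberd code:

```python
import time
from collections import deque

class SmacksQueue:
    """Sketch of a burst-tolerant XEP-0198 outgoing queue (hypothetical, not the
    actual mod_smacks/ejabberd code). Stanzas are kept until acked; the session
    is only killed if the queue stays above the soft limit for longer than the
    grace window, so a MUC-join burst can drain before the kill."""

    def __init__(self, soft_limit=1000, hard_limit=10000, grace_seconds=60.0):
        self.queue = deque()            # unacked stanzas, oldest first
        self.sent = 0                   # total stanzas sent (the 'h' we expect back)
        self.soft_limit = soft_limit
        self.hard_limit = hard_limit
        self.grace_seconds = grace_seconds
        self.over_limit_since = None    # when the queue first exceeded the soft limit

    def push(self, stanza):
        """Queue one outgoing stanza and return 'ok' or 'kill'."""
        self.queue.append(stanza)
        self.sent += 1
        return self._check()

    def ack(self, h):
        """Handle an <a h='...'/> from the client: drop everything it acked."""
        unacked = max(0, self.sent - h)
        while len(self.queue) > unacked:
            self.queue.popleft()
        if len(self.queue) <= self.soft_limit:
            self.over_limit_since = None

    def _check(self):
        size = len(self.queue)
        if size > self.hard_limit:
            return "kill"               # absolute bound, protects against OOM
        if size > self.soft_limit:
            now = time.monotonic()
            if self.over_limit_since is None:
                self.over_limit_since = now
            elif now - self.over_limit_since > self.grace_seconds:
                return "kill"           # the burst did not drain in time
        return "ok"
```

The hard limit still bounds memory use, while the grace window gives a client that just joined a dozen MUCs a chance to consume and ack the burst before the server gives up on it.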
  150. test2 has joined
  151. Holger I mean if you do nothing it's absolutely trivial to OOM-kill the entire server.
  152. Ge0rG Holger: BTDT
  153. Zash Ge0rG: Wasn't that a feedback loop tho?
  154. Ge0rG Zash: yes, but a queue limit would have prevented it
  155. lovetox has left
  156. Kev It's not clear to me how you solve that problem reasonably.
  157. Ge0rG Keep an eye on the largest known MUCs, and make the limit slightly larger than the summed occupant counts of the top 5 rooms
  158. Ge0rG And by MUCs, I also mean bridged rooms of any sort
  159. Holger Get rid of stream management :-)
  160. Zash Get rid of the queue
  161. Ge0rG Get rid of clients
  162. Kev I think this is only related to stream management, no? You end up with a queue somewhere?
  163. Zash Yes! Only servers!
  164. jonas’ Zash, peer to peer?
  165. Zash NOOOOOOOO
  166. Ge0rG Holger, Zash: we could implement per-JID s2s backpressure
  167. Zash Well no, but yes
  168. Holger Kev: You end up with stored MAM messages.
  169. Ge0rG s2s 0198, but scoped to individual JIDs
  170. Ge0rG also that old revision that allowed requesting throttling from the remote end
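Per-JID s2s backpressure is not specified anywhere; as a rough illustration of the idea only, the sketch below (all names hypothetical) gives each local recipient its own token bucket, so one congested session can have its stanzas held back without throttling everyone else sharing the s2s link.

```python
import time

class PerJidBucket:
    """Hypothetical per-recipient token bucket on an s2s link: stanzas for one
    congested local JID are held back while traffic for other JIDs on the same
    link keeps flowing. This is a thought experiment, not an existing XEP."""

    def __init__(self, rate_per_s=20.0, burst=200):
        self.rate = rate_per_s          # stanzas per second the recipient can take
        self.burst = burst              # short bursts above the rate are fine
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets = {}  # bare recipient JID -> PerJidBucket

def route_inbound(stanza, deliver, hold):
    """Route one inbound s2s stanza (a dict with a 'to' key): deliver it if the
    recipient's bucket has tokens, otherwise hand it to 'hold' for later."""
    bucket = buckets.setdefault(stanza["to"], PerJidBucket())
    (deliver if bucket.allow() else hold)(stanza)
```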
  171. Zash You could make it so that resumption is not possible if there's more unacked stanzas than a (smaller) queue size
  172. Zash At some point it's going to be just as expensive to start over with a fresh session
  173. Ge0rG Zash: a client that auto-joins a big MUC on connect will surely cope with such invisible limits
  174. Holger Where you obviously might want to implement some sort of disk storage quota, but that's less likely to be too small for clients to cope. Also the burst is often just presence stanzas, which we might be able to reduce/avoid some way.
  175. Zash Soooo, presence based MUC is the problem yet again
  176. Holger Anyway, until you guys fixed all these things for me, I'll want to have a queue size limit :-)
  177. Zash I remember discussing MUC optimizations, like skipping most initial presence for large channels
  178. Ge0rG we need incremental presence updates.
  179. Holger ejabberd's room config has an "omit that presence crap altogether" knob. I think p1 customers usually press that and then things suddenly work.
  180. eta isn't there a XEP for room presence list deltas
  181. eta I also don't enjoy getting megabytes of presence upon joining all the MUCs
  182. Zash eta: Yeah, XEP-0436 MUC presence versioning
  183. Beherit has joined
  184. eta does anyone plan on implementing it?
  185. pulkomandy has left
  186. pulkomandy has joined
  187. Zash I suspect someone is. Not me tho, not right now.
  188. pulkomandy has left
  189. pulkomandy has joined
  190. pulkomandy has left
  191. pulkomandy has joined
  192. Zash Having experimented with presence deduplication, I got the feeling that every single presence stanza is unique, making deltas pretty large
  194. Beherit has left
  195. eta oh gods
  196. marc0s has left
  197. marc0s has joined
  198. Zash And given the rate of presence updates in the kind of MUC where you'd want optimizations... not sure how much deltas will help.
  199. Holger Yeah I was wondering about the effectiveness for large rooms as well.
  200. Zash Just recording every presence update and replaying it like MAM sure won't do. Actual diff will be better, but will it be enough?
  201. Zash Would be nice to have some kind of numbers
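Getting "some kind of numbers" does not need protocol changes; a throwaway script over an existing presence dump is enough. The sketch below assumes a hypothetical log format of (occupant, payload) pairs and simply compares how many bytes a naive delta would save versus resending every presence in full.

```python
def delta_stats(presence_log):
    """presence_log: iterable of (occupant_jid, payload) pairs in arrival order,
    a hypothetical dump format rather than any real module's output. Returns
    (full_bytes, delta_bytes): the cost of resending every presence in full
    versus only sending payloads that differ from the previous one seen."""
    last = {}
    full_bytes = 0
    delta_bytes = 0
    for occupant, payload in presence_log:
        full_bytes += len(payload)
        if last.get(occupant) != payload:   # changed, so a delta must carry it
            delta_bytes += len(payload)
        last[occupant] = payload
    return full_bytes, delta_bytes

# Made-up example: if most stanzas really are unique, as suspected above,
# delta_bytes stays close to full_bytes and versioning saves little.
log = [("room@muc.example/alice", "<presence/>"),
       ("room@muc.example/alice", "<presence/>"),   # unchanged, delta skips it
       ("room@muc.example/bob", "<presence><show>away</show></presence>")]
print(delta_stats(log))
```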
  202. Ge0rG So we need to split presence into "room membership updates" and "live user status updates"?
  203. Zash MIX?
  204. drops has left
  205. Zash Affiliation updates and quits/joins are easy enough to separate
  206. Ge0rG and then we end up with matrix-style rooms, and some clients joining and leaving the membership all the time
  207. Zash So we have affiliations, currently present nicknames (ie roles) and presence updates
  208. Beherit has joined
  209. Zash I've been thinking along the lines of that early CSI presence optimizer, where you'd only send presence for "active users" (spoke recently or somesuch). Would be neat to have a summary-ish stanza saying "I just sent you n out of m presences"
  210. pulkomandy has left
  211. pulkomandy has joined
  212. Zash You could also ignore pure presence updates from unaffiliated users and that kind of thing
  213. pulkomandy has left
  214. pulkomandy has joined
  215. Beherit has left
  216. Ge0rG also you only want to know the total number of users and the first page full of them, the other ones aren't displayed anyway ;)
  217. Zash Yeah
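Zash's "n out of m presences" summary combined with Ge0rG's "first page plus total count" amounts to a small selection policy at join time. A sketch with hypothetical data structures (this is not XEP-0436 or MIX, just the shape of the idea):

```python
import time

def presences_to_send(occupants, recent_speakers, page_size=50, active_window=3600):
    """occupants: dict nick -> presence payload; recent_speakers: dict
    nick -> unix timestamp of their last message. Send the first page of the
    occupant list plus anyone who spoke within the window, and report
    'n out of m' so the client knows the list is partial."""
    now = time.time()
    chosen = set(list(occupants)[:page_size])
    chosen |= {nick for nick, ts in recent_speakers.items()
               if nick in occupants and now - ts < active_window}
    summary = {"sent": len(chosen), "total": len(occupants)}
    return [(nick, occupants[nick]) for nick in chosen], summary
```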
  218. test2 has left
  219. Beherit has joined
  220. flow Zash> Soooo, presence based MUC is the problem yet again. I think the fundamental design problem is pushing stanzas instead of recipients requesting them. Think, for example, of a participant in a high-traffic MUC using a low-throughput connection (e.g. GSM). That MUC could easily kill the participant's connection
  221. paul has left
  222. serge90 has joined
  223. Zash You do request them by joining.
  224. paul has joined
  225. flow Zash, sure, let me clarify: requesting them in smaller batches (e.g. MAM pagination style)
  227. Zash You just described how Matrix works btw
  228. flow I did not know that, but it appears like one (probably sensible) solution to the flow control / traffic management problem we have
  229. test2 has joined
  230. lovetox has joined
  231. jonas’ or like MIX ;D
  232. Ge0rG let's just do everything in small batches.
  233. flow correct me if I am wrong, but MIX's default modus operandi is still to fan-out all messages
  234. jonas’ I think only if you subscribe to messages
  235. jonas’ also, I thought we were talking about *presence*, not messages.
  236. flow I think the stanza kind does not matter
  237. flow if someone sends you stanzas at a higher rate than you can consume, some intermediate queue will fill
  238. jonas’ yeah, well, that’s true for everything
  239. flow hence I wrote "fundamental design problem"
  240. jonas’ I can see the case for MUC/MIX presence because that’s a massive amplification (you send single presence, you get a gazillion and a continuous stream back)
  241. jonas’ yeah, no, I don’t believe in polling for messages
  242. Kev The main issue is catchup.
  243. jonas’ if you’re into that kind of stuff, use BOSH
  244. flow I did not say anything about polling
  245. Kev Whether when you join you receive a flood of everything, or whether you request stuff when you're ready for it, in batches.
  246. Kev Using MAM on MIX is meant to give you the latter.
  247. flow and yes, the problem is more likely caused by presence stanzas, but could be caused by IQs or messages as well
  248. Kev If you have a room that is itself generating 'live' stanzas at such a rate that it fills queues, that is also a problem, but is distinct from the 'joining lots of MUCs disconnects me' problem.
  249. flow Kev, using the user's MAM service or the MIX channel's MAM service?
  250. Kev Both use the same paging mechanic.
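The paging mechanic Kev refers to is MAM with RSM (XEP-0313 / XEP-0059): the client asks for a bounded page, processes it, and only then asks for the next one. The client-side loop looks roughly like the sketch below, with query_page standing in for the actual IQ round-trip (a hypothetical callable, not a real library API):

```python
def catch_up(query_page, page_size=50):
    """Pull archived messages in small batches instead of accepting a flood.
    query_page(after_id, max_items) is a stand-in for a MAM query carrying an
    RSM <set/> with <after/> and <max/>; it must return (messages, last_id,
    complete). The client controls the pace simply by deciding when to ask
    for the next page."""
    after = None
    while True:
        messages, after, complete = query_page(after, page_size)
        for msg in messages:
            handle(msg)          # hypothetical: hand the message to UI/storage
        if complete:             # archive reports nothing newer is left
            break

def handle(msg):
    print("got:", msg)
```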
  251. jonas’ 12:41:06 flow1> Zash, sure, let me clarify: requesting them in smaller batches (e.g. MAM pagination style) how is that not polling then?
  252. jonas’ though I sense that this is a discussion about semantics I don’t want to get into right now.
  253. flow right, I wanted to head towards the question on how to be notified that there are new messages that you may want to request
  254. jonas’ by receiving a <message/> with the message.
  255. flow that does not appear to be a solution, as you easily run into the same problem
  256. jonas’ [citation needed]
  257. flow I was thinking more along the lines of infrequent/slightly delayed notifications with the current stanza/message head ID(s)
  258. Holger MAM/Sub!
  260. flow but then again, it does not appear to be an elegant solution (or potentially is no solution at all)
  261. Beherit has left
  262. Beherit has joined
  263. Beherit has left
  264. Zash Oh, this is basically the same problem as IP congestion, is it not?
  265. Beherit has joined
  266. Zash And the way to solve that is to throw data away. Enjoy telling your users that.
  267. Zash > The main issue is catchup. This. So now you'll have to figure out what data got thrown away and fetch it.
  268. Zash (Also how Matrix works.)
  269. lovetox has left
  270. eta the one thing that may be good to steal from matrix is push rules
  271. eta i.e. some server side filtering you can do to figure out what should generate a push notification
  272. Zash Can you rephrase that in a way that doesn't make me want to say "but they stole this from us"
  273. eta well so CSI filtering is an XMPP technology, right
  274. eta but there's no API to extend it
  275. eta like you can't say "please send me everything matching the regex /e+ta/"
  276. Zash "push rules" meaning what, exactly?
  277. pep. Zash: it's just reusing good ideas :p
  278. Zash You said "push notifications", so I assumed "*mobile* push notifications"
  279. Ge0rG Zash: a filter that the client can define to tell the server what's "important"
  280. Zash AMP?
  281. eta Zash, so yeah, push rules are used for mobile push notifications in Matrix
  282. Zash Push a mod_firewall script? 🙂
  283. Ge0rG for push notifications, the logic is in the push server, which is specific to the client implementation
  284. Zash eta: So you mean user-configurable rules?
  285. eta Zash, yeah
  286. Ge0rG not rather client-configurable?
  287. eta I mean this is ultimately flawed anyway because e2ee is a thing
  288. Zash Everything is moot because E2EE
  289. Ge0rG I'm pretty sure there is no place in matrix where you can enter push rule regexes
  290. pulkomandy Is the problem really to be solved on the client-server link? What about some kind of flow control on the s2s side instead? (no idea about the s2s things in xmpp, so maybe that's not doable)
  291. eta Ge0rG, tada https://matrix.org/docs/spec/client_server/r0.6.1#m-push-rules
  292. Zash Ge0rG: Keywords tho, which might be enough
  293. serge90 has left
  294. eta you can have a "glob-style pattern"
  295. Zash Ugh
  296. Ge0rG eta: that's not what I mean
  297. Ge0rG eta: show me a Riot screenshot where you can define those globs
  298. eta Ge0rG, hmm, can't you put them into the custom keywords field
  299. pulkomandy If you try to solve it on the client side you will invent something like TCP windows. Which is indeed a way to solve IP congestion, but doesn't work here because congestion on the server-to-client socket doesn't propagate to other links
  300. eta doesn't really care about this argument though and is very willing to just concede to Ge0rG :p
  301. Zash What was that thing in XEP-0198 that got removed? Wasn't that rate limiting?
  302. Ge0rG Zash: yes
  303. eta I think the presence-spam-in-large-MUCs issue probably needs some form of lazy loading, right
  304. eta like, send user presence before they talk
  305. eta have an API (probably with RSM?) to fetch all user presences
  306. Zash eta: Yeah, that's what I was thinking
  307. eta the matrix people had pretty much this exact issue and solved it the same way
  308. Zash Oh no, then we need to do it differently!!11!!11!!1 eleven
  309. eta Zash, it's fine, they use {} brackets and we'll use <> ;P
  310. Zash Phew 😃
  311. eta the issue with lots of messages in active MUCs is more interesting though
  312. eta like for me, Conversations chews battery because I'm in like 6-7 really active IRC channels
  313. eta so my phone never sleeps
  314. eta I've been thinking I should do some CSI filtering, but then the issue is you fill up the CSI queue
  315. Zash A thing I've almost started stealing from Matrix is room priorities.
  316. Zash So I have a command where I can mark public channels as low-priority, and then nothing from those gets pushed through CSI
  317. Ge0rG eta: the challenge here indeed is that all messages will bypass CSI, which is not perfect
  318. eta Zash, yeah, there's that prosody module for that
  319. Ge0rG eta: practically speaking, you might want to have a wordlist that MUC messages must match to be pushed
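What Zash and Ge0rG describe here (low-priority rooms whose traffic stays queued unless it matches a word list) boils down to a per-stanza importance check. A sketch with made-up helper names, deliberately not tied to any server's CSI plugin API:

```python
LOW_PRIORITY_ROOMS = {"bigchannel@conference.example.org"}   # set via command/bookmark
MENTION_WORDS = {"eta", "mynick"}                            # user-chosen word list

def is_important(stanza):
    """Return True if the stanza should pierce CSI immediately, False if it can
    sit in the inactive-client queue. 'stanza' is assumed to be a dict with
    'type', 'from_bare' and 'body' keys; this is the shape of the idea, not a
    specific server's plugin API."""
    if stanza.get("type") != "groupchat":
        return True                                  # 1:1 traffic always wakes the client
    body = (stanza.get("body") or "").lower()
    if stanza.get("from_bare") in LOW_PRIORITY_ROOMS:
        return any(word in body for word in MENTION_WORDS)
    return bool(body)                                # normal rooms: messages yes, presence no
```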
  320. eta I almost feel like the ideal solution is something more like
  321. eta I want the server to join the MUC for me
  322. eta I don't want my clients to join the MUC (disable autojoin in bookmarks)
  323. eta and if I get mentioned or something, I want the server to somehow forward the mentioned message
  324. Ge0rG eta: your client still needs to get all the MUC data, eventually
  325. eta Ge0rG, sure
  326. eta but, like, I'll get the forwarded message with the highlight
  327. eta then I can click/tap on the MUC to join it
  328. Ge0rG eta: so CSI with what Zash described is actually good
  329. eta and then use MAM to lazy-paginate
  330. eta Ge0rG, yeah, but it fills up in-memory queues serverside
  331. Ge0rG eta: but I think that command is too magic for us mortals
  332. goffi has joined
  333. Ge0rG eta: yes, but a hundred messages isn't much in the grand scheme of things
  334. eta Ge0rG, a hundred is an underestimate ;P
  335. eta some of the IRC channels have like 100 messages in 5 minutes or something crazy
  336. Holger https://jabber.fu-berlin.de/share/holger/EuIflBOiuR0UyOtA/notifications.jpeg
  337. Holger C'mon guys this is trivial to solve.
  338. Ge0rG my prosody is currently consuming ~ 500kB per online user
  339. Holger https://jabber.fu-berlin.de/share/holger/aIlgwvzEMWv66zF9/notifications.jpeg
  340. Holger Oops.
  341. eta Zash, also ideally that prosody module would use bookmarks
  342. eta instead of an ad-hoc command
  343. Ge0rG eta: naah
  344. Zash Bookmarks2 with a priority extension would be cool
  345. Ge0rG we need a per-JID notification preference, like "never" / "always" / "on mention" / "on string match"
  346. Ge0rG which is enforced by the server
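Ge0rG's per-JID preference ("never" / "always" / "on mention" / "on string match"), enforced on the server, is essentially a small rule table consulted before a push or CSI flush. A sketch under that assumption, with a hypothetical in-memory store:

```python
import re

# Hypothetical per-account store: bare JID of the contact/room -> (mode, pattern)
PREFS = {
    "friend@example.com": ("always", None),
    "noisy-room@muc.example.org": ("on string match", r"(?i)\bxep-?0?198\b"),
    "announcements@muc.example.org": ("on mention", None),
}

def should_notify(chat_jid, body, my_nick, default="always"):
    """Decide server-side whether an incoming message triggers a push
    notification, so the decision also holds for a battery-saving client
    that never sees the filtered traffic."""
    mode, pattern = PREFS.get(chat_jid, (default, None))
    if mode == "never":
        return False
    if mode == "always":
        return True
    if mode == "on mention":
        return my_nick.lower() in (body or "").lower()
    if mode == "on string match" and pattern:
        return re.search(pattern, body or "") is not None
    return False
```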
  347. eta Ge0rG: that's a different thing though
  348. Ge0rG eta: is it really?
  349. Ge0rG eta: for mobile devices, CSI-passthrough is only relevant for notification causing messages
  350. eta Ge0rG: ...actually, yeah, I agree
  351. Ge0rG you want to get pushed all the messages that will trigger a notification
  352. serge90 has joined
  353. Ge0rG which ironically means that all self-messages get pushed through so that the mobile client can *clear* notifications
  354. Ge0rG which ironically also pushes outgoing Receipts
  355. Ge0rG eta: I'm sure I've written a novel or two on standards regarding that
  356. Ge0rG or maybe just in the prosody issue tracker
  357. Ge0rG eta: also CSI is currently in Last Call, so feel free to add your two cents
  358. Zash Ironically?
  359. Ge0rG isn't going to re-post his "What's Wrong with XMPP" slide deck again
  360. Ge0rG Also the topic of notification is just a TODO there.
  361. Zash Heh
  362. lovetox has joined
  363. Zash > you want to get pushed all the messages that will trigger a notification and that's roughly the same set that you want archived and carbon'd, I think, but not exactly
  364. eta Ge0rG: wait that sounds like an interesting slide deck
  365. eta Zash: wild idea, just maintain a MAM archive for "notifications"
  366. eta I guess a pubsub node would also work
  367. eta and you shove all said "interesting" messages in there
  368. Ge0rG eta: https://op-co.de/tmp/whats-wrong-with-xmpp-2017.pdf
  369. Zash eta: MAM for the entire stream?
  370. Zash Wait, what's "notifications" here?
  371. Zash Stuff that causes the CSI queue to get flushed? Most of that'll be in MAM already.
  372. eta Zash: well mentions really
  373. Ge0rG eta: MAM doesn't give you push though
  374. eta Ge0rG: okay, after reading those slides I'd say that's a pretty good summary and proposal
  375. adiaholic_ has left
  376. adiaholic_ has joined
  377. SouL has left
  378. SouL has joined
  379. kikuchiyo has left
  380. Ge0rG eta: all it needs is somebody to implement all the moving parts
  381. esil has left
  382. SouL has left
  383. SouL has joined
  384. Zash Break it into smaller (no, even smaller!) pieces and file bug reports?
  385. Zash /correct feature requests*
  386. Wojtek has joined
  387. Ge0rG when I break it into such small pieces, the context gets lost
  388. Ge0rG like just now I realized there might be some smarter way to handle "sent" carbons in CSI than just passing them all through
  389. Zash One huge "do all these things" isn't great either
  390. Ge0rG but maybe a sent carbon of a Receipt isn't too bad after all because it most often comes shortly after the original message that also pierced CSI?
  391. Ge0rG did I mention that I'm collecting large amounts of data on the number and reason of CSI wakeups?
  392. Zash Possibly
  393. Ge0rG and that the #1 reason used to be disco#info requests to the client?
  394. Zash Possibly (re carbon-receipts)
  395. Zash Did I mention that I too collected stats on that, until I discovered that storing stats murdered my server?
  396. Ge0rG I'm only "storing" them in prosody.log, and that expires after 14d
  397. Ge0rG but maybe somebody wants to bring them to some use?
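Since the wakeup statistics only live in prosody.log, bringing them "to some use" can be as simple as the counter below. The log line format it matches is an assumption (it depends on which module or patch writes the message), so the regex would need adjusting to the real output:

```python
import re
import sys
from collections import Counter

# Assumed log shape: "... csi: flushing queue for user@host/res, reason: <kind>".
# The exact wording depends on the module or patch doing the logging, so this
# pattern will need adjusting to the real prosody.log output.
REASON = re.compile(r"csi.*reason[:=]\s*(\S+)", re.IGNORECASE)

def count_wakeups(path):
    reasons = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            match = REASON.search(line)
            if match:
                reasons[match.group(1)] += 1
    return reasons

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "prosody.log"
    for reason, n in count_wakeups(path).most_common():
        print(f"{n:8d}  {reason}")
```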
  398. Zash disco#info cache helped a *lot* IIRC
  399. Zash I also found that a silly amount of wakeups were due to my own messages on another device, after which I wrote a grace period thing for that.
  400. Zash IIRC before I got rid of stats collection it was mostly client-initiated wakeups that triggered CSI flushes
  401. Ge0rG Zash: "own messages on other device" needs some kind of logic maybe
  402. Ge0rG like: remember the last message direction per JID, only wake up on outgoing read-marker / body when direction changes?
  403. Zash Ge0rG: Consider me, writing here, right now, on my work station. Groupchat messages sent to my phone.
  404. Ge0rG just waking up on outgoing read-marker / body would be a huge improvement already
  405. Ge0rG Zash: yes, that groupchat message is supposed to clear any pending notification for the groupchat
  406. Ge0rG that = your
  407. kabaka has joined
  408. Zash After the grace period ends, if there were anything high-priority since the last activity from that other client, then it should push.
  409. Zash Not done that yet tho, I think
  410. Zash But as long as I'm active at another device, pushing to the phone is of no use
  411. kabaka has left
  412. Zash Tricky to handle the case of an incoming message just after typing "brb" and grabbing the phone to leave
  413. Zash Especially with a per-stanza yes/no/maybe function, it'll need a "maybe later" response
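The grace-period behaviour Zash describes, including the "maybe later" outcome for a per-stanza yes/no/maybe function, could look roughly like this (hypothetical names, not an existing Prosody module):

```python
import time

GRACE_SECONDS = 300   # how long activity on another device suppresses pushes

class GracePeriod:
    """Per-account state, hypothetical names: while another resource was
    recently active, answer 'maybe later' and remember whether anything
    important arrived, so the queue can still be flushed once the grace
    window closes."""

    def __init__(self):
        self.other_active_at = 0.0
        self.pending_important = False

    def other_device_active(self):
        """Call whenever a different resource of the same account does something."""
        self.other_active_at = time.monotonic()

    def decide(self, important):
        """Per-stanza yes/no/maybe decision for the sleeping device."""
        if not important:
            return "no"
        if time.monotonic() - self.other_active_at < GRACE_SECONDS:
            self.pending_important = True
            return "maybe later"        # hold it, re-evaluate when grace ends
        return "yes"

    def on_grace_expired(self):
        """Called by a timer when the grace window closes; True means flush/push now."""
        flush, self.pending_important = self.pending_important, False
        return flush
```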
  414. ralphm has left
  415. pulkomandy has left
  416. pulkomandy has joined
  417. pulkomandy has left
  418. pulkomandy has joined
  419. Ge0rG Zash: yeah. Everything is HARD
  420. paul has left
  421. waqas has joined
  422. eta also, for all Slack's complicated diagrams, their notifications don't even work properly either
  423. eta like it doesn't dismiss them on my phone, etc
  424. ralphm has joined
  425. pulkomandy has left
  426. pulkomandy has joined
  427. pulkomandy has left
  428. pulkomandy has joined
  429. pulkomandy has left
  430. pulkomandy has joined
  431. pulkomandy has left
  432. pulkomandy has joined
  433. test2 has left
  434. test2 has joined
  435. adrien has joined
  436. debacle has left
  437. adrien has left
  438. adrien has joined
  439. adrien has left
  440. adrien has joined
  441. adrien has left
  442. adrien has joined
  443. lovetox has left
  444. paul has joined
  445. Lance has joined
  446. Seb has joined
  447. Seb has left
  448. lovetox has joined
  449. test2 has left
  450. serge90 has left
  451. marc0s has left
  452. marc0s has joined
  453. debacle has joined
  454. adiaholic_ has left
  455. adiaholic_ has joined
  456. sonny has left
  457. sonny has joined
  458. sonny has left
  459. sonny has joined
  460. eta has left
  461. eta has joined
  462. waqas has left
  463. Yagizа has left
  464. flow Zash> And the way to solve that is to throw data away. Enjoy telling your users that. I'd say that's where there is TCP on top of IP (where I'd argue, the actual congestion and traffic flow control happens)
  465. Lance has left
  466. Zash flow: With TCP, same as XMPP, you just end up filling up buffers and getting OOM'd
  467. flow Zash, I don't think those two are really comparable: with tcp you have exactly two endpoints, with xmpp one entity communicates potentially with multiple endpoints (potentially over multiple different s2s links)
  470. Zash (me says nothing about mptcp)
  471. Zash So what Ge0rG said about slowing down s2s links?
  472. flow I did not read the full backlog, could you summarize what Ge0rG said?
  473. flow (otherwise I have to read it first)
  474. Zash 13:31:21 Ge0rG "Holger, Zash: we could implement per-JID s2s backpressure"
  475. flow besides, aren't there still only two endpoints involved in MPTCP (but potentially using multiple paths)?
  477. flow I am not sure if that is technically possible; the "per-JID" part alone could be tricky
  478. flow it appears that implementing backpressure would likely involve signalling back to the sender, but what if the path to the sender is also congested?
  479. Zash I'm not sure this is even doable without affecting other users of that s2s link
  480. flow as of now, the only potential solution I could come up with is keeping the state server-side and having servers notify clients when the state changes, so that clients can sync whenever they want, and especially as fast as they want
  481. flow but that does not solve the problem for servers with poor connectivity
  482. jonas’ let’s change xmpp-s2s to websockets / http/3 or whatever which supports multiple streams and will of course solve the scheduling issue of streams competing for resources and not at all draw several CVE numbers in that process :)
  483. Zash Not impossible to open more parallel s2s links...
  484. jonas’ one for each JID? :)
  485. jonas’ one for each local JID? :)
  486. Zash Heh, you could open a secondary one for big bursts of stanzas like MUC joins and MAM ....
  487. Zash Like I think there were thoughts in the past about using a secondary client connection for vcards
  488. jonas’ haha wat
  489. Beherit has left
  490. Beherit has joined
  491. Lance has joined
  492. Zash Open 2 c2s connections. Use one as normal (presence, chat etc.), except send some requests, like for vcards, over the other one, since those often contain big binary blobs that then wouldn't block the main connection :)
  493. Lance has left
  494. Lance has joined
  495. goffi has left
  496. Beherit has left
  497. Beherit has joined
  498. sonny has left
  499. sonny has joined
  500. lovetox has left
  501. Beherit has left
  502. Beherit has joined
  503. sonny has left
  504. sonny has joined
  505. debacle has left
  506. debacle has joined
  507. xecks has left
  508. test2 has joined
  509. test2 has left
  510. test2 has joined
  511. debacle has left
  512. pulkomandy Well… at this point you may start thinking about removing TCP (its flow control doesn't work in this case anyway) and doing something like XML over UDP instead?
  513. test2 has left
  514. test2 has joined
  515. test2 has left
  516. Zash At some point it stopped being XMPP
  517. eta has left
  518. eta has joined
  519. moparisthebest QUIC solves this
  520. Wojtek has left
  521. SouL has left
  522. test2 has joined
  523. test2 has left
  524. test2 has joined
  525. test2 has left
  526. test2 has joined
  527. waqas has joined