jdev - 2020-04-17


  66. lovetox hm lib dev question
  67. lovetox if a lib has a JID object, and has a getBare() method
  69. lovetox what should it return if the jid is only a domain for example "asd.com"
  70. lovetox it would be wrong to return asd.com
  71. lovetox should it raise an exception, NoBareJID or something?
  72. Link Mauve lovetox, why would it be wrong? asd.com is a valid bare JID.
  73. lovetox or is the localpart not mandatory on a bare jid
  74. lovetox ah ok nice
  75. Link Mauve asd.com/foo is a valid full JID.
  76. lovetox so barejid, is just without resource
  77. Link Mauve Yes.
  78. lovetox ah k thanks
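
A minimal Python sketch of the bare/full JID semantics Link Mauve describes (the JID class and method names are illustrative, not any particular library's API): the localpart is optional, so a domain-only bare JID needs no special casing and no exception.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class JID:  # hypothetical class, not a real library's API
        domain: str
        localpart: Optional[str] = None
        resource: Optional[str] = None

        def get_bare(self) -> str:
            # A bare JID is [localpart@]domain; "asd.com" is itself valid.
            return f"{self.localpart}@{self.domain}" if self.localpart else self.domain

        def get_full(self) -> str:
            # A full JID is a bare JID plus a resource.
            return f"{self.get_bare()}/{self.resource}" if self.resource else self.get_bare()

    assert JID("asd.com").get_bare() == "asd.com"                  # valid bare JID
    assert JID("asd.com", resource="foo").get_full() == "asd.com/foo"
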
  123. jonas’ that ^
  137. flow lovetox, http://jxmpp.org/releases/0.7.0-alpha5/javadoc/org/jxmpp/jid/Jid.html https://github.com/igniterealtime/jxmpp#jxmpp-jid
  301. lovetox how likely is it that 2 different hash mechanisms produce the same hash
  302. lovetox like entity caps, it's mostly sha-1
  303. lovetox but theoretically you can also use something else
  304. lovetox right now i store the hash mechanism beside the hash
  305. Link Mauve lovetox, XEP-0390 fixed this issue AFAIK.
  306. lovetox but i wonder if i can just not store the hash mech
  307. jonas’ lovetox, it is unlikely. but it’s stupid to not store the mechanism, too.
  308. jonas’ it only costs a few bytes
  310. Link Mauve Can get down to a single byte if you convert the string into an enum.
  312. Link Mauve Assuming you know the mechanisms you can use.
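
Link Mauve's single-byte point as a sketch (illustrative Python; the mechanism numbering is made up): once the mechanism string maps to an enum, storing it beside the hash costs one byte.

    import struct
    from enum import IntEnum

    class HashMech(IntEnum):  # made-up values; fix them once and never reuse
        SHA_1 = 1
        SHA_256 = 2

    def pack_caps_key(mech: HashMech, digest: bytes) -> bytes:
        # one byte for the mechanism, then the raw digest
        return struct.pack("B", mech) + digest

    def unpack_caps_key(blob: bytes) -> tuple:
        return HashMech(blob[0]), blob[1:]
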
  313. jonas’ there was a thing...
  314. lovetox its not about storage cost
  315. lovetox i have a 115 cache
  316. lovetox which consists of hashmethod/hash -> discoinfo
  317. lovetox but then there are entities we have to query and cache which provide no hash or have no presence at all
  318. lovetox like for example a muc
  319. lovetox i need to also store this disco info data
  320. lovetox but its hard to use the same cache
  321. lovetox because for muc i need a JID -> DiscoInfo kind of cache
  322. lovetox and the fact that i need 2 different caches for discoinfo data .. somehow i dont like it
  323. flow but that's how it is
  324. jonas’ lovetox, in aioxmpp, we listen on presence info and prime the JID -> DiscoInfo cache from the hash -> discoinfo cache
  326. flow or, are you thinking about a generic String→ disco#info cache?
  327. jonas’ that means that discoinfo lookups themselves only ever use the JID -> DiscoInfo cache
  328. flow guess one could do that, but I personally wouldn't do so
  329. jonas’ which is pre-populated on the fly by the entitycaps component which listens for presence
  330. flow especially since you could persist the caps cache to disk, while the jid→disco#info cache is somewhat unreliable and should have a sensible timeout (smack uses 24h)
  331. lovetox ah nice so when you need caps, you always access only the JID -> Disco info cache
  332. lovetox thats exactly what i was searching for
  333. jonas’ lovetox, correct
  334. lovetox thanks
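
A rough sketch of the two-cache arrangement jonas’ describes (all names here are illustrative, not aioxmpp's actual API): the entity caps code listens for presence and primes the JID cache from the hash cache, so lookups only ever consult JID -> DiscoInfo.

    class DiscoInfo:  # placeholder for parsed disco#info data
        pass

    def send_disco_info_query(jid: str) -> DiscoInfo:
        raise NotImplementedError  # stands in for an actual disco#info IQ round-trip

    # (hash_algo, caps_hash) -> DiscoInfo; safe to persist across restarts
    caps_cache: dict = {}
    # jid -> DiscoInfo; primed from caps_cache, otherwise filled by live queries
    jid_cache: dict = {}

    def on_presence(jid: str, hash_algo: str, caps_hash: str) -> None:
        # entity caps handler: prime the JID cache when presence carries a known hash
        info = caps_cache.get((hash_algo, caps_hash))
        if info is not None:
            jid_cache[jid] = info

    def disco_info(jid: str) -> DiscoInfo:
        # lookups go through the JID cache only, falling back to a live query
        if jid not in jid_cache:
            jid_cache[jid] = send_disco_info_query(jid)
        return jid_cache[jid]
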
  335. flow jonas’, does that jid→disco#info cache only include values obtained via caps?
  337. jonas’ flow, no
  338. flow or do you also put in results of actual disco#info requests?
  340. jonas’ in fact, the jid->disco#info cache is in reality a JID -> Future(disco#info) cache
  341. lovetox yes thats the goal flow, often i have to disco info instances that dont have presence
  342. lovetox and i save that in a cache and store to disk
  347. jonas’ lovetox, I’m not sure storing the jid->disco#info cache on disk is a good idea
  348. flow store ephemeral data to disk?
  349. lovetox for example a transport which im not subscribed to, if i load my roster i want to know the identity type so i can display icons that match
  350. jonas’ that ^
  351. jonas’ right, but treat it as stale and look it up at some point if you’ve loaded it from disk
  352. lovetox of course, you have a last-seen attr and periodically request again
  353. jonas’ :+1:
  354. lovetox all disco info is stale the second you received it, so this has nothing to do with application restarts
  355. jonas’ sure, but the longer you wait, the staler it gets :)
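
One way to apply the "treat disk-loaded entries as stale" rule, sketched with a last-seen timestamp (the 24h figure mirrors the Smack expiry flow mentions; everything else is illustrative):

    import time

    MAX_AGE = 24 * 3600  # e.g. the 24h expiry flow mentions Smack using

    def get_disco_info(jid, cache, refresh):
        # cache maps jid -> (info, last_seen); refresh(jid) does a live disco#info query
        entry = cache.get(jid)
        if entry is None or time.time() - entry[1] > MAX_AGE:
            info = refresh(jid)
            cache[jid] = (info, time.time())
            return info
        return entry[0]
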
  356. flow plus with caps+presence you will get push notifications if disco#info changes
  357. jonas’ flow, not for MUC services for example
  358. flow and can then react on that and invalide your cache etc
  359. jonas’ which is what he’s talking about
  360. lovetox flow we are talking specifically about entities that have no presence
  361. flow I know. I just wanted to state that there is a difference between disco#info data with caps and that without
  362. flow for those cases like MUC, smack has a cache which expires its entries after 24h
  363. flow but actually, I am always pondering that
  366. flow assume we one day live in a world where the xmpp service address hosts most services (which I think is desirable for the services where it is possible). and your service operator updates the service, then you will potentially only become aware of that new feature after 24h
  367. flow stream:features to the rescue
  369. jonas’ it’d be nice to have 115/390-like push for services, too
  370. flow wouldn't that be the case if you are subscribed to the presence?
  371. lovetox for muc this somewhat exists
  372. flow they just have to add xep115/390 metadata
  373. lovetox its called notification of configuration change
  374. lovetox i always disco after it
  375. lovetox because the notification does not tell you what changed
  376. jonas’ flow, you’re assuming that all services expose presence
  377. flow so it is probably not an issue of a spec filling a hole, but of implementations just doing that
  378. flow jonas’, well services need to know the subscribed entities to push 115/390 data to
  379. jonas’ that is true
  380. jonas’ I question whether that needs to be presence though
  381. flow and I think there is nothing wrong with just re-using presence for that?
  382. flow I can see the argument of a polluted roster
  383. jonas’ I think there’s some reason in not using presence for this at all
  384. flow but I wouldn't buy it
  385. jonas’ that, too
  386. jonas’ not for service-to-client, but for client-to-client presence, it quickly gets expensive to have all that stuff in the presence stanza
  388. jonas’ so I wonder whether more fine-grained notification models wouldn’t make more sense
  390. flow hmm I'm sorry I can't follow, we where talking about services using caps to "push" diso#info to interested parties, and now you switch to c2s presences?
  391. jonas’ flow, yes
  392. jonas’ I’m questioning using presence for caps
  393. jonas’ because presence is overloaded
  394. jonas’ this is mostly relevant for c2s presences
  395. jonas’ (or, more specifically, for client-to-client presences)
  396. flow so you basically want to re-open the old discussion between peter and phippo about presence floods? ;)
  397. jonas’ maybe :)
  398. flow I mean clients do not change their disco#info feature often, do they?
  399. jonas’ exactly
  400. jonas’ that’s why *not* having this in presence would be good
  402. flow please elaborate, because I think having client-specific caps in presence is potentially the only sensible thing in presence these days
  403. jonas’ avatar hashes exist too
  404. jonas’ GPG also
  405. flow well those should probably go in PEP
  406. jonas’ exactly
  407. flow and openpgp is already in PEP
  408. jonas’ I still see it in presence stanzas
  409. flow (if you use a modern XEP ;;)
  410. jonas’ just like avatar hashes
  411. jonas’ yeah, well
  412. jonas’ the question is, if we’ve gone through the effort to move this rarely-changing data out of <presence/>, shouldn’t we also move that other rarely-changing data (ecaps) out of presence?
  414. jonas’ what is the rationale for keeping it there?
  415. flow ok, so from a protocol PoV we are fine (at least in these areas), seems to be more of an implementation-is-missing issue
  417. flow jonas’, ahh, I was not thinking that the frequency of change should be a criterion here
  418. flow I was more thinking of "is this client specific" as the criterion
  419. jonas’ aha
  420. flow I don't think you want different avatars for different clients
  421. jonas’ I was coming from the "rarely-changing data in a stanza which is often sent is a waste of bandwidth" angle
  422. flow (of course, if you ask enough people, some people will say they want this…)
  424. jonas’ yeah, the per-client-ness is an argument pro presence
  425. jonas’ though we’ve already had enough arguments for the case that per-client caps are rarely useful and most of the time you’ll need something like an aggregated caps over all clients of the user (both min and max aggregates)
  426. jonas’ and those caps could be distributed by the server in a non-presence vehicle and also contain full caps hashes for the individual clients which are currently online.
  427. flow yes, but, even if per-client caps are rarely useful, which I am not sure I would agree with, I do not see this as an argument to remove them
  428. flow of course, what we have discussed regarding per-account caps still appears desirable
  429. flow and we should move towards that
  430. jonas’ even if per-client caps are useful (which they sometimes are, I agree), the question is whether they belong in presence
  431. flow and maybe, just maybe, we will discover that per-client caps are no longer needed, but then they will probably vanish automatically
  432. flow yes, but I am not sure if this is the question we should answer right now
  434. flow it appears as something deeply baked into the core of xmpp
  435. jonas’ not really, it’s in '115
  436. jonas’ that’s not that deep
  437. flow and since they rarely change, i feel like it is not worth any effort getting rid of them
  439. flow but if you want to work on a spec which puts those into something else (PEP?), then please do
  440. jonas’ no, that they rarely change is a reason to move them out of presence
  441. jonas’ if they changed "sometimes", presence would be a good place. if they changed "often", presence would be a terrible place.
  443. jonas’ if they change "rarely", they are dead weight in most of the presence broadcasts which happen
  445. jonas’ (of course, if they change "often", they cause unnecessary presence broadcasts, which is arguably much worse)
  446. jonas’ flow, yeah, I’ve been pondering that for '390, which is why I take the opportunity to discuss this
  449. flow ahh, you are not worried about caps triggering additional presence broadcasts, but the mere existence of caps in every presence
  450. jonas’ exactly
  452. flow tbh i never thought of this as something of a heavy burden
  453. jonas’ it gets heavier when you introduce hash agility (like '390) and modern hash functions
  454. flow I personally wouldn't invest time to improve here
  455. jonas’ stuff gets more numerous and longer
  456. flow true, but I do not think that we will change hashes often
  457. jonas’ I’m not so sure of that
  458. flow sha1 has served us well for what? a decade or so?
  459. jonas’ and even if we don’t change them *often*, the transition period may well be a decade of sending two hashes with each presence
  460. jonas’ because oldstable
  461. flow obviously
  462. flow but if I had to guess i'd say we see 4-5 caps variants per presence at most
  463. jonas’ which is quite a lot
  464. flow which surely is not optimal, but something I could live with
  465. flow jonas’, if you really want to reduce wire bytes invent an XML extension where the end element is always empty ;)
  466. jonas’ ITYM EXI
  467. Syndace alright this is getting WAY too close to what we discussed for OMEMO just minutes ago, I have to jump in with something slightly off-topic
  471. Syndace We have the problem that for OMEMO, you subscribe to a PEP node of each of the contacts you want to encrypt with. And then we're flooded with PEP updates on each connect, because PEP sends an update automatically on connect (right?). We were thinking about compressing stuff with EXI 😀
  472. jonas’ Syndace, EXI needs to be done on the stream level
  473. flow Syndace, a common pattern is to split PEP data into a hash and the actual data
  474. jonas’ it would be cool if PEP services could do that
  475. jonas’ that would solve the race condition issues around that
  476. Syndace Nah, EXI is just compression for XML, not talking about the EXI XEP but the EXI technology
  477. flow that way you only get the hashes on connect, which may already help a lot
  478. jonas’ Syndace, EXI generates binary output though, and I suspect you lose 99% of the advantages if you have to wrap it in base64 again.
  479. Syndace Yeah that would actually help a lot
  480. flow not sure if this is possible in your case though, would need to have a closer look
  481. jonas’ and if every client comes with an EXI implementation, we’re half way to being able to use EXI on c2s, which would be amazing
  482. Syndace Did a bit of research on available EXI implementations. It doesn't look super good but there are open implementations for C, Java, JavaScript and C#, though I can't say anything about the quality of those. Seem maintained at least.
  483. jonas’ I hadn’t checked that far yet
  484. jonas’ I only checked if libxml supports it (which it doesn’t)
  485. flow as much as I like EXI, I am sceptical whether this is the right solution to your problem
  486. Syndace flow I think that should be possible for OMEMO device lists, I'll mention it as a possible solution, thanks 🙂
  487. Syndace EXI could be used to compress bodies in general too, not only for the PEP node content
  488. jonas’ we should really discuss letting PEP services hash the contents of nodes
  489. Syndace And we encrypt the bodies as binary data anyway (SCE), so we don't have to base64 stuff there
  490. jonas’ but that’d require c14n, and nobody wants to go near that in XML :)
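
To make the c14n concern concrete: two equally valid serializations of the same logical element hash differently, so "hash the node contents" needs an agreed canonical form first (illustrative):

    import hashlib

    # Same logical XML, two valid serializations (attribute order, quoting):
    first = b"<device id='1' label='phone'/>"
    second = b'<device label="phone" id="1"/>'

    # The digests differ even though the meaning is identical:
    assert hashlib.sha1(first).hexdigest() != hashlib.sha1(second).hexdigest()
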
  492. jonas’ Syndace, yes
  493. flow Syndace, tell them the OpenPGP XEP said hello (that is where the idea came from)
  496. Syndace ...should really read the openpgp xep again 😀
  497. flow jonas’> we should really discuss letting PEP services hash the contents of nodes
  +1
  498. Syndace yeah that would be awesome
  499. flow a generic optional mechanism where the push only contains "a hash" would be nice
  500. flow could potentially be as easy as s/+notify/+notify-hash/
  501. flow great, now I wanna write a new protoxep
  502. flow when I actually wanted to go into the hammock
  503. Martin You can't do both? Write in the hammock?
  504. jonas’ flow, go ahead
  507. Syndace > could potentially be as easy as s/+notify/+notify-hash/
  damn that sounds soo good
  508. jonas’ +notify-hash-hash-hash-hash-hash-ha…
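
What flow's s/+notify/+notify-hash/ idea would look like in a client's feature list (sketch; "+notify-hash" is a hypothetical variant from this discussion, not a published XEP):

    # Today: these disco#info features request full payloads in PEP notifications.
    features = [
        "urn:xmpp:avatar:metadata+notify",
        "eu.siacs.conversations.axolotl.devicelist+notify",
    ]

    # Hypothetical variant: same nodes, but notifications would carry only a
    # hash of the current content, to be fetched explicitly when it changed.
    features_hash_only = [f.replace("+notify", "+notify-hash") for f in features]
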
  529. larma flow, why not versioning instead of hashes
  530. larma after all, a few hundred hashes that are the same as last connect also seems kind of wasted...
  531. jonas’ larma, what would the advantage be?
  534. larma jonas’, well if I have >100 contacts, each of them use(d) omemo,avatar,microblog,... and when connecting I +notify-hash, I still receive a few hundred hashes. And often enough those are just the same hash as last time I connected.
  535. larma With some versioning scheme it could be done such that I only get the changes since last connect
  536. jonas’ larma, right
  537. jonas’ I thought versioning per node
  538. jonas’ where you wouldn’t win anything over hashes
  539. jonas’ versioning per contact or globally (by your local server) would win of course
  540. larma true, yeah was considering global versioning
  541. Syndace +notify-hash-$LASTHASH
  542. Syndace 😀
  543. Zash Uh, what have I missed here‽
  544. jonas’ Syndace, you do realize that that immediately causes a loop? :)
  545. jonas’ ah, no, not a loop
  546. jonas’ but still terrible things™
  547. Syndace no I don't actually?
  549. larma Syndace, that wouldn't work because what would you hash, after all you receive multiple different nodes based on that notify
  550. jonas’ Syndace, it’s part of your ecaps hash ;)
  551. jonas’ it doesn’t cause a loop -- that was a mistake on my side -- but it still does fun things. since every time you receive a pep update, your ecaps hash would change and you’d have to re-emit presence
  552. Syndace oh right, you +notify for the node and not for each contact you want notifications from
  555. Syndace jonas' I wasn't thinking about updating the hash during runtime, just setting it to the last hash you saw before disconnecting last time. Only to avoid the one single automatic update that you receive on connect.
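
Client-side, such hash-only notifications would be handled roughly like this (hypothetical handler; the payoff is that the automatic on-connect notification costs no follow-up query when the hash is unchanged):

    # (jid, node) -> last content hash we acted on; persist across sessions
    last_seen: dict = {}

    def on_hash_notification(jid, node, content_hash, fetch_items):
        if last_seen.get((jid, node)) == content_hash:
            return None  # unchanged since last session: nothing to do
        items = fetch_items(jid, node)  # explicit pubsub items request
        last_seen[(jid, node)] = content_hash
        return items
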
  585. lovetox hm are you aware that +notify-hash would change your disco info
  586. lovetox just saying that would flood me with disco info requests every time something changes in pep
  587. lovetox and this brings me to the topic that +notify is really bad in disco info
  588. lovetox i cant deactivate a feature without changing my disco info
  591. lovetox but on the other hand its really the easiest way to communicate to remote servers what you want hmm
  594. Syndace "hash" means the word "hash" here, not any actual value. so you'd put "+notify-hash" in disco info exactly the same way you put "+notify" there now. so no disco info change everytime something changes in pep.
  595. lovetox then i missed something
  596. lovetox how does the server know on what version i am?
  597. Syndace I don't think there even was consensus on doing versioning
  598. Syndace just a simple hash of the node content
  599. Syndace nothing more, nothing less
  600. lovetox where is the hash of the node content
  601. lovetox sent with the pep notification?
  602. Zash hash of what?
  603. Syndace hash of the pep node content
  604. Zash what normalization?
  605. Zash what c14n?
  606. Syndace whatever flow thinks of 😀
  607. Syndace > sent with the pep notification? yeah. instead of getting the content in the notification, you get a hash of the content.
  608. lovetox so instead of the actual data i get hundreds of pep notifications that contain a hash
  609. Syndace reduced bandwidth and if you need to know the actual content you can manually query
  610. lovetox and i have to query more if its not the hash that i have?
  611. Syndace yup, instead of 100 device lists, 100 hashes
  612. Zash Isn't something like this in XEP-0060?
  613. lovetox yes thats already in there
  614. Syndace (for example)
  615. Zash notifications without payloads at least
  616. lovetox its called omit the payload
  617. Syndace how does that work with the first update you get when you connect to the server and +notify?
  618. lovetox you get a notification just with the item-id without payload
  619. lovetox the item-id could be your version or hash
  620. Zash cf xep84
  621. lovetox but really thats not worth it for something like a device list on omemo
  622. Zash I've actually wondered why '84 doesn't just use payloadless notifications instead of a separate node
  623. lovetox the payload is small anyway
  624. Syndace lovetox well, the node can contain user-defined labels now
  625. Syndace so it can be a few times bigger than in legacy omemo
  626. Syndace for a few 100 contacts that adds up
  627. Syndace at least larma said that the device list notifications already make up a considerable portion of the traffic on connect
  628. larma It's probably not the only thing, but it's definitely visible
  629. larma haven't actually calculated how much bytes it makes
  630. lovetox you can reduce the payload
  632. lovetox but this does not change the fact that pubsub in its current form is just not scalable
  634. lovetox it works nice until you reach X users in your roster
  635. lovetox then it becomes a burden
  636. larma you mean pep, not pubsub
  637. lovetox pep is a subset of pubsub
  638. Syndace payloadless notifications actually sound pretty cool, we'd have to set the item id to something with meaning though, like a hash of the content
  639. Syndace we use "current" at the moment
  640. lovetox then you need to configure the node to max-items=1
  641. Syndace and do payloadless notifications work with PEP notifications?
  642. lovetox which we sidestep with "current" right now
  643. lovetox Syndace, its a node configuration, and you can configure the default node just to enable it
  644. lovetox but this would probably break every client
  645. Syndace heh 😀
  646. Syndace sounds like something that might be a solution for OMEMO but I don't know enough about PEP/pubsub to push that idea forward
  647. lovetox what we really would need is a smart idea how we can avoid notifications altogether if we already received them
  648. lovetox Syndace, you say solution like there is a problem
  649. Zash server-side device tracking?
  650. Zash pubsub-since?
  651. lovetox omemo and all other pep based xeps work fine
  652. lovetox its just not scalable indefinitely
  653. Zash it doesn't have to tho, humans don't scale that well either
  654. Syndace "problem" is a strong word, but e.g. we don't put the ik into the device list because it's too big, so you have to query the bundle manually for every new device.
  656. lovetox openpgp puts the ik into the notification
  657. Syndace and if everybody sets a huge label for all of their devices, you'll notice the traffic probably
  658. lovetox so its not like it isnt already done
  660. lovetox if the payload gets too big, you do what the other xeps do
  661. lovetox add a metadata node
  662. lovetox that tells you only the current version
  663. lovetox see 0084
  664. Syndace isn't that exactly what the hash approach would do?
  665. lovetox yes, my example can be implemented and works tomorrow
  666. lovetox yours needs support in X server implementations first
  667. lovetox and the result is the same, one is just more elegant
  668. Syndace I think we're drifting away
  669. lovetox why? its exactly what you want, you subscribe to a metadata node, it always contains only the last hash or version
  670. lovetox and you define in the xep, if the version or hash is outdated, you request the payload node
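
The metadata-node pattern lovetox describes, XEP-0084 style, as a sketch (the node names and the pubsub client API here are made up): the publisher updates a big data node plus a tiny metadata item whose id changes with the content, and contacts +notify only on the metadata node.

    import hashlib

    DATA_NODE = "urn:example:devices"            # made-up node names
    META_NODE = "urn:example:devices:metadata"   # the only node contacts +notify on

    def publish_device_list(pubsub, payload: bytes):
        # pubsub is a hypothetical client API offering publish()/get_items()
        version = hashlib.sha256(payload).hexdigest()
        pubsub.publish(DATA_NODE, item_id="current", payload=payload)
        pubsub.publish(META_NODE, item_id=version, payload=None)  # tiny notification

    def on_metadata_notification(pubsub, jid, version, known_versions):
        # only fetch the big payload when the advertised version is new
        if known_versions.get(jid) != version:
            known_versions[jid] = version
            return pubsub.get_items(jid, DATA_NODE)
        return None
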
  671. Syndace yeah sure, we talked about that for OMEMO
  672. Syndace I don't know why we're reiterating it now
  673. lovetox ok, now that im saying it out loud, i dont really see where the server could even help us here
  674. Syndace The server could create the hash for us on-the-fly, without the need for an extra metadata node
  675. lovetox but the extra node is on the server
  677. lovetox for the client nothing changes
  678. Syndace the client has to update the metadata node though
  680. lovetox it gets a notification with the hash, and afterwards requests the node
  681. Syndace when it publishes something
  682. lovetox yeah true it has to publish 2 things
  683. lovetox instead of one
  684. lovetox hardly worth a new xep and server impl though if you ask me :D
  685. lovetox its not like you publish daily devicelists
  686. Syndace if you do it manually, every XEP has to do it manually. If you can just subscribe to +notify-hash, every client can decide to do it without the XEP even mentioning the possibility
  687. Syndace > its not like you publish daily devicelists
  the problem is still the PEP update spam you get on connect
  688. lovetox yeah true, as i said its a bit more elegant
  689. Syndace yeah
  690. lovetox Syndace, you also get pep update spam with +notify-hash
  691. Syndace yes, but (in many cases) less :)
  692. lovetox yes as it would be if you use a metadata node :)
  693. Syndace less as in less bytes, not fewer updates
  694. lovetox but ok, if the server does it for us it indeed elegant for the client
  698. Syndace yeah. And it's easier than payloadless (is it?), because we can keep using one item with id "current" and don't have to rely on max-items (why not?).
  701. lovetox the problem with payloadless is its a configuration on the node
  702. lovetox so you have to get all servers to have this configured
  703. lovetox which does not make much sense in other cases
  704. Zash `pubsub#deliver_payloads`
  705. Zash Hm, I wondered why something wasn't (also) a subscription option. Maybe this was it.
  706. Syndace but the device list is its own node, isn't it? so you could just set that for the device list?
  708. Zash Did you not kill the 1+n node scheme?
  709. lovetox Syndace, node configuration is not nice with clients
  710. Syndace we have two nodes, one with the device list and one with the bundles
  711. lovetox first you have to pull the node configuration, then you have to set a new one
  712. lovetox then you have to publish
  713. Syndace the bundles used to be split into n
  714. lovetox this is theoretically possible with publish_options, but servers only partly support this
  715. Zash single device id item or one per device?
  716. Syndace single
  717. Syndace two nodes in total
  718. Zash lovetox, easily solvable, just make the Conversations Compliance checker cry loudly about it
  719. Syndace ah items, yeah one item with the list
  720. Zash Hm
  721. Syndace lovetox we already require setting pubsub#max_items
  722. Syndace so might also require the other thing
  723. Syndace Zash, I think PEP only notifies about the newest item? That's why we want the whole list to be one item.
  724. Zash Another unshaved yak :/
  725. lovetox Syndace, max items is supported by publish options on most servers
  726. Syndace Why? 😀
  727. lovetox other node configurations are not
  728. Zash If you could somehow ensure that you get all of the items, it'd be cleaner
  729. Syndace (the Why was @Zash)
  730. lovetox but yeah if the option is not set, its not bad
  732. lovetox then you get the payload
  733. Zash And then you could use retractions to indicate device removal
  735. Zash Cleaner mapping
  736. lovetox retractions are not sent on coming online
  737. lovetox the one thing per item approach is good for stuff on your own server
  738. Zash https://xmpp.org/extensions/xep-0312.html uses relative time? 😱️
  739. lovetox like bookmarks, which you want to request anyway on every start
  740. flow Syndace, FYI https://wiki.xmpp.org/web/XEP-Remarks/XEP-0373:_OpenPGP_for_XMPP
  741. flow so yes pubsub#deliver_payloads would be the way to go, the wiki page has a note about that feature being not discoverable though
  742. Syndace cool! thanks for the link.
  743. flow I think I had the split metadata and data scheme in mind because that is what works with any minimal PEP implementation
  744. Zash Like 84?
  745. flow Zash, searching for deliver_payloads in 84 yields no results
  746. Zash it has split metadata and data tho
  747. flow and since I don't have any detail of every protocol in mind, I would appreciate hearing what exactly
  748. flow aah ok
  750. flow I also looked into my notes and found a todo item regarding deliver_payloads
  751. flow Syndace> payloadless notifications actually sound pretty cool, we'd have to set the item id to something with meaning though, Do you really have to set the ID explicitly? Often it is enough to go with the pubsub service generated one
  752. flow soo, good news: I don't have to write a protoxep, xmpp already provides what we need, we just have to implement it in services and clients
  753. flow and I can go in my hammock
  754. flow Zash, actually I wonder if that split should be declared an anti pattern
  755. lovetox before you consider something an anti pattern you should at least provide a different approach to reach the same goal
  756. Zash flow: Mmmm, borderline. I personally think the (old) OMEMO thing with 1+n nodes was worse. But if it works, it gets deployed.
  757. Syndace flow actually there is a small but meaningful difference between +notify-hash and payloadless: payloadless has to be configured on the node while +notify-hash can be used on any node if the client wants to
  758. Syndace +notify-payloadless would be amazing too
  759. Zash You could invent that
  761. Zash Would be easier if it was implemented as a subscription option tho :/
  762. Zash and specced as one
  764. Syndace and payloadless should probably be made optional with a disco feature to reflect the current state of server implementations
  765. Zash Is there a feature for it?
  766. Syndace > the feature is not discoverable, most likely because it appears to be mandatory by XEP-0060
  768. Syndace from https://wiki.xmpp.org/web/XEP-Remarks/XEP-0373:_OpenPGP_for_XMPP
  769. Zash Is there a feature for `pubsub#deliver_payloads` I mean
  770. Syndace if https://xmpp.org/registrar/disco-features.html is the list of features then no, can't find anything for "deliver_payloads"
  773. Syndace Anyway, the situation is quite clear, we can't rely on any of that for OMEMO. If we want to reduce the on-connect PEP update traffic, we have to manually specify some sort of metadata node.
  774. Zash Account level subscriptions + MAM? :)
  775. Syndace And I think we agreed that it's not worth the effort given that the device list node is rather small generally
  776. Syndace I don't think a hard dependency on MAM is a good idea just for that
  780. Syndace how does account level subscription work? you subscribe using '60 instead of +notify and then you receive updates as messages that are stored in MAM while you're offline?
  781. Zash Syndace, you subscribe your account, notifications are sent there and could /in theory/ be forked (instead of the origin sending to each +notify) and MAM'd
  782. Zash In practice those notifications will just be discarded because type=headline
  783. Syndace ah, right
  784. Zash Could be solved by some future magic XEP probably
  785. Zash IM NG might help actually
  786. Zash Also possible to configure notifications to be sent as type=chat, but that's a node config, not subscription option :(
  787. Syndace meh
  788. Zash More of these fancy things as subscription options would be awesome
  789. Zash So each subscriber decides
  796. flow Syndace, yes, but don't most xeps, including OMEMO, already specify how nodes should be configured? so I am not sure how meaningful the difference is in this case
  797. flow I think we should probably tackle this from two angles: configuring the node to not deliver payloads *and* invent +notify-payloadless
  799. larma flow, how would you introduce +notify-payloadless to a federated network?
  800. Zash larma, haha .. :(
  801. larma well the problem is that it's not relying on your server to be updated, but on every server to be updated
  802. Zash larma, you do both +notify and +notify-payloadless and the receiver needs to ignore the former?
  803. flow larma, ^
  804. larma So as a client I get mixed responses, sometimes payloadless and sometimes not?
  805. larma depending on what the other end's server supports
  806. flow well ideally the node is also configured to not deliver payloads
  807. flow I actually think that this should be enough
  808. Zash Alternatively, mandate local server magic
  809. Zash Your server could easily hash the content (/me laughs in xml c14n) and forward you the hash
  810. flow Zash, I don't think c14n is relevant here
  811. larma If we do local server magic, I'd rather go full server magic and do global pep versioning
  812. Zash Oh
  813. Zash I'm confusing the hash stuff with the payloadless stuff
  814. Zash nm me
  815. Zash Should be trivial for a server to strip the payload
  816. flow even there, do not think of it as a hash, but as an Abstract (PubSub) Node Data ID
  817. flow for which it is true that if the node data changes, that abstract id changes too
  818. Zash I thought I saw some talk about hashing the payload data somehow
  819. flow but it is not important that "similar" xml leads to the same id
  820. flow in fact, if the data does not change but there is a new id, that would be fine too
  821. flow one could even implement that abstract (PubSub) Node Data ID as a counter
  823. flow i.e. it is the same id that xep395 would use
  824. Zash Anyways, if a server sees a pubsub notification that includes a payload, but the client has set +notify-payloadless, then it should be easy to strip the payload and forward the rest
  825. Zash IIRC payloadless notifications still include the id, so if you stick a magic hash there you should be golden
  826. flow why is sticking a magic hash in there important?
  827. flow couldn't you just use the, you know, item id?
  828. larma flow, if the item id is just 'current' all the time, that's not very helpful
  829. flow larma, it does not have to be that way
  830. flow isn't 'current' just because it's a singleton node?
  831. Zash yes
  834. larma yeah, but then taking a hash instead of a random number is a good idea, because changing back and forth will result in the same id, so no unnecessary requests from those that were not online in between
  835. flow now the question is whether it is possible to keep the singleton semantics but have different IDs for different items, which seems desirable anyway
  836. Zash yes, but you need max_items
  837. flow larma, it is a good idea without doubt, but it is not strictly required
  838. flow Zash, and we do not have max_items? or is there no issue?
  839. Zash I think we do, but there's some extra care involved
  840. Zash Older prosody doesn't, but also doesn't support >1 items so it's fine.
  841. flow so, use max_items=1, deliver_payloads=false, service-generated item id → $$$
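
flow's recipe spelled out as XEP-0060 form fields (sketch; pubsub#max_items and pubsub#deliver_payloads are real node-configuration fields, but whether a given server accepts deliver_payloads as a publish-options precondition varies, as discussed above):

    # Node configuration / publish-options fields for flow's recipe:
    publish_options = {
        "FORM_TYPE": "http://jabber.org/protocol/pubsub#publish-options",
        "pubsub#max_items": "1",             # singleton semantics
        "pubsub#deliver_payloads": "false",  # payloadless notifications
    }
    # Publish with no explicit item id so the service generates one; subscribers
    # then see a fresh id exactly when the content changes.
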
  842. larma can't we just build mam for pubsub instead, maybe using some shorthand +notify-archive which will cause the server to automatically subscribe to nodes and deliver updates from the archive when connecting?
  843. Zash 🥇️
  844. Zash larma, any question including the word "just" automatically gets "no" for an answer
  845. Zash It's never "just" anything :P
  846. larma 😀
  847. flow life would be boring if things were that easy
  848. larma It's not about doing something that's easy
  849. larma It's rather about doing something that's meaningful
  850. Zash larma, pubsub+mam has been mentioned in the past as the glorious saviour of everything, but it's lacking in actual specification or somesuch
  851. Zash lacking in "how should it even work?"
  852. Zash I think MattJ had some stuff to say about it recently
  853. larma replacing `<item id='current'><devices><device id='1' /><device id='2' /></devices></item>` with `<item id='b5fec669aca110f1505934b1b08ce2351072d16b' />` isn't really a huge improvement IMO
  854. Zash How many phones and computers do normal people even have?
  855. larma Sure it's some improvement, but it still means O(n*m) traffic on connect (n = number of contacts, m = number of features that use pep)
  856. larma I always calculate with 3, but I feel it's probably rather 1.7 or something
  857. larma many don't even use their notebooks/desktops for messaging at all
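
A back-of-the-envelope illustration of larma's O(n*m) point above (every number below is a made-up assumption; only the shape of the result matters):

    contacts = 100        # n: roster size
    pep_nodes = 3         # m: e.g. omemo device list, avatar metadata, nickname
    payload_bytes = 500   # assumed average notification size with payload
    hash_bytes = 60       # assumed size of a hash-only notification

    with_payloads = contacts * pep_nodes * payload_bytes  # 150,000 bytes per connect
    hash_only = contacts * pep_nodes * hash_bytes         #  18,000 bytes per connect
    # Hashes shrink each notification, but the stanza count stays n*m --
    # which is the argument for versioning/"since" rather than smaller payloads.
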
  858. Zash I heard computer sales was picking up because everyone needed to work from home :)
  859. larma "I was sending SMS from my phone for the last 25 years, I'll continue to do so"
  860. lovetox thats what i said earlier, the whole idea makes it a bit more efficient, but in the grand scheme of things, where everybody stuffs everything into pep, it does not really matter
  861. lovetox but nevertheless it would be more elegant, and xeps would not need to define metadata nodes anymore
  862. Zash lovetox, have you heard of https://en.wikipedia.org/wiki/Jevons_paradox ? :)
  866. lovetox no i did not, but now i know
  877. Syndace should not forget in the size comparison that there are labels
  878. Syndace and clients will probably set default labels of a few chars
  879. Zash Labels?
  881. Syndace optional labels (=strings) for devices
  882. Syndace to make it easier to identify keys
  883. Syndace e.g. Gajim could set "Gajim on Windows" for its default label