jdev - 2020-04-17


  1. lovetox

    hm lib dev question

  2. lovetox

    if a lib has a JID object, and has a getBare() method

  3. lovetox

    what should it return if the jid is only a domain for example "asd.com"

  4. lovetox

    would be wrong to return asd.com

  5. lovetox

    should it raise an exception, NoBareJID or something?

  6. Link Mauve

    lovetox, why would it be wrong? asd.com is a valid bare JID.

  7. lovetox

    or is the localpart not mandatory on a bare jid

  8. lovetox

    ah ok nice

  9. Link Mauve

    asd.com/foo is a valid full JID.

  10. lovetox

    so a bare jid is just the jid without the resource

  11. Link Mauve

    Yes.

  12. lovetox

    ah k thanks

  13. jonas’

    that ^

  14. flow

    lovetox, http://jxmpp.org/releases/0.7.0-alpha5/javadoc/org/jxmpp/jid/Jid.html https://github.com/igniterealtime/jxmpp#jxmpp-jid
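
A rough illustration of the answer above (a toy class, not jxmpp's actual API): a bare JID is simply the JID without its resource, and the localpart is optional, so a domain-only JID like "asd.com" is already a valid bare JID and getBare() needs no exception.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class JID:
    """Toy JID type for illustration; field names are made up."""
    domain: str
    localpart: Optional[str] = None
    resource: Optional[str] = None

    def bare(self) -> "JID":
        # bare JID = drop the resource; no exception needed for domain-only JIDs
        return JID(domain=self.domain, localpart=self.localpart)

assert JID("asd.com").bare() == JID("asd.com")                  # "asd.com" is a valid bare JID
assert JID("asd.com", resource="foo").bare() == JID("asd.com")  # "asd.com/foo" is a valid full JID
```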

  15. lovetox

    how likely is it that 2 different hash mechanisms produce the same hash

  16. lovetox

    like entity caps, it's mostly sha-1

  17. lovetox

    but theoretically you can also use something else

  18. lovetox

    right now i store the hash mechanism beside the hash

  19. Link Mauve

    lovetox, XEP-0390 fixed this issue AFAIK.

  20. lovetox

    but i wonder if i can just not store the hash mech

  21. jonas’

    lovetox, it is unlikely. but it’s stupid to not store the mechanism, too.

  22. jonas’

    it only costs a few bytes

  23. Link Mauve

    Can get down to a single byte if you convert the string into an enum.

  24. Link Mauve

    Assuming you know the mechanisms you can use.
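
A sketch of what storing the mechanism next to the hash could look like (the enum values and cache entries are made up): keying the cache by (mechanism, hash) also guarantees that identical digests from two different hash functions can never collide.

```python
from enum import IntEnum

class HashAlgo(IntEnum):
    # fits in a single byte if the set of known mechanisms is fixed
    SHA_1 = 1
    SHA_256 = 2

# entity caps cache keyed by (mechanism, hash) instead of the hash alone
caps_cache: dict[tuple[HashAlgo, str], dict] = {}
caps_cache[(HashAlgo.SHA_1, "base64-digest-goes-here")] = {"features": ["jabber:iq:version"]}
```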

  25. jonas’

    there was a thing...

  26. lovetox

    its not about storage cost

  27. lovetox

    i have a 115 cache

  28. lovetox

    which consists of hashmethod/hash -> discoinfo

  29. lovetox

    but then there are entities we have to query and cache which provide no hash or have no presence at all

  30. lovetox

    like for example a muc

  31. lovetox

    i need to also store these disco info data

  32. lovetox

    but its hard to use the same cache

  33. lovetox

    because for muc i need a JID -> DiscoInfo kind of cache

  34. lovetox

    and that i need 2 different caches for discoinfo data... somehow i don't like it

  35. flow

    but that's how it is

  36. jonas’

    lovetox, in aioxmpp, we listen on presence info and prime the JID -> DiscoInfo cache from the hash -> discoinfo cache

  38. flow

    or, are you thinking about a generic String→ disco#info cache?

  39. jonas’

    that means that discoinfo lookups themselves only ever use the JID -> DiscoInfo cache

  40. flow

    guess one could do that, but I personally wouldn't do so

  41. jonas’

    which is pre-populated on the fly by the entitycaps component which listens for presence

  42. flow

    especially since you could persist the caps cache to disk, while the jid→disco#info cache is somewhat unreliable and should have a sensible timeout (smack uses 24h)

  43. lovetox

    ah nice so when you need caps, you always access only the JID -> Disco info cache

  44. lovetox

    that's exactly what i was looking for

  45. jonas’

    lovetox, correct

  46. lovetox

    thanks
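
A rough sketch of the two-cache arrangement jonas' describes (this is the idea, not aioxmpp's real API): presence carrying caps primes the JID cache from the persistent caps cache, and all feature lookups go through the JID cache only.

```python
class DiscoCache:
    """Rough sketch, not aioxmpp's actual API."""

    def __init__(self) -> None:
        self.by_caps: dict[tuple[str, str], dict] = {}  # (algo, hash) -> disco#info, can be persisted
        self.by_jid: dict[str, dict] = {}               # JID -> disco#info, primed on the fly

    def on_presence_with_caps(self, jid: str, algo: str, ver: str) -> None:
        # entity caps component: prime the JID cache from the caps cache
        info = self.by_caps.get((algo, ver))
        if info is not None:
            self.by_jid[jid] = info

    def on_disco_info_result(self, jid: str, info: dict, algo: str = None, ver: str = None) -> None:
        # explicit disco#info results (e.g. a MUC service without presence) land in both caches
        self.by_jid[jid] = info
        if algo is not None and ver is not None:
            self.by_caps[(algo, ver)] = info

    def lookup(self, jid: str):
        # lookups only ever go through the JID -> DiscoInfo cache
        return self.by_jid.get(jid)
```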

  47. flow

    jonas’, does that jid→disco#info cache only include values obtained via caps?

  48. jonas’

    flow, no

  49. flow

    or do you also put in results of actual disco#info requests?

  50. jonas’

    in fact, the jid->disco#info cache is in reality a JID -> Future(disco#info) cache

  51. lovetox

    yes that's the goal flow, often i have to disco#info entities that don't have presence

  52. lovetox

    and i save that in a cache and store to disk

  53. jonas’

    lovetox, I’m not sure storing the jid->disco#info cache on disk is a good idea

  54. flow

    store ephemeral data to disk?

  55. lovetox

    for example a transport which i'm not subscribed to; if i load my roster i want to know the identity type so i can display icons that match

  56. jonas’

    that ^

  57. jonas’

    right, but treat it as stale and look it up at some point if you’ve loaded it from disk

  58. lovetox

    of course, you have a last-seen attr and periodically request again

  59. jonas’

    :+1:

  60. lovetox

    all disco info is stale the second you receive it, so this has nothing to do with application restarts

  61. jonas’

    sure, but the longer you wait, the staler it gets :)
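
A minimal sketch of the "treat disk-loaded entries as stale" idea; the 24h figure is just borrowed from the Smack default mentioned below, and query_disco_info is a hypothetical network call.

```python
import time

MAX_AGE = 24 * 3600  # example timeout; Smack reportedly uses 24h

disco_cache: dict[str, dict] = {}  # JID -> {"info": ..., "last_seen": unix timestamp}

def get_disco_info(jid: str) -> dict:
    entry = disco_cache.get(jid)
    if entry is not None and time.time() - entry["last_seen"] < MAX_AGE:
        return entry["info"]                 # still fresh enough, use cached identity/features
    info = query_disco_info(jid)             # hypothetical: send a disco#info IQ and wait for the result
    disco_cache[jid] = {"info": info, "last_seen": time.time()}
    return info
```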

  62. flow

    plus with caps+presence you will get push notifications if disco#info changes

  63. jonas’

    flow, not for MUC services for example

  64. flow

    and can then react to that and invalidate your cache etc

  65. jonas’

    which is what he’s talking about

  66. lovetox

    flow, we are talking specifically about entities that have no presence

  67. flow

    I know. I just wanted to state that there is a difference between disco#info data with caps and that without

  68. flow

    for those cases like MUC, smack has a cache which expires its entries after 24h

  69. flow

    but actually, I keep pondering that

  70. flow

    assume we one day live in a world where the xmpp service address hosts most services (which I think is desirable for the services where that is possible), and your service operator updates the service; then you will potentially only become aware of the new feature after 24h

  71. flow

    stream:features to the rescue

  72. jonas’

    it’d be nice to have 115/390-like push for services, too

  73. flow

    wouldn't that be the case if you are subscribed to the presence?

  74. lovetox

    for muc this somewhat exists

  75. flow

    they just have to add xep115/390 metadata

  76. lovetox

    its called notification of configuration change

  77. lovetox

    i always disco after it

  78. lovetox

    because the notification does not tell you what changed
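
For reference, the "notification of configuration change" mentioned above is a groupchat message carrying status code 104 (XEP-0045); since it does not say what changed, the handler simply re-queries disco#info. refresh_disco_info below is a stand-in, not a real library call.

```python
import xml.etree.ElementTree as ET

MUC_USER_NS = "http://jabber.org/protocol/muc#user"

def refresh_disco_info(room_jid: str) -> None:
    print("re-querying disco#info for", room_jid)   # stand-in for the real disco#info IQ

def handle_groupchat_message(stanza: str) -> None:
    message = ET.fromstring(stanza)
    for status in message.iter(f"{{{MUC_USER_NS}}}status"):
        if status.get("code") == "104":
            # room configuration changed, but we are not told what changed
            refresh_disco_info(message.get("from"))

handle_groupchat_message(
    f"<message from='room@muc.example.org' type='groupchat'>"
    f"<x xmlns='{MUC_USER_NS}'><status code='104'/></x></message>"
)
```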

  79. jonas’

    flow, you’re assuming that all services expose presence

  80. flow

    so it is probably not an issue of a spec filling a hole, but of implementations just doing that

  81. flow

    jonas’, well services need to know the subscribed entities to push 115/390 data to

  82. jonas’

    that is true

  83. jonas’

    I question whether that needs to be presence though

  84. flow

    and I think there is nothing wrong with just re-using presence for that?

  85. flow

    I can see the argument of a polluted roster

  86. jonas’

    I think there’s some reason in not using presence for this at all

  87. flow

    but I wouldn't buy it

  88. jonas’

    that, too

  89. jonas’

    not for service-to-client, but for client-to-client presence, it quickly gets expensive to have all that stuff in the presence stanza

  90. jonas’

    so I wonder whether more fine-grained notification models wouldn’t make more sense

  91. flow

    hmm I'm sorry I can't follow, we were talking about services using caps to "push" disco#info to interested parties, and now you switch to c2s presences?

  92. jonas’

    flow, yes

  93. jonas’

    I’m questioning using presence for caps

  94. jonas’

    because presence is overloaded

  95. jonas’

    this is mostly relevant for c2s presences

  96. jonas’

    (or, more specifically, for client-to-client presences)

  97. flow

    so you basically want to re-open the old discussion between peter and phippo about presence floods? ;)

  98. jonas’

    maybe :)

  99. flow

    I mean clients do not change their disco#info feature often, do they?

  100. jonas’

    exactly

  101. jonas’

    that’s why *not* having this in presence would be good

  103. flow

    please elaborate, because I think client-specific caps are potentially the only thing still sensible to have in presence these days

  104. jonas’

    avatar hashes exist too

  105. jonas’

    GPG also

  106. flow

    well those should probably go in PEP

  107. jonas’

    exactly

  108. flow

    and openpgp is already in PEP

  109. jonas’

    I still see it in presence stanzas

  110. flow

    (if you use a modern XEP ;;)

  111. jonas’

    just like avatar hashes

  112. jonas’

    yeah, well

  113. jonas’

    the question is, if we’ve gone through the effort to move this rarely-changing data out of <presence/>, shouldn’t we also move that other rarely-changing data (ecaps) out of presence?

  114. jonas’

    what is the rationale for keeping it there?

  115. flow

    ok, so from a protocol PoV we are fine (at least in these areas), seems to be more of an implementation-is-missing issue

  116. flow

    jonas’, ahh, I was not thinking that the frequency of change should be a criterion here

  117. flow

    I was more thinking of "is this client specific" as criterion

  118. jonas’

    aha

  119. flow

    I don't think you want different avatars for different clients

  120. jonas’

    I was coming from the "rarely-changing data in a stanza which is often sent is a waste of bandwidth" angle

  121. flow

    (of course, if you ask enough people, some will say they want this…)

  122. jonas’

    yeah, the per-client-ness is an argument pro presence

  123. jonas’

    though we’ve already had enough arguments for the case that per-client caps are rarely useful and most of the time you’ll need something like an aggregated caps over all clients of the user (both min and max aggregates)

  124. jonas’

    and those caps could be distributed by the server in a non-presence vehicle and also contain full caps hashes for the individual clients which are currently online.
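
A toy version of the "aggregated caps" idea (the feature strings are arbitrary examples): the "min" aggregate is what every online client of the user supports, the "max" aggregate what at least one of them supports.

```python
# per-client feature sets for one user (example values only)
clients = {
    "user@example.org/phone":   {"urn:xmpp:receipts", "urn:xmpp:chat-markers:0"},
    "user@example.org/desktop": {"urn:xmpp:receipts", "urn:xmpp:jingle:1"},
}

min_caps = set.intersection(*clients.values())   # supported by every online client
max_caps = set.union(*clients.values())          # supported by at least one client
print(sorted(min_caps), sorted(max_caps))
```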

  125. flow

    yes, but even if per-client caps are rarely useful, which I am not sure I would agree with, I do not see this as an argument to remove them

  126. flow

    of course, what we have discussed regarding per-account caps still appears desirable

  127. flow

    and we should move towards that

  128. jonas’

    even if per-client caps are useful (which they sometimes are, I agree), the question is whether they belong in presence

  129. flow

    and maybe, just maybe, we will discover that per-client caps are no longer needed, but then they will probably vanish automatically

  130. flow

    yes, but I am not sure if this is the question we should answer right now

  131. flow

    it appears as something deeply baked into the core of xmpp

  132. jonas’

    not really, it’s in '115

  133. jonas’

    that’s not that deep

  134. flow

    and since they rarely change, i feel like it is not worth any effort getting rid of them

  135. flow

    but if you want to work on a spec which puts those into something else (PEP?), then please do

  137. jonas’

    no, that they rarely change is a reason to move them out of presence

  138. jonas’

    if they changed "sometimes", presence would be a good place. if they changed "often", presence would be a terrible place.

  139. jonas’

    if they change "rarely", they are dead weight in most of the presence broadcasts which happen

  140. jonas’

    (of course, if they change "often", they cause unnecessary presence broadcasts, which is arguably much worse)

  141. jonas’

    flow, yeah, I’ve been pondering that for '390, which is why I take the opportunity to discuss this

  142. flow

    ahh, you are not worried about caps triggering additional presence broadcasts, but the mere existence of caps in every presence

  145. jonas’

    exactly

  146. flow

    tbh i never thought of this as something of a heavy burden

  148. jonas’

    it gets heavier when you introduce hash agility (like '390) and modern hash functions

  149. flow

    I personally wouldn't invest time to improve here

  150. jonas’

    stuff gets more and longer

  151. flow

    true, but I do not think that we will change hashes often

  152. jonas’

    I’m not so sure of that

  153. flow

    sha1 has served us well for what? a decade or so?

  154. jonas’

    and even if we don’t change them *often*, the transition period may well be a decade of sending two hashes with each presence

  155. jonas’

    because oldstable

  156. flow

    obviously

  157. flow

    but if I had to guess i'd say we see 4-5 caps variants per presence at most

  158. jonas’

    which is quite a lot

  159. flow

    which surely is not optimal, but something I could live with
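
For a sense of scale, this is roughly what a presence carrying legacy '115 caps plus '390 caps with two hash functions would look like on the wire; the digests are placeholders, not real values.

```python
presence = """<presence from='user@example.org/client'>
  <c xmlns='http://jabber.org/protocol/caps'
     hash='sha-1' node='https://example.org/client' ver='...sha-1-digest...'/>
  <c xmlns='urn:xmpp:caps'>
    <hash xmlns='urn:xmpp:hashes:2' algo='sha-256'>...sha-256-digest...</hash>
    <hash xmlns='urn:xmpp:hashes:2' algo='sha3-256'>...sha3-256-digest...</hash>
  </c>
</presence>"""
print(len(presence.encode()), "bytes, most of it rarely-changing caps data")
```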

  160. flow

    jonas’, if you really want to reduce wire bytes invent an XML extension where the end element is always empty ;)

  161. jonas’

    ITYM EXI

  162. Syndace

    alright, this is getting WAY too close to what we discussed for OMEMO just minutes ago, I have to jump in with something slightly off-topic

  163. Syndace

    We have the problem that for OMEMO, you subscribe to a PEP node of each of the contacts you want to encrypt with. And then we're flooded with PEP updates on each connect, because PEP sends an update automatically on connect (right?). We were thinking about compressing stuff with EXI 😀

  165. jonas’

    Syndace, EXI needs to be done on the stream level

  166. flow

    Syndace, a common pattern is to split PEP data into a hash and the actual data

  167. jonas’

    it would be cool if PEP services could do that

  168. jonas’

    that would solve the race condition issues around that

  169. Syndace

    Nah, EXI is just compression for XML, not talking about the EXI XEP but the EXI technology

  170. flow

    that way you only get the hashes on connect, which may already help a lot
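
A sketch of the split flow describes (node names and helper functions are made up): the metadata node subscribed via +notify carries only a hash, and the full payload is fetched from the data node only when that hash is new.

```python
METADATA_NODE = "urn:example:devices:meta"   # subscribed via +notify, carries only a hash
DATA_NODE = "urn:example:devices"            # fetched explicitly when the hash changes

last_seen_hash: dict[str, str] = {}          # contact JID -> hash we last resolved

def on_metadata_notification(contact: str, new_hash: str) -> None:
    if last_seen_hash.get(contact) == new_hash:
        return                                       # nothing changed, no extra round-trip
    items = fetch_pubsub_items(contact, DATA_NODE)   # hypothetical pubsub 'items' request
    last_seen_hash[contact] = new_hash
    handle_device_list(contact, items)               # hypothetical application callback
```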

  171. jonas’

    Syndace, EXI generates binary output though, I'm not sure you don't lose 99% of the advantages if you have to wrap it in base64 again.

  172. Syndace

    Yeah that would actually help a lot

  173. flow

    not sure if this is possible in your case though, would need to have a closer look

  174. jonas’

    and if every client comes with an EXI implementation, we’re half way to being able to use EXI on c2s, which would be amazing

  175. Syndace

    Did a bit of research on available EXI implementations. It doesn't look super good but there are open implementations for C, Java, JavaScript and C#, though I can't say anything about the quality of those. Seem maintained at least.

  176. jonas’

    I hadn’t checked that far yet

  177. jonas’

    I only checked if libxml supports it (which it doesn’t)

  178. flow

    as much as I like EXI, I am sceptical that this is the right solution to your problem

  179. Syndace

    flow I think that should be possible for OMEMO device lists, I'll mention it as a possible solution, thanks 🙂

  180. Syndace

    EXI could be used to compress bodies in general too, not only for the PEP node content

  181. jonas’

    we should really discuss letting PEP services hash the contents of nodes

  182. Syndace

    And we encrypt the bodies as binary data anyway (SCE), so we don't have to base64 stuff there

  183. jonas’

    but that’d require c14n, and nobody wants to go near that in XML :)

  184. flow

    Syndace, tell them the OpenPGP XEP says hello (that is where the idea came from)

  185. jonas’

    Syndace, yes

  187. Syndace

    ...should really read the openpgp xep again 😀

  188. flow

    jonas’> we should really discuss letting PEP services hash the contents of nodes
    +1

  189. Syndace

    yeah that would be awesome

  190. flow

    a generic optional mechanism where the push only contains "a hash" would be nice

  191. flow

    could potentially be as easy as s/+notify/+notify-hash/

  192. flow

    great, now I wanna write a new protoxep

  193. flow

    when I actually wanted to go into the hammock

  194. Martin

    You can't do both? Write in the hammock?

  195. jonas’

    flow, go ahead

  196. Syndace

    > could potentially be as easy as s/+notify/+notify-hash/ damn that sounds soo good
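
To make the idea concrete: "+notify-hash" would be a fixed feature string in the client's own disco#info, exactly like today's "+notify" variants, so advertising it does not change whenever PEP content changes. The node names below are examples, and the "-hash" suffix is only the proposal from this conversation.

```python
# features a client could advertise in its own disco#info
features = [
    "urn:xmpp:avatar:metadata+notify",       # existing: full PEP notifications
    "urn:example:devicelist+notify-hash",    # hypothetical: notifications carrying only a hash
]
```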

  197. jonas’

    +notify-hash-hash-hash-hash-hash-ha…

  198. larma

    flow, why not versioning instead of hash

  199. larma

    after all, a few hundred hash notifications that are the same as on the last connect also seems kind of wasted...

  200. jonas’

    larma, what would the advantage be?

  201. larma

    jonas’, well if I have >100 contacts, each of them use(d) omemo, avatar, microblog, ... and when connecting I +notify-hash, I still receive a few hundred hashes. And often enough those are just the same hashes as the last time I connected.

  202. larma

    With some versioning scheme it could be done such that I only get the changes since last connect

  203. jonas’

    larma, right

  204. jonas’

    I thought versioning per node

  205. jonas’

    where you wouldn’t win anything over hashes

  206. jonas’

    versioning per contact or globally (by your local server) would win of course

  207. larma

    true, yeah was considering global versioning

  208. Syndace

    +notify-hash-$LASTHASH

  209. Syndace

    😀

  210. Zash

    Uh, what have I missed here‽

  211. jonas’

    Syndace, you do realize that that immediately causes a loop? :)

  212. jonas’

    ah, no, not a loop

  213. jonas’

    but still terrible things™

  214. Syndace

    no I don't actually?

  215. larma

    Syndace, that wouldn't work, because what would you hash? after all, you receive multiple different nodes based on that notify

  216. jonas’

    Syndace, it’s part of your ecaps hash ;)

  217. jonas’

    it doesn’t cause a loop -- that was a mistake on my side -- but it still does fun things. since every time you receive a pep update, your ecaps hash would change and you’d have to re-emit presence

  218. Syndace

    oh right, you +notify for the node and not for each contact you want notifications from

  219. Syndace

    jonas' I wasn't thinking about updating the hash during runtime, just setting it to the last hash you saw before disconnecting last time. Only to avoid the one single automatic update that you receive on connect.

  220. lovetox

    hm are you aware that +notify-hash would change your disco info

  222. lovetox

    just saying, that would flood me with disco info requests every time something changes in pep

  223. lovetox

    and this brings me to the topic that +notify is really bad in disco info

  224. lovetox

    i cant deactivate a feature without changing my disco info

  225. lovetox

    but on the other hand it's really the easiest way to communicate to remote servers what you want, hmm

  226. Syndace

    "hash" means the word "hash" here, not any actual value. so you'd put "+notify-hash" in disco info exactly the same way you put "+notify" there now. so no disco info change everytime something changes in pep.

  227. lovetox

    then i missed something

  228. lovetox

    how does the server know on what version i am?

  229. Syndace

    I don't think there even was consensus on doing versioning

  230. Syndace

    just a simple hash of the node content

  231. Syndace

    nothing more, nothing less

  232. lovetox

    where is the hash of the node content

  233. lovetox

    sent with the pep notification?

  234. Zash

    hash of what?

  235. Syndace

    hash of the pep node content

  236. Zash

    what normalization?

  237. Zash

    what c14n?

  238. Syndace

    whatever flow thinks of 😀

  239. Syndace

    > sent with the pep notification? yeah. instead of getting the content in the notification, you get a hash of the content.

  240. lovetox

    so instead of the actual data i get hundreds of pep notifications that contain a hash

  241. Syndace

    reduced bandwidth and if you need to know the actual content you can manually query

  242. lovetox

    and i have to query more if its not the hash that i have?

  243. Syndace

    yup, instead of 100 device lists, 100 hashes

  244. Zash

    Isn't something like this in XEP-0060?

  245. lovetox

    yes that's already in there

  246. Syndace

    (for example)

  247. Zash

    notifications without payloads at least

  248. lovetox

    it's called omitting the payload

  249. Syndace

    how does that work with the first update you get when you connect to the server and +notify?

  250. lovetox

    you get a notification just with the item-id without payload

  251. lovetox

    the item-id could be your version or hash
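
This is roughly the shape of such a payload-less notification (XEP-0060 with pubsub#deliver_payloads disabled): the subscriber only learns the item id, which is where a version or hash could live. The node name and the id value are illustrative.

```python
notification = """<message from='contact@example.org' to='me@example.org'>
  <event xmlns='http://jabber.org/protocol/pubsub#event'>
    <items node='urn:example:devices'>
      <item id='b5fec669aca110f1505934b1b08ce2351072d16b'/>
    </items>
  </event>
</message>"""
print(len(notification.encode()), "bytes instead of the full device list payload")
```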

  252. Zash

    cf xep84

  253. lovetox

    but really that's not worth it for something like a device list in omemo

  254. Zash

    I've actually wondered why '84 doesn't just use payloadless notifications instead of a separate node

  255. lovetox

    the payload is small anyway

  256. Syndace

    lovetox well, the node can contain user-defined labels now

  257. Syndace

    so it can be a few times bigger than in legacy omemo

  258. Syndace

    for a few 100 contacts that adds up

  259. Syndace

    at least larma said that the device list notifications already make up a considerable portion of the traffic on connect

  260. larma

    It's probably not the only thing, but it's definitely visible

  261. larma

    haven't actually calculated how many bytes it is

  262. lovetox

    you can reduce the payload

  263. lovetox

    but this does not change the fact that pubsub in its current form is just not scalable

  264. lovetox

    it works nicely until you reach X users in your roster

  265. lovetox

    then it becomes a burden

  266. larma

    you mean pep, not pubsub

  267. lovetox

    pep is a subset of pubsub

  268. Syndace

    payloadless notifications actually sound pretty cool, we'd have to set the item id to something with meaning though, like a hash of the content

  269. Syndace

    we use "current" at the moment

  270. lovetox

    then you need to configure the node to max-items=1

  271. Syndace

    and do payloadless notifications work with PEP notifications?

  272. lovetox

    which we sidestep with "current" right now

  273. lovetox

    Syndace, it's a node configuration, and you can configure the default node just to enable it

  274. lovetox

    but this would probably break every client

  275. Syndace

    heh 😀

  276. Syndace

    sounds like something that might be a solution for OMEMO but I don't know enough about PEP/pubsub to push that idea forward

  277. lovetox

    what we really would need is a smart idea for how we can avoid notifications altogether if we already received them

  278. lovetox

    Syndace, you say "solution" like there is a problem

  279. Zash

    server-side device tracking?

  280. Zash

    pubsub-since?

  281. lovetox

    omemo and all other pep based xeps work fine

  282. lovetox

    it's just not scalable indefinitely

  283. Zash

    it doesn't have to tho, humans don't scale that well either

  284. Syndace

    "problem" is a strong word, but e.g. we don't put the ik into the device list because it's too big, so you have to query the bundle manually for every new device.

  285. lovetox

    openpgp puts the ik into the notification

  286. Syndace

    and if everybody sets a huge label for all of their devices, you'll notice the traffic probably

  287. lovetox

    so it's not like it isn't already done

  288. lovetox

    if the payload gets too big, you do what the other xeps do

  289. lovetox

    add a metadata node

  290. lovetox

    that tells you only the current version

  291. lovetox

    see 0084

  292. Syndace

    isn't that exactly what the hash approach would do?

  293. lovetox

    yes, my example can be implemented and works tomorrow

  294. lovetox

    yours needs support in X server implementations first

  295. lovetox

    and the result is the same, one is just more elegant

  296. Syndace

    I think we're drifting away

  297. lovetox

    why? it's exactly what you want: you subscribe to a metadata node, and it always contains only the last hash or version

  298. lovetox

    and you define in the xep, if the version or hash is outdated, you request the payload node

  299. Syndace

    yeah sure, we talked about that for OMEMO

  300. Syndace

    I don't know why we're reiterating it now

  301. lovetox

    ok, now that i'm saying it, i don't really see where the server could even help us here

  302. Syndace

    The server could create the hash for us on-the-fly, without the need for an extra metadata node

  303. lovetox

    but the extra node is on the server

  304. lovetox

    for the client nothing changes

  305. Syndace

    the client has to update the metadata node though

  306. lovetox

    it gets a notification with the hash, and afterwards requests the node

  307. Syndace

    when it publishes something

  308. lovetox

    yeah true it has to publish 2 things

  309. lovetox

    instead of one

  310. lovetox

    hardly worth a new xep and server impl though if you ask me :D

  311. lovetox

    it's not like you publish device lists daily

  312. Syndace

    if you do it manually, every XEP has to do it manually. If you can just subscribe to +notify-hash, every client can decide to do it without the XEP even mentioning the possibility

  313. Syndace

    > its not like you publish daily devicelists the problem is still the PEP update spam you get on connect

  314. lovetox

    yeah true, as i said its a bit more elegant

  315. Syndace

    yeah

  316. lovetox

    Syndace, you also get pep update spam with +notify-hash

  317. Syndace

    yes, but (in many cases) less :)

  318. lovetox

    yes as it would be if you use a metadata node :)

  319. Syndace

    less as in less bytes, not fewer updates

  320. lovetox

    but ok, if the server does it for us, it is indeed elegant for the client

  321. Syndace

    yeah. And it's easier than payloadless (is it?), because we can keep using one item with id "current" and don't have to rely on max-items (why not?).

  322. lovetox

    the problem with payloadless is it's a configuration on the node

  323. lovetox

    so you have to get all servers to have this configured

  324. lovetox

    which does not make much sense in other cases

  325. Zash

    `pubsub#deliver_payloads`

  326. Zash

    Hm, I wondered why something wasn't (also) a subscription option. Maybe this was it.

  327. Syndace

    but the device list is its own node, isn't it? so you could just set that for the device list?

  328. Zash

    Did you not kill the 1+n node scheme?

  329. lovetox

    Syndace, node configuration is not nice to do from the client

  330. Syndace

    we have two nodes, one with the device list and one with the bundles

  331. lovetox

    first you have to pull the node configuration, then you have to set a new one

  332. lovetox

    then you have to publish

  333. Syndace

    the bundles used to be split into n

  334. lovetox

    this is theoretically possible with publish_options, but servers only partly support this
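
For illustration, this is roughly what publishing with publish-options looks like (XEP-0060): pubsub#max_items is the option described above as widely supported, while pushing other node settings like deliver_payloads this way is only partly supported. Node name and payload are examples.

```python
publish_with_options = """<iq type='set' id='pub1'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <publish node='urn:example:devices'>
      <item id='current'><devices xmlns='urn:example:devices'/></item>
    </publish>
    <publish-options>
      <x xmlns='jabber:x:data' type='submit'>
        <field var='FORM_TYPE' type='hidden'>
          <value>http://jabber.org/protocol/pubsub#publish-options</value>
        </field>
        <field var='pubsub#max_items'><value>1</value></field>
      </x>
    </publish-options>
  </pubsub>
</iq>"""
```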

  335. Zash

    single device id item or one per device?

  336. Syndace

    single

  337. Syndace

    two nodes in total

  338. Zash

    lovetox, easily solvable, just make the Conversations Compliance checker cry loudly about it

  339. Syndace

    ah items, yeah one item with the list

  340. Zash

    Hm

  341. Syndace

    lovetox we already require setting pubsub#max_items

  342. Syndace

    so might also require the other thing

  343. Syndace

    Zash, I think PEP only notifies about the newest item? That's why we want the whole list to be one item.

  344. Zash

    Another unshaved yak :/

  345. lovetox

    Syndace, max items is supported by publish options on most servers

  346. Syndace

    Why? 😀

  347. lovetox

    other node configurations are not

  348. Zash

    If you could somehow ensure that you get all of the items, it'd be cleaner

  349. Syndace

    (the Why was @Zash)

  350. lovetox

    but yeah, if the option is not set, it's not bad

  351. lovetox

    then you get the payload

  352. Zash

    And then you could use retractions to indicate device removal

  353. Zash

    Cleaner mapping

  354. lovetox

    retractions are not sent on coming online

  355. lovetox

    the one-thing-per-item approach is good for stuff on your own server

  356. Zash

    https://xmpp.org/extensions/xep-0312.html uses relative time? 😱️

  357. lovetox

    like bookmarks, which you want to request anyway on every start

  358. flow

    Syndace, FYI https://wiki.xmpp.org/web/XEP-Remarks/XEP-0373:_OpenPGP_for_XMPP

  359. flow

    so yes pubsub#deliver_payloads would be the way to go, the wiki page has a note about that feature being not discoverable though

  360. Syndace

    cool! thanks for the link.

  361. flow

    I think I had the split metadata and data scheme in mind because that is what works with any minimal PEP implementation

  362. Zash

    Like 84?

  363. flow

    Zash, searching for deliver_payloads in 84 yields no results

  364. Zash

    it has split metadata and data tho

  365. flow

    and since I don't have every detail of every protocol in mind, I would appreciate hearing what exactly

  366. flow

    aah ok

  367. flow

    I also looked into my notes and found a todo item regarding OX about deliver_payloads

  369. flow

    Syndace> payloadless notifications actually sound pretty cool, we'd have to set the item id to something with meaning though
    Do you really have to set the ID explicitly? Often it is enough to go with the pubsub-service-generated one

  370. flow

    so, good news: I don't have to write a protoxep, xmpp already provides what we need, we just have to implement it in services and clients

  371. flow

    and I can go in my hammock

  372. flow

    Zash, actually I wonder if that split should be declared an anti pattern

  373. lovetox

    before you consider something an anti pattern you should at least provide a different approach to reach the same goal

  374. Zash

    flow: Mmmm, borderline. I personally think the (old) OMEMO thing with 1+n nodes was worse. But if it works, it gets deployed.

  375. Syndace

    flow actually there is a small but meaningful difference between +notify-hash and payloadless: payloadless has to be configured on the node while +notify-hash can be used on any node if the client wants to

  376. Syndace

    +notify-payloadless would be amazing too

  377. Zash

    You could invent that

  378. Zash

    Would be easier if it was implemented as a subscription option tho :/

  379. Zash

    and specced as one

  380. Syndace

    and payloadless should probably be made optional with a disco feature to reflect the current state of server implementations

  381. Zash

    Is there a feature for it?

  382. Syndace

    > the feature is not discoverable, most likely because it appears to be mandatory by XEP-0060

  383. Syndace

    from https://wiki.xmpp.org/web/XEP-Remarks/XEP-0373:_OpenPGP_for_XMPP

  384. Zash

    Is there a feature for `pubsub#deliver_payloads`, I mean

  385. Syndace

    if https://xmpp.org/registrar/disco-features.html is the list of features then no, can't find anything for "deliver_payloads"

  386. Syndace

    Anyway, the situation is quite clear, we can't rely on any of that for OMEMO. If we want to reduce the on-connect PEP update traffic, we have to manually specify some sort of metadata node.

  387. Zash

    Account level subscriptions + MAM? :)

  388. Syndace

    And I think we agreed that it's not worth the effort given that the device list node is generally rather small

  389. Syndace

    I don't think a hard dependency on MAM is a good idea just for that

  390. Syndace

    how does account level subscription work? you subscribe using '60 instead of +notify and then you receive updates as messages that are stored in MAM while you're offline?

  391. Zash

    Syndace, you subscribe your account, notifications are sent there and could /in theory/ be forked (instead of the origin sending to each +notify) and MAM'd

  392. Zash

    In practice those notifications will just be discarded because type=headline

  393. Syndace

    ah, right

  394. Zash

    Could be solved by some future magic XEP probably

  395. Zash

    IM NG might help actually

  396. Zash

    Also possible to configure notifications to be sent as type=chat, but that's a node config, not subscription option :(

  397. Syndace

    meh

  398. Zash

    More of these fancy things as subscription options would be awesome

  399. Zash

    So each subscriber decides

  400. flow

    Syndace, yes, but don't most xeps, including OMEMO, already specify how nodes should be configured? So I am not sure how meaningful the difference is in this case

  401. flow

    I think we should probably tackle this from two angles: configuring the node to not deliver payloads *and* inventing +notify-payloadless

  402. larma

    flow, how would you introduce +notify-payloadless to a federated network?

  403. Zash

    larma, haha .. :(

  404. larma

    well the problem is that it's not relying on your server to be updated, but on every server to be updated

  405. Zash

    larma, you do both +notify and +notify-payloadless and the receiver needs to ignore the former?

  406. flow

    larma, ^

  407. larma

    So as a client I get mixed responses, sometimes payloadless and sometimes not?

  408. larma

    depending on what the other end's server supports

  409. flow

    well, ideally the node is also configured to not deliver payloads

  410. flow

    I actually think that this should be enough

  411. Zash

    Alternatively, mandate local server magic

  412. Zash

    Your server could easily hash the content (/me laughs in xml c14n) and forward you the hash

  413. flow

    Zash, I don't think c14n is relevant here

  414. larma

    If we do local server magic, I'd rather go full server magic and do global pep versioning

  415. Zash

    Oh

  416. Zash

    I'm confusing the hash stuff with the payloadless stuff

  417. Zash

    nm me

  418. Zash

    Should be trivial for a server to strip the payload

  419. flow

    even there, do not think of it as a hash, but an Abstract (PubSub) Node Data ID

  420. flow

    for which it is true that if the node data changes, that abstract id changes too

  421. Zash

    I thought I saw some talk about hashing the payload data somehow

  422. flow

    but it is not important that "similar" xml leads to the same id

  423. flow

    in fact, if the data does not change but there is a new id, that would be fine too

  424. flow

    one could even implement that abstract (PubSub) Node Data ID as a counter

  425. flow

    i.e. it is the same id that xep395 would use

  426. Zash

    Anyways, if a server sees a pubsub notification that includes a payload but the client has set +notify-payloadless, it should be easy to strip the payload and forward the rest

  427. Zash

    IIRC payloadless notifications still include the id, so if you stick a magic hash there you should be golden

  428. flow

    why is sticking a magic hash in there important?

  429. flow

    couldn't you just use the, you know, item id?

  430. larma

    flow, if item id is just 'current' all the time, that's not very helpful

  431. flow

    larma, it doesn't have to be that way

  432. flow

    isn't 'current' just because it's a singleton node?

  433. Zash

    yes

  435. larma

    yeah, but then taking a hash instead of a random number is a good idea, because changing back and forth will result in the same id, so no unnecessary requests for those that were not online in between

  436. flow

    now the question is if it is possible to keep the singleton semantics but have different IDs for different items, which seems desirable anyway

  437. Zash

    yes, but you need max_items

  438. flow

    larma, it is a good idea without doubt, but it is not strictly required

  439. flow

    Zash, and we do not have max_items? or is there no issue?

  440. Zash

    I think we do, but there's some extra care involved

  441. Zash

    Older prosody doesn't, but also doesn't support >1 items so it's fine.

  442. flow

    so, use max_items=1, deliver_payloads=false, a service-generated item id, → $$$

  443. larma

    can't we just build mam for pubsub instead, maybe using some shorthand +notify-archive which will cause the server to automatically subscribe to nodes and deliver updates from the archive when connecting?

  444. Zash

    🥇️

  445. Zash

    larma, any question including the word "just" automatically gets "no" for an answer

  446. Zash

    It's never "just" anything :P

  447. larma

    😀

  448. flow

    life would be boring if things were that easy

  449. larma

    It's not about doing something that's easy

  450. larma

    It's rather about doing something that's meaningful

  451. Zash

    larma, pubsub+mam has been mentioned in the past as the glorious saviour of everything, but it's lacking in actual specification or somesuch

  452. Zash

    lacking in "how should it even work?"

  453. Zash

    I think MattJ had some stuff to say about it recently

  454. larma

    replacing `<item id='current'><devices><device id='1' /><device id='2' /></devices></item>` with `<item id='b5fec669aca110f1505934b1b08ce2351072d16b' />` isn't really a huge improvement IMO

  455. Zash

    How many phones and computers do normal people even have?

  456. larma

    Sure it's some improvement, but it still means O(n*m) traffic on connect (n = number of contacts, m = number of features that use pep)
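
A quick back-of-the-envelope version of that point, with made-up numbers:

```python
contacts = 100                 # n
pep_features = 4               # m: omemo, avatar, microblog, ...
bytes_per_notification = 250   # rough guess, even for a hash-only notification

print(contacts * pep_features * bytes_per_notification, "bytes on every connect")  # 100000
```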

  457. larma

    I always calculate with 3, but I feel it's probably more like 1.7 or something

  458. larma

    many don't even use their notebooks/desktops for messaging at all

  459. Zash

    I heard computer sales was picking up because everyone needed to work from home :)

  460. larma

    "I was sending SMS from my phone for the last 25 years, I'll continue to do so"

  461. lovetox

    that's what i said earlier: the whole idea makes it a bit more efficient, but in the grand scheme of things, where everybody stuffs everything into pep, it does not really matter

  462. lovetox

    but nevertheless it would be more elegant, and xeps would not need to define metadata nodes anymore

  463. Zash

    lovetox, have you heard of https://en.wikipedia.org/wiki/Jevons_paradox ? :)

  464. lovetox

    no i did not, but now i know

  465. Syndace

    we should not forget in the size comparison that there are labels

  466. Syndace

    and clients will probably set default labels of a few chars

  467. Zash

    Labels?

  468. Syndace

    optional labels (=strings) for devices

  469. Syndace

    to make it easier to identify keys

  470. Syndace

    e.g. Gajim could set "Gajim on Windows" for its default label