lovetoxhow likely is it that 2 different hash mechanisms produce the same hash
lovetoxlike entity caps its mostly sha-1
lovetoxbut theoretically you can also use something else
lovetoxright now i store the hash mechanism beside the hash
Link Mauvelovetox, XEP-0390 fixed this issue AFAIK.
lovetoxbut i wonder if i can just not store the hash mech
jonas’lovetox, it is unlikely. but it’s stupid to not store the mechanism, too.
jonas’it only costs a few bytes
Link MauveCan get down to a single byte if you convert the string into an enum.
Link MauveAssuming you know the mechanisms you can use.
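A minimal sketch of Link Mauve's single-byte idea, assuming a fixed registry of known mechanisms; the table and function names here are made up for illustration:

```python
import hashlib

# Hypothetical registry: store the hash mechanism as a one-byte enum
# prefix instead of a string like "sha-1" next to every cached hash.
MECHANISMS = {0x01: "sha-1", 0x02: "sha-256", 0x03: "sha3-256"}
MECH_IDS = {name: mid for mid, name in MECHANISMS.items()}

def encode_caps_key(mechanism: str, digest: bytes) -> bytes:
    """Pack mechanism + digest into a single compact cache key."""
    return bytes([MECH_IDS[mechanism]]) + digest

def decode_caps_key(key: bytes) -> tuple:
    return MECHANISMS[key[0]], key[1:]

digest = hashlib.sha1(b"example caps input").digest()
key = encode_caps_key("sha-1", digest)
assert decode_caps_key(key) == ("sha-1", digest)
```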
jonas’there was a thing...
lovetoxits not about storage cost
lovetoxi have a 115 cache
lovetoxwhich consists of hashmethod/hash -> discoinfo
lovetoxbut then there are entities we have to query and cache which provide no hash or have no presence at all
lovetoxlike for example a muc
lovetoxi need to also store these disco info data
lovetoxbut its hard to use the same cache
lovetoxbecause for muc i need a JID -> DiscoInfo kind of cache
lovetoxand then i need 2 different caches for discoinfo data .. somehow i dont like it
flowbut that's how it is
jonas’lovetox, in aioxmpp, we listen on presence info and prime the JID -> DiscoInfo cache from the hash -> discoinfo cache
flowor, are you thinking about a generic String→ disco#info cache?
jonas’that means that discoinfo lookups themselves only ever use the JID -> DiscoInfo cache
flowguess one could do that, but I personally wouldn't do so
jonas’which is pre-populated on the fly by the entitycaps component which listens for presence
flowespecially since you could persist the caps cache to disk, while the jid→disco#info cache is somewhat unreliable and should have a sensible timeout (smack uses 24h)
lovetoxah nice so when you need caps, you always access only the JID -> Disco info cache
lovetoxthats exactly what i was searching for
flowjonas’, does that jid→disco#info cache only include values obtained via caps?
flowor do you also put in results of actual disco#info requests?
jonas’in fact, the jid->disco#info cache is in reality a JID -> Future(disco#info) cache
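A rough sketch of the scheme jonas’ describes, with the JID cache holding futures so concurrent lookups coalesce; all names are illustrative, not aioxmpp's actual API:

```python
import asyncio

# Illustrative sketch (not aioxmpp's real API): disco#info lookups only
# ever hit the JID cache; the entitycaps component pre-populates it from
# the (mechanism, hash) cache whenever presence with caps arrives.
caps_cache = {}  # (mechanism, ver) -> disco#info
jid_cache = {}   # jid -> Future(disco#info)

def on_presence_with_caps(jid, mechanism, ver):
    info = caps_cache.get((mechanism, ver))
    if info is not None:
        fut = asyncio.get_running_loop().create_future()
        fut.set_result(info)
        jid_cache[jid] = fut

async def lookup(jid):
    if jid not in jid_cache:
        fut = asyncio.get_running_loop().create_future()
        jid_cache[jid] = fut
        # stand-in for sending a real disco#info IQ and awaiting the reply
        fut.set_result({"jid": jid, "queried": True})
    return await jid_cache[jid]

async def main():
    caps_cache[("sha-1", "abc=")] = {"features": ["urn:xmpp:mam:2"]}
    on_presence_with_caps("romeo@example.org/phone", "sha-1", "abc=")
    return await lookup("romeo@example.org/phone")

result = asyncio.run(main())
assert result == {"features": ["urn:xmpp:mam:2"]}
```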
lovetoxyes thats the goal flow, often i have to disco info instances who dont have presence
lovetoxand i save that in a cache and store to disk
jonas’lovetox, I’m not sure storing the jid->disco#info cache on disk is a good idea
flowstore to disk ephemeral data?
lovetoxfor example a transport which im not subscribed to, if i load my roster i want to know the identity type so i can display icons that match
jonas’right, but treat it as stale and look it up at some point if you’ve loaded it from disk
lovetoxof course, you have a last seen attr and periodically request again
lovetoxall disco info is stale the second you received it, so this has nothing to do with application restarts
jonas’sure, but the longer you wait, the staler it gets :)
flowplus with caps+presence you will get push notifications if disco#info changes
jonas’flow, not for MUC services for example
flowand can then react on that and invalide your cache etc
jonas’which is what he’s talking about
lovetoxflow we are talking specially about entitys that have no presence
flowI know. I just wanted to state that there is a difference between disco#info data with caps and that without
flowfor those cases like MUC, smack has a cache which expires its entries after 24h
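The 24h-expiry cache flow mentions could look roughly like this; a sketch, not Smack's actual implementation, and the injectable clock is just to make it testable:

```python
import time

# Sketch of an expiring JID -> disco#info cache for entries obtained
# without caps (e.g. MUC services); 24h TTL as mentioned above.
TTL = 24 * 60 * 60

class ExpiringCache:
    def __init__(self, ttl=TTL, clock=time.monotonic):
        self._ttl, self._clock, self._data = ttl, clock, {}

    def put(self, jid, info):
        self._data[jid] = (self._clock(), info)

    def get(self, jid):
        entry = self._data.get(jid)
        if entry is None:
            return None
        stored_at, info = entry
        if self._clock() - stored_at > self._ttl:
            del self._data[jid]  # stale: caller should re-disco
            return None
        return info

fake_now = [0.0]
cache = ExpiringCache(clock=lambda: fake_now[0])
cache.put("muc.example.org", {"identity": "conference"})
fake_now[0] = TTL + 1
assert cache.get("muc.example.org") is None  # expired after 24h
```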
flowbut actually, I am always pondering with that
flowassume we one day live in a world where the xmpp service address will host most services (which I think is desirable for the services where it is possible). and your service operator updates the service, then you will potentially only become aware of that new feature after 24h
flowstream:features to the rescue
jonas’it’d be nice to have 115/390-like push for services, too
flowwouldn't that be the case if you are subscribed to the presence?
lovetoxfor muc this somewhat exists
flowthey just have to add xep115/390 metadata
lovetoxits called notification of configuration change
lovetoxi always disco after it
lovetoxbecause the notification does not tell you what changed
jonas’flow, you’re assuming that all services expose presence
flowso it is probably not an issue of a spec filling a hole, but implementations just doing that
flowjonas’, well services need to know the subscribed entities to push 115/390 data to
jonas’that is true
jonas’I question whether that needs to be presence though
flowand I think there is nothing wrong with just re-using presence for that?
flowI can see the argument of a polluted roster
jonas’I think there’s some reason in not using presence for this at all
flowbut I wouldn't buy it
jonas’not for service-to-client, but for client-to-client presence, it quickly gets expensive to have all that stuff in the presence stanza
jonas’so I wonder whether more fine-grained notification models wouldn’t make more sense
flowhmm I'm sorry I can't follow, we were talking about services using caps to "push" disco#info to interested parties, and now you switch to c2s presences?
jonas’I’m questioning using presence for caps
jonas’because presence is overloaded
jonas’this is mostly relevant for c2s presences
jonas’(or, more specifically, for client-to-client presences)
flowso you basically want to re-open the old discussion between peter and phippo about presence floods? ;)
flowI mean clients do not change their disco#info feature often, do they?
jonas’that’s why *not* having this in presence would be good
flowplease elaborate, because I think having a client specific caps in presence is potentially the only thing sensible in presence these days
jonas’avatar hashes exist too
flowwell those should probably go in PEP
flowand openpgp is already in PEP
jonas’I still see it in presence stanzas
flow(if you use a modern XEP ;;)
jonas’just like avatar hashes
jonas’the question is, if we’ve gone through the effort to move this rarely-changing data out of <presence/>, shouldn’t we also move that other rarely-changing data (ecaps) out of presence?
jonas’what is the rationale for keeping it there?
flowok, so from a protocol PoV we are fine (at least in these areas), seems to be more of an implementation-is-missing issue
flowjonas’, ahh, I was not thinking that the frequency of change should be a criterion here
flowI was more thinking of "is this client specific" as criterion
flowI don't think you want different avatars for different clients
jonas’I was coming from the "rarely-changing data in a stanza which is often sent is a waste of bandwidth" angle
flow(course, if you ask enough people, some people will say they want this…)
jonas’yeah, the per-client-ness is an argument pro presence
jonas’though we’ve already had enough arguments for the case that per-client caps are rarely useful and most of the time you’ll need something like an aggregated caps over all clients of the user (both min and max aggregates)
jonas’and those caps could be distributed by the server in a non-presence vehicle and also contain full caps hashes for the individual clients which are currently online.
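The min/max aggregation jonas’ describes can be sketched as set intersection and union over the per-client feature sets; the feature URNs below are just example values:

```python
# Sketch of "aggregated caps": the min aggregate is what every online
# client supports (safe to rely on), the max aggregate is what at least
# one client supports (worth offering).
def aggregate_caps(per_client):
    sets = list(per_client.values())
    if not sets:
        return set(), set()
    return set.intersection(*sets), set.union(*sets)

clients = {
    "phone": {"urn:xmpp:receipts", "urn:xmpp:jingle:1"},
    "desktop": {"urn:xmpp:receipts", "urn:xmpp:mam:2"},
}
min_caps, max_caps = aggregate_caps(clients)
assert min_caps == {"urn:xmpp:receipts"}
assert "urn:xmpp:mam:2" in max_caps
```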
flowyes, but, even if per-client caps are rarely useful, which I do not know if I would agree, I do not see this as argument to remove them
flowof course, what we have discussed regarding per-account caps appears still desirable
flowand we should move towards that
jonas’even if per-client caps are useful (which they sometimes are, I agree), the question is whether they belong in presence
flowand maybe, just maybe, we will discover that per client caps are no longer needed, but then they will probably vanish automatically
flowyes, but I am not sure if this is the question we should answer right now
flowit appears as something deeply baked into the core of xmpp
jonas’not really, it’s in '115
jonas’that’s not that deep
flowand since they rarely change, i feel like it is not worth any effort getting rid of them
flowbut if you want to work on a spec which puts those into something else (PEP?), then please do
jonas’no, that they rarely change is a reason to move them out of presence
jonas’if they changed "sometimes", presence would be a good place. if they changed "often", presence would be a terrible place.
jonas’if they change "rarely", they are dead weight in most of the presence broadcasts which happen
jonas’(of course, if they change "often", they cause unnecessary presence broadcasts, which is arguably much worse)
jonas’flow, yeah, I’ve been pondering that for '390, which is why I take the opportunity to discuss this
flowahh you are not worried about caps triggering additional presence broadcasts, but the mere existence of caps in every presence
flowtbh i never thought of this as something of a heavy burden
jonas’it gets heavier when you introduce hash agility (like '390) and modern hash functions
flowI personally wouldn't invest time to improve here
jonas’stuff gets more and longer
flowtrue, but I do not think that we will change hashes often
jonas’I’m not so sure of that
flowsha1 has served us well for what? a decade or so?
jonas’and even if we don’t change them *often*, the transition period may well be a decade of sending two hashes with each presence
flowbut if I had to guess i'd say we see 4-5 caps variants per presence at most
jonas’which is quite a lot
flowwhich surely is not optimal, but something I could sleep with
flowjonas’, if you really want to reduce wire bytes invent an XML extension where the end element is always empty ;)
Syndacealright this is getting WAY too close to what we discussed for OMEMO just minutes ago, I have to jump in with something slightly off-topic
SyndaceWe have the problem that for OMEMO, you subscribe to a PEP node of each of the contacts you want to encrypt with. And then we're flooded with PEP updates on each connect, because PEP sends an update automatically on connect (right?). We were thinking about compressing stuff with EXI 😀
jonas’Syndace, EXI needs to be done on the stream level
flowSyndace, a common pattern is to split PEP data into a hash and the actual data
jonas’it would be cool if PEP services could do that
jonas’that would solve the race condition issues around that
SyndaceNah, EXI is just compression for XML, not talking about the EXI XEP but the EXI technology
flowthat way you only get the hashes on connect, which may already help a lot
jonas’Syndace, EXI generates binary output though, I'm not sure you don't lose 99% of the advantages if you have to wrap it in base64 again.
SyndaceYeah that would actually help a lot
flownot sure if this is possible in your case though, would need to have a closer look
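The metadata/data split flow describes could be sketched like this, with the PEP nodes modeled as a plain dict; the node names and storage layer are made up for illustration:

```python
import hashlib
import json

# Sketch of the split-into-hash-and-data pattern: the metadata node
# carries only a digest of the data node, so the on-connect push is a
# few bytes and the client fetches the data node only on mismatch.
nodes = {}                                    # server-side node contents
local_cache = {"digest": None, "data": None}  # client-side state

def publish(data):
    payload = json.dumps(data, sort_keys=True).encode()
    nodes["example:data"] = payload
    nodes["example:metadata"] = hashlib.sha256(payload).hexdigest()

def on_connect_push():
    """Returns True if the full data node had to be fetched."""
    digest = nodes["example:metadata"]
    if digest == local_cache["digest"]:
        return False                          # cache still valid, no fetch
    local_cache["digest"] = digest
    local_cache["data"] = json.loads(nodes["example:data"])
    return True

publish({"devices": [1, 2]})
assert on_connect_push() is True              # first connect: fetch
assert on_connect_push() is False             # unchanged: digest matched
```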
jonas’and if every client comes with an EXI implementation, we’re half way to being able to use EXI on c2s, which would be amazing
jonas’I hadn’t checked that far yet
jonas’I only checked if libxml supports it (which it doesn’t)
flowas much as I like EXI, I am sceptical if this is the right solution to your problem
Syndaceflow I think that should be possible for OMEMO device lists, I'll mention it as a possible solution, thanks 🙂
SyndaceEXI could be used to compress bodies in general too, not only for the PEP node content
jonas’we should really discuss letting PEP services hash the contents of nodes
SyndaceAnd we encrypt the bodies as binary data anyway (SCE), so we don't have to base64 stuff there
jonas’but that’d require c14n, and nobody wants to go near that in XML :)
flowSyndace, tell them the OpenPGP XEP says hello (that is where the idea came from)
Syndace...should really read the openpgp xep again 😀
flowjonas’> we should really discuss letting PEP services hash the contents of nodes
Syndaceyeah that would be awesome
flowa generic optional mechanism where the push only contains "a hash" would be nice
flowcould potentially be as easy as s/+notify/+notify-hash/
flowgreat, now I wanna write a new protoxep
flowwhen I actually wanted to go into the hammock
MartinYou can't do both? Write in the hammock?
jonas’flow, go ahead
Syndace> could potentially be as easy as s/+notify/+notify-hash/
damn that sounds soo good
larmaflow, why not versioning instead of hash
larmaafter all, a few hundred hash nodes that are the same as last connect also seems kind of wasted...
jonas’larma, what would the advantage be?
larmajonas’, well if I have >100 contacts, each of them use(d) omemo,avatar,microblog,... and when connecting I +notify-hash, I still receive a few hundred hashes. And often enough those are just the same hash as last time I connected.
larmaWith some versioning scheme it could be done such that I only get the changes since last connect
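The global-versioning idea larma sketches could look like this; it is purely hypothetical, nothing like it is currently specified:

```python
# Hypothetical sketch: the server keeps a monotonically increasing
# counter over all PEP changes the account can see; on connect the
# client sends the last version it saw and receives only newer items.
changes = []   # list of (version, node, payload)
version = 0

def record_change(node, payload):
    global version
    version += 1
    changes.append((version, node, payload))

def sync_since(client_version):
    return [c for c in changes if c[0] > client_version]

record_change("urn:xmpp:omemo:2:devices", {"devices": [1]})
record_change("urn:xmpp:avatar:metadata", {"hash": "ab12"})
assert sync_since(0) == changes   # fresh client gets everything
assert sync_since(2) == []        # up-to-date client gets nothing
```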
jonas’I thought versioning per node
jonas’where you wouldn’t win anything over hashes
jonas’versioning per contact or globally (by your local server) would win of course
larmatrue, yeah was considering global versioning
ZashUh, what have I missed here‽
jonas’Syndace, you do realize that that immediately causes a loop? :)
jonas’ah, no, not a loop
jonas’but still terrible things™
Syndaceno I don't actually?
larmaSyndace, that wouldn't work because what would you hash? after all you receive multiple different nodes based on that notify
jonas’Syndace, it’s part of your ecaps hash ;)
jonas’it doesn’t cause a loop -- that was a mistake on my side -- but it still does fun things. since everytime you receive a pep update, your ecaps hash would change and you’d have to reemit presence
Syndaceoh right, you +notify for the node and not for each contact you want notifications from
Syndacejonas' I wasn't thinking about updating the hash during runtime, just setting it to the last hash you saw before disconnecting last time. Only to avoid the one single automatic update that you receive on connect.
lovetoxhm are you aware that +notify-hash would change your disco info
lovetoxjust saying that would flood me with disco info requests every time something changes in pep
lovetoxand this brings me to the topic that +notify is really bad in disco info
lovetoxi cant deactivate a feature without changing my disco info
lovetoxbut on the other hand its really the most easy way to communicate to remote servers what you want hmm
Syndace"hash" means the word "hash" here, not any actual value. so you'd put "+notify-hash" in disco info exactly the same way you put "+notify" there now. so no disco info change every time something changes in pep.
lovetoxthen i missed something
lovetoxhow does the server know on what version i am?
SyndaceI don't think there even was consensus on doing versioning
Syndacejust a simple hash of the node content
Syndacenothing more, nothing less
lovetoxwhere is the hash of the node content
lovetoxsent with the pep notification?
Zashhash of what?
Syndacehash of the pep node content
Syndacewhatever flow thinks of 😀
Syndace> sent with the pep notification?
yeah. instead of getting the content in the notification, you get a hash of the content.
lovetoxso instead of the actual data i get hundreds of pep notifications that contain a hash
Syndacereduced bandwidth and if you need to know the actual content you can manually query
lovetoxand i have to query more if its not the hash that i have?
Syndaceyup, instead of 100 device lists, 100 hashes
ZashIsn't something like this in XEP-0060?
lovetoxyes thats already in there
Zashnotifications without payloads at least
lovetoxits called omit the payload
Syndacehow does that work with the first update you get when you connect to the server and +notify?
lovetoxyou get a notification just with the item-id without payload
lovetoxthe item-id could be your version or hash
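The item-id-as-hash idea above, sketched minimally (setting XML canonicalization concerns aside, which come up elsewhere in this discussion):

```python
import hashlib
import xml.etree.ElementTree as ET

# Sketch: if the item id is a hash of the device-list payload, a
# payloadless notification alone tells the client whether its cached
# copy is still current.
def item_id_for(payload_xml):
    # hash the serialized payload exactly as published
    return hashlib.sha1(payload_xml.encode()).hexdigest()

payload = "<devices><device id='1'/><device id='2'/></devices>"
notification = ET.fromstring(
    "<item id='%s'/>" % item_id_for(payload)  # no payload inside
)
cached_id = item_id_for(payload)
needs_fetch = notification.get("id") != cached_id
assert needs_fetch is False  # cached copy is current, nothing to query
```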
lovetoxbut really thats not worth it for something like a device list on omemo
ZashI've actually wondered why '84 doesn't just use payloadless notifications instead of a separate node
lovetoxthe payload is small anyway
Syndacelovetox well, the node can contain user-defined labels now
Syndaceso it can be a few times bigger than in legacy omemo
Syndacefor a few 100 contacts that adds up
Syndaceat least larma said that the device list notifications already make up a considerable portion of the traffic on connect
larmaIt's probably not the only thing, but it's definitely visible
larmahaven't actually calculated how much bytes it makes
lovetoxyou can reduce the payload
lovetoxbut this does not change the fact that pubsub in its current form is just not scaleable
lovetoxit works nice until you reach X users in your roster
lovetoxthen it becomes a burden
larmayou mean pep, not pubsub
lovetoxpep is a subset of pubsub
Syndacepayloadless notifications actually sound pretty cool, we'd have to set the item id to something with meaning though, like a hash of the content
Syndacewe use "current" at the moment
lovetoxthen you need to configure the node to max-items=1
Syndaceand do payloadless notifications work with PEP notifications?
lovetoxwhich we sidestep with "current" right now
lovetoxSyndace, its a node configuration, and you can configure the default node just to enable it
lovetoxbut this would probably break every client
Syndacesounds like something that might be a solution for OMEMO but I don't know enough about PEP/pubsub to push that idea forward
lovetoxwhat we really would need is a smart idea how we can avoid notifications all together if we already received it
lovetoxSyndace, you say solution like there is a problem
Zashserver-side device tracking?
lovetoxomemo and all other pep based xeps work fine
lovetoxits just not scaleable indefinitely
Zashit doesn't have to tho, humans don't scale that well either
Syndace"problem" is a strong word, but e.g. we don't put the ik into the device list because it's too big, so you have to query the bundle manually for every new device.
lovetoxopenpgp puts the ik into the notification
Syndaceand if everybody sets a huge label for all of their devices, you'll notice the traffic probably
lovetoxso its not like it isnt already done
lovetoxif the payload gets too big, you do what the other xeps do
lovetoxadd a metadata node
lovetoxthat tells you only the current version
Syndaceisn't that exactly what the hash approach would do?
lovetoxyes, my example can be implemented and works tomorrow
lovetoxyours needs support in X server implementations first
lovetoxand the result is the same, one is just more elegant
SyndaceI think we're drifting away
lovetoxwhy? its exactly what you want, you subscribe to a metadata node, it always contains only the last hash or version
lovetoxand you define in the xep, if the version or hash is outdated, you request the payload node
Syndaceyeah sure, we talked about that for OMEMO
SyndaceI don't know why we're reiterating it now
lovetoxok if im saying it now, i dont really see where the server could even help us here
SyndaceThe server could create the hash for us on-the-fly, without the need for an extra metadata node
lovetoxbut the extra node is on the server
lovetoxfor the client nothing changes
Syndacethe client has to update the metadata node though
lovetoxhe gets a notification with the hash, and requests afterwards a node
Syndacewhen it publishes something
lovetoxyeah true it has to publish 2 things
lovetoxinstead of one
lovetoxhardly worth a new xep and server impl though if you ask me :D
lovetoxits not like you publish daily devicelists
Syndaceif you do it manually, every XEP has to do it manually. If you can just subscribe to #notify-hash, every client can decide to do it without the XEP even mentioning the possibility
Syndace> its not like you publish daily devicelists
the problem is still the PEP update spam you get on connect
lovetoxyeah true, as i said its a bit more elegant
lovetoxSyndace, you also get a pep update spam with notify-hash
Syndaceyes, but (in many cases) less :)
lovetoxyes as it would be if you use a metadata node :)
Syndaceless as in less bytes, not fewer updates
lovetoxbut ok, if the server does it for us it's indeed elegant for the client
Syndaceyeah. And it's easier than payloadless (is it?), because we can keep using one item with id "current" and don't have to rely on max-items (why not?).
lovetoxthe problem with payloadless is its a configuration on the node
lovetoxso you have to get all servers to have this configured
lovetoxwhich does not make much sense in other cases
ZashHm, I wondered why something wasn't (also) a subscription option. Maybe this was it.
Syndacebut the device list is its own node, isn't it? so you could just set that for the device list?
ZashDid you not kill the 1+n node scheme?
lovetoxSyndace, node configuration is not nice from the client side
Syndacewe have two nodes, one with the device list and one with the bundles
lovetoxfirst you have to pull the node configuration, then you have to set a new one
lovetoxthen you have to publish
Syndacethe bundles used to be split into n
lovetoxthis is theoretically possible with publish_options, servers support this only partly
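For reference, the publish-options form being discussed looks like this per XEP-0060; here it is just built as a string and sanity-checked for illustration:

```python
import xml.etree.ElementTree as ET

# The publish-options fragment that would accompany a publish request;
# pubsub#max_items is widely honored here, other node options less so.
PUBLISH_OPTIONS = """
<publish-options xmlns='http://jabber.org/protocol/pubsub'>
  <x xmlns='jabber:x:data' type='submit'>
    <field var='FORM_TYPE' type='hidden'>
      <value>http://jabber.org/protocol/pubsub#publish-options</value>
    </field>
    <field var='pubsub#max_items'><value>1</value></field>
  </x>
</publish-options>
"""
form = ET.fromstring(PUBLISH_OPTIONS)
fields = {f.get("var") for f in form.iter("{jabber:x:data}field")}
assert "pubsub#max_items" in fields
```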
Zashsingle device id item or one per device?
Syndacetwo nodes in total
Zashlovetox, easily solvable, just make the Conversations Compliance checker cry loudly about it
Syndaceah items, yeah one item with the list
Syndacelovetox we already require setting pubsub#max_items
Syndaceso might also require the other thing
SyndaceZash, I think PEP only notifies about the newest item? That's why we want the whole list to be one item.
ZashAnother unshaved yak :/
lovetoxSyndace, max items is supported by publish options on most servers
lovetoxother node configurations are not
ZashIf you could somehow ensure that you get all of the items, it'd be cleaner
Syndace(the Why was @Zash)
lovetoxbut yeah if the option is not set, its not bad
lovetoxthen you get the payload
ZashAnd then you could use retractions to indicate device removal
lovetoxretractions are not sent on coming online
lovetoxthe one thing per item approach is good for stuff on your own server
flowso yes pubsub#deliver_payloads would be the way to go, the wiki page has a note about that feature being not discoverable though
Syndacecool! thanks for the link.
flowI think I had the split metadata and data scheme in mind because that is what works with any minimal PEP implementation
flowZash, searching for deliver_payloads in 84 yields no results
Zashit has split metadata and data tho
flowand since I don't have every detail of every protocol in mind, I would appreciate hearing what exactly
flowI also looked into my notes and found a todo item regarding OK about deliver_payloads
flowSyndace> payloadless notifications actually sound pretty cool, we'd have to set the item id to something with meaning though,
Do you really have to set the ID explicitly? Often it is enough to go with the pubsub service generated one
flowsoo good news, I don't have to write a protoxep, xmpp already provides what we need, we just have to implement it in services and clients
flowand I can go in my hammock
flowZash, actually I wonder if that split should be declared an anti pattern
lovetoxbefore you consider something an anti pattern you should at least provide a different approach to reach the same goal
Zashflow: Mmmm, borderline. I personally think the (old) OMEMO thing with 1+n nodes was worse. But if it works, it gets deployed.
Syndaceflow actually there is a small but meaningful difference between +notify-hash and payloadless: payloadless has to be configured on the node while +notify-hash can be used on any node if the client wants to
Syndace+notify-payloadless would be amazing too
ZashYou could invent that
ZashWould be easier if it was implemented as a subscription option tho :/
Zashand specced as one
Syndaceand payloadless should probably be made optional with a disco feature to reflect the current state of server implementations
ZashIs there a feature for it?
Syndace> the feature is not discoverable, most likely because it appears to be mandatory by XEP-0060
ZashIs there a feature for `pubsub#deliver_payloads` I mean
Syndaceif https://xmpp.org/registrar/disco-features.html is the list of features then no, can't find anything for "deliver_payloads"
SyndaceAnyway, the situation is quite clear, we can't rely on any of that for OMEMO. If we want to reduce the on-connect PEP update traffic, we have to manually specify some sort of metadata node.
ZashAccount level subscriptions + MAM? :)
SyndaceAnd I think we agreed that it's not worth the effort given that the device list node is rather small generally
SyndaceI don't think a hard dependency on MAM is a good idea just for that
Syndacehow does account level subscription work? you subscribe using '60 instead of +notify and then you receive updates as messages that are stored in MAM while you're offline?
ZashSyndace, you subscribe your account, notifications are sent there and could /in theory/ be forked (instead of the origin sending to each +notify) and MAM'd
ZashIn practice those notifications will just be discarded because type=headline
ZashCould be solved by some future magic XEP probably
ZashIM NG might help actually
ZashAlso possible to configure notifications to be sent as type=chat, but that's a node config, not subscription option :(
ZashMore of these fancy things as subscription options would be awesome
ZashSo each subscriber decides
flowSyndace, yes, but don't most xeps, including OMEMO, already specify how nodes should be configured? so I am not sure how meaningful the difference is in this case
flowI think we should probably tackle this from two angles: configuring the node to not deliver payloads *and* invent +notify-payloadless
larmaflow, how would you introduce +notify-payloadless to a federated network?
Zashlarma, haha .. :(
larmawell the problem is that it's not relying on your server to be updated, but on every server to be updated
Zashlarma, you do both +notify and +notify-payloadless and the receiver needs to ignore the former?
larmaSo as a client I get mixed responses, sometimes payloadless and sometimes not?
larmadepending on what the other ends servers supported
flowwell ideally the node is also configured to not deliver payloads
flowI actually think that this should be enough
ZashAlternatively, mandate local server magic
ZashYour server could easily hash the content (/me laughs in xml c14n) and forward you the hash
flowZash, I don't think c14n is relevant here
larmaIf we do local server magic, I'd rather go full server magic and do global pep versioning
ZashI'm confusing the hash stuff with the payloadless stuff
ZashShould be trivial for a server to strip the payload
floweven there, do not think of it as a hash, but an Abstract (PubSub) Node Data ID
flowfor which it is true that if the node data changes, that abstract id changes too
ZashI thought I saw some talk about hashing the payload data somehow
flowbut it is not important that "similar" xml leads to the same id
flowin fact, if the data does not change, but there is a new id, that would be fine too
flowone could even implement that abstract (PubSub) Node Data ID as counter
flowi.e. it is the same id that xep395 would use
ZashAnyways, a server seeing a pubsub notification that includes a payload, but the client has set +notify-payloadless then it should be easy to strip the payload and forward the rest
ZashIIRC payloadless notifications still include the id, so if you stick a magic hash there you should be golden
flowwhy is sticking a magic hash in there important?
flowcouldn't you just use the, you know, item id?
larmaflow, if item id is just 'current' all the time, that's not very helpful
flowlarma, it does not have to be that way
flowisn't 'current' just because singleton node?
larmayeah, but then taking a hash instead of a random number is a good idea, because changing back and forth will result in same id so no unnecessary requests for those that were not online in between
flownow the question is if it is possible to keep the singleton semantic but have different IDs for different items, which seems desirable anyway
Zashyes, but you need max_items
flowlarma, it is a good idea without doubt, but it is not strictly required
flowZash, and we do not have max_items? or is there no issue?
ZashI think we do, but there's some extra care involved
ZashOlder prosody doesn't, but also doesn't support >1 items so it's fine.
flowso, use max_items=1, deliver_payloads=false, service generated item id, → $$$
larmacan't we just build mam for pubsub instead, maybe using some shorthand +notify-archive which will cause the server to automatically subscribe to nodes and deliver updates from the archive when connecting?
Zashlarma, any question including the word "just" automatically get "no" for an answer
ZashIt's never "just" anything :P
flowlife would be boring if things were that easy
larmaIt's not about doing something that's easy
larmaIt's rather about doing something that's meaningful
Zashlarma, pubsub+mam has been mentioned in the past as the glorious saviour of everything, but it's lacking in actual specification or somesuch
Zashlacking in "how should it even work?"
ZashI think MattJ had some stuff to say about it recently
`<item id='current'><devices><device id='1' /><device id='2' /></devices></item>` with
`<item id='b5fec669aca110f1505934b1b08ce2351072d16b' />` isn't really a huge improvement IMO
ZashHow many phones and computers do normal people even have?
larmaSure it's some improvement, but it still means O(n*m) traffic on connect (n = number of contacts, m = number of features that use pep)
larmaI always calculate with 3, but I feel it's probably rather 1.7 or something
larmamany don't even use their notebooks/desktops for messaging at all
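A back-of-envelope version of larma's O(n*m) estimate from above; every number here is an assumption, not a measurement:

```python
# Rough on-connect traffic estimate for PEP pushes: n contacts times
# m PEP features, comparing full payloads against hash-only item ids.
contacts = 100              # n: roster size
pep_features = 4            # m: omemo, avatar, microblog, ...
full_payload_bytes = 600    # assumed size of one notification with payload
hash_only_bytes = 90        # assumed size with just an item-id hash

full = contacts * pep_features * full_payload_bytes
hashed = contacts * pep_features * hash_only_bytes
assert full == 240_000
assert hashed == 36_000     # same number of stanzas, 15% of the bytes
```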
ZashI heard computer sales was picking up because everyone needed to work from home :)
larma"I was sending SMS from my phone for the last 25 years, I'll continue to do so"
lovetoxthats what i said earlier, the whole idea makes it a bit more efficient, but in the whole great scheme of things, where everybody stuffs all into pep, it does not really matter
lovetoxbut nevertheless it would be more elegant, and xeps would not need to define metadata nodes anymore
Zashlovetox, have you heard of https://en.wikipedia.org/wiki/Jevons_paradox ? :)
lovetoxno i did not, but now i know
Syndaceshould not forget in the size comparison that there are labels
Syndaceand clients will probably set default labels of a few chars
Syndaceoptional labels (=strings) for devices
Syndaceto make it easier to identify keys
Syndacee.g. Gajim could set "Gajim on Windows" for its default label