I understand it for specific IQs, where I want to give the user a response within a timeframe
lovetox
Zash, it can't be forever, because on a non-SM-resume reconnect all callbacks are invalidated anyway
lovetox
and it's bound to happen
lovetox
hm, that gets me thinking, IQ requests are not really bound to a session
Zash
To the full JID, ish.
lovetox
ah yeah
lovetox
that was it
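(A minimal sketch of the invalidation lovetox describes, assuming an asyncio-style Python client; the class and method names are invented for illustration and are not taken from Gajim or any real library:)

```python
import asyncio


class PendingIQs:
    def __init__(self):
        # iq id -> future the caller awaits for the <iq type='result'/>
        self._pending: dict[str, asyncio.Future] = {}

    def register(self, iq_id: str) -> asyncio.Future:
        fut = asyncio.get_running_loop().create_future()
        self._pending[iq_id] = fut
        return fut

    def handle_reply(self, iq_id: str, stanza) -> None:
        fut = self._pending.pop(iq_id, None)
        if fut is not None and not fut.done():
            fut.set_result(stanza)

    def invalidate_all(self, reason: str = "stream closed") -> None:
        # On a reconnect that was not resumed via XEP-0198, the old
        # stream's IQ ids can never be answered, so fail every waiter
        # instead of keeping the callbacks around forever.
        for fut in self._pending.values():
            if not fut.done():
                fut.set_exception(ConnectionError(reason))
        self._pending.clear()
```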
lovetox has left
xecks has left
jonas’
lovetox, placing limits on things is always good
jonas’
unbounded memory consumption always bad
Link Mauve
lovetox, poezio previously didn't time out IQs, hoping that every other entity on the network respected this MUST in the spec and that no stanza would ever be lost over the network.
Link Mauve
But with 22k IQ handlers in flight due to remote entities not doing that, poezio got very slow.
Zash
Nobody said what time frame the reply must be returned in...
jonas’
particularly great when there’s a remote way to make poezio send a lot of IQs :)
jonas’
to a near-arbitrary address
jonas’
which can be made to blackhole stuff
Link Mauve
Now that we have a (IIRC) two-minute timeout, poezio stays fast for much longer.
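(A sketch of the per-IQ timeout Link Mauve mentions, again assuming asyncio; the 120 s default mirrors the two-minute figure above, and the names are illustrative only:)

```python
import asyncio


class TimedPendingIQs:
    def __init__(self, timeout: float = 120.0):
        self._timeout = timeout
        self._pending: dict[str, asyncio.Future] = {}

    def register(self, iq_id: str) -> asyncio.Future:
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self._pending[iq_id] = fut

        def expire() -> None:
            # No reply in time: drop the entry so the table cannot grow
            # without bound (the "22k handlers in flight" situation).
            if self._pending.pop(iq_id, None) is not None and not fut.done():
                fut.set_exception(TimeoutError(f"no reply to iq {iq_id}"))

        loop.call_later(self._timeout, expire)
        return fut

    def handle_reply(self, iq_id: str, stanza) -> None:
        fut = self._pending.pop(iq_id, None)
        if fut is not None and not fut.done():
            fut.set_result(stanza)
```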
DebXWoody has joined
Kev
> lovetox, placing limits on things is always good
You say that, but I have seen plenty of issues with different clients putting in arbitrary timeouts because they assume they'll always be used on the same sort of network connection as the author was using.
Kev
So, yes, not memory-exhausting yourself is good, but you have to be tremendously careful while doing it if you don't want to break things for someone.
jonas’
yes
Ge0rG
Kev: there is never the right default value.
Kev
Quite.
Ge0rG
Kev: you might have designed for a 75 bps military satlink, but I'm working on German mobile "broadband".
Kev
Although setting iq timeouts to something like 10 minutes is probably safe enough.
Kev
I have seen situations in which 5-minutes-ish wouldn't have been.
Zash
Due to Linux kernel TCP default timeouts, and the lack of Happy Eyeballs, it can take way longer than 2 minutes just to get s2s up.
Zash
~90 seconds per attempt or somesuch.
Kev
The real best default is "Whatever the maximum length of time is before I can't possibly avoid timing out without failure* locally"
Zash
Or you could set a memory budget instead. If whatever state you have takes more than x size, time out the oldest one.
Zash
And then like, don't disco#info everyone in those 10000 user MUCs
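(A sketch of Zash's budget idea, here using an entry count as a stand-in for a real memory budget; the names and the callback convention are invented for illustration:)

```python
from collections import OrderedDict


class BoundedPendingIQs:
    def __init__(self, max_entries: int = 10_000):
        self._max = max_entries
        # Insertion-ordered, so the first item is always the oldest request.
        self._pending: OrderedDict = OrderedDict()

    def register(self, iq_id: str, callback) -> None:
        self._pending[iq_id] = callback
        while len(self._pending) > self._max:
            old_id, old_cb = self._pending.popitem(last=False)
            # Over budget: treat the oldest outstanding request as timed out.
            old_cb(None)

    def handle_reply(self, iq_id: str, stanza) -> None:
        callback = self._pending.pop(iq_id, None)
        if callback is not None:
            callback(stanza)
```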
lovetox has joined
lovetox
memory consumption for a dict with iq id: callback?!
lovetox
not sure what machines you are on, but I could probably store a billion and it would still not really be noticeable
Zash
define "billion"
lovetox
ah a billion is probably too much
lovetox
but you get the idea
jonas’
it soon becomes more than just id: callback; the callback will often have some kind of closure associated to provide more context to the reply
Zash
2×64-bit pointers minimum per dict entry or so?
jonas’
also hashmap overhead
Zash
yes, hence "minimum"
xecks has joined
Zash
And 128 bit UUIDs
Zash
... usually encoded as 36 byte strings (+\0 and/or length)
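(A rough way to put numbers on this in CPython; the sizes are interpreter- and version-dependent and are only meant to make the per-entry overhead concrete:)

```python
import sys
import uuid

# Rough CPython numbers on a 64-bit machine; exact values vary by version.
key = str(uuid.uuid4())        # 36-character UUID string
print(sys.getsizeof(key))      # ~85 bytes for the str object alone
print(sys.getsizeof({}))       # ~64 bytes for an empty dict

table = {str(uuid.uuid4()): object() for _ in range(100_000)}
print(sys.getsizeof(table))    # hash table itself: a few MB for 100k entries
# On top of that, each entry keeps its key string and whatever the callback
# or closure references alive, so "a billion" would hurt, while a few
# thousand pending IQs really are negligible.
```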