
Loothing

A modern loot council addon for WoW 12.0+ that enables guild masters and officers to manage, distribute, and track raid loot through collaborative council voting, powered by the Loolib framework.

File Details

Loothing v2.0.26

  • Type: Release
  • Uploaded: Apr 22, 2026
  • Size: 990.30 KB
  • Downloads: 27
  • Game Versions: 12.0.5+1
  • Flavor: Retail

File Name

loothing_2.0.26.zip

Supported Versions

  • 12.0.5
  • 12.0.1

The big comm-hardening pass prompted by a real raid where the queue backed up to ~700 messages, 80% of session-data broadcasts were dropped, and the addon still thought it was DISCONNECTED after leaving the raid into a residual party. This release rebuilds the comm layer to live inside WoW's actual send budget instead of fighting it, collapses the ML broadcast storm that caused the pressure in the first place, and fixes the stuck-DISCONNECTED transition that stranded queued messages forever.

Fixed

  • No more "stuck DISCONNECTED" after leaving a raid into a party. When you left a 20-man raid that converted back into a pre-existing 4-man party, GROUP_LEFT was firing and the comm state machine unconditionally flipped to DISCONNECTED — but GROUP_JOINED never fired for the still-present party, so the state never recovered. Every outbound send was silently dropped, the queue filled with stale raid-addressed messages, and /lt diag showed Comm State: DISCONNECTED while In Group: true. OnGroupLeft now checks IsInGroup() before demoting and stays CONNECTED when a group still exists, and a new GROUP_ROSTER_UPDATE safety net recovers from any event-ordering race that still lands us in DISCONNECTED while in a group. When we do go truly disconnected, the Loolib transport queue is flushed for our prefix so 277 stale party-addressed messages don't sit forever waiting for a recipient that left. (A sketch of the demotion guard and safety net follows this list.)
  • ML broadcast storm is gone. A single 20-man raid session was sending 31× COUNCIL_ROSTER + 31× MLDB + 31× SESSION_START + 31× OBSERVER_ROSTER — ≈120 burst broadcasts of whole-state messages where only the most recent value matters. Every GROUP_ROSTER_UPDATE (fired frequently in a raid) was re-sending all four, and every sync request from a reloading raider triggered another cycle. Each broadcast helper now dirty-checks its payload against the last send: the MLDB broadcast serializes settings and skips the send if the bytes match the prior broadcast, council / observer rosters hash their contents and skip when unchanged, and SESSION_START keys off the current sessionID. New members joining mid-raid still receive state because the GROUP_ROSTER_UPDATE-driven rebroadcast path now passes an explicit force=true through all four helpers. (A sketch of the dirty-check pattern follows this list.)
  • Session-data drops during raid sessions. Under heavy queue pressure the existing backpressure logic was downgrading NORMAL to BULK and dropping BULK entirely, which killed 126 of 158 session-data sends (an 80% drop rate) in the reference incident. With the storm gone, pressure stays low; session-data, votes, autopass responses, and item additions no longer get starved behind a queue stuffed with redundant broadcasts.
  • Autopass not firing in raid. Autopass is sent as PLAYER_RESPONSE, which is a critical priority — but when the outbound queue was saturated with the broadcast storm, even critical messages were waiting minutes to reach the wire. With the storm eliminated and the queue living in WoW's actual send budget, autopass responses now flow at the rate the game expects.
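
The demotion guard and roster-update safety net amount to only a few lines. Below is a minimal sketch of the idea, using illustrative names (Comm, SetState, OnGroupLeft, FlushQueue) rather than the addon's actual identifiers; IsInGroup(), CreateFrame, and GROUP_ROSTER_UPDATE are standard WoW API.

```lua
-- Hedged sketch: names and structure are illustrative, not Loothing's real code.
local Comm = { state = "CONNECTED" }

function Comm:SetState(newState)
    self.state = newState
end

function Comm:FlushQueue()
    -- Placeholder: the real code asks the transport layer to drop queued
    -- sends registered under this addon's prefix.
end

function Comm:OnGroupLeft()
    -- A raid that collapses back into a pre-existing party still counts as
    -- grouped; only demote when no group remains at all.
    if IsInGroup() then
        return
    end
    self:SetState("DISCONNECTED")
    self:FlushQueue()
end

-- Safety net: recover from any event-ordering race that leaves the state
-- machine DISCONNECTED while a group is still present.
local watcher = CreateFrame("Frame")
watcher:RegisterEvent("GROUP_ROSTER_UPDATE")
watcher:SetScript("OnEvent", function()
    if IsInGroup() and Comm.state == "DISCONNECTED" then
        Comm:SetState("CONNECTED")
    end
end)
```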
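The dirty-check pattern behind the broadcast fix can be sketched as follows. Addon, ShouldSend, Serialize, and SendBroadcast are stand-in names for illustration only; the real helpers and serializer differ.

```lua
-- Hedged sketch of dirty-checked broadcasts with a force override.
local Addon = { mldb = {}, lastSent = {} }

function Addon:Serialize(tbl)
    -- Stand-in for the real serializer: stable, order-independent string form.
    local parts = {}
    for k, v in pairs(tbl) do parts[#parts + 1] = tostring(k) .. "=" .. tostring(v) end
    table.sort(parts)
    return table.concat(parts, ";")
end

function Addon:SendBroadcast(key, payload)
    -- Real code hands this to the comm layer; omitted here.
end

-- Returns true (and remembers the payload) only when the bytes differ from
-- the previous broadcast, or when the caller forces a resend.
function Addon:ShouldSend(key, serialized, force)
    if force or self.lastSent[key] ~= serialized then
        self.lastSent[key] = serialized
        return true
    end
    return false
end

function Addon:BroadcastMLDB(force)
    local payload = self:Serialize(self.mldb)
    if self:ShouldSend("MLDB", payload, force) then
        self:SendBroadcast("MLDB", payload)
    end
end

-- The GROUP_ROSTER_UPDATE rebroadcast path calls the helpers with force = true,
-- so members who join mid-raid still receive current state despite the check.
```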

Changed

  • Send rate now matches WoW's actual addon-channel budget (≈10 burst + 1 msg/sec). The prior byte-rate throttle at 800 B/s / 4 KB burst would allow small-message bursts of 15–20 messages into a channel that WoW would then throttle server-side. A new message-count token bucket (8 tokens burst, 1 token/sec refill, 2-token headroom for DBM / WA / BigWigs on the same shared channel) now paces sends regardless of message size. When the bucket is empty the drain loop yields; the bypass-hook for other addons' sends decrements the same bucket so we don't over-send into a channel the rest of the client is already using. The 500-item queue ceiling is kept as a sanity cap, but drain is now governed by the rate model, not the ceiling. (A sketch of the token bucket follows this list.)
  • Outbound-queue coalescing for idempotent whole-state messages. When a new MLDB_BROADCAST, COUNCIL_ROSTER, OBSERVER_ROSTER, HEARTBEAT, SESSION_INIT, or VERSION_RESPONSE is enqueued and an earlier send for the same (prefix, key) is still queued but has not yet begun transmitting, the earlier item is marked superseded and the drain skips it. Rapid-fire state pushes (common during settings edits, ML handoffs, roster churn) collapse in the queue instead of bursting onto the wire. Vote / response / lifecycle / batch / per-item messages are explicitly excluded from coalescing — those remain append-only. A Coalesced: N counter in /lt diag reports how many superseded items were dropped silently. (A sketch of the coalescing logic follows this list.)
  • Sync-request responses now broadcast to the group once when ≥2 raiders ask, instead of whispering each one. Post-reload "hey, what state are you in?" bursts used to trigger ML to whisper full SYNC_DATA to each requester — a 20-reload wave cost 20 whispered responses in ML's send budget. The responder-side coalesce window (already 2 s in code) now broadcasts once to the group; non-requesters drop the payload at the syncInProgress guard on receive. Single-requester case still whispers, preserving targeted semantics.
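
One plausible shape for the message-count token bucket (8-token burst, 1 token/sec refill, 2-token headroom reserved for other addons on the shared channel) is sketched below. Bucket and its methods are hypothetical names; the headroom reading here, refusing to spend the last two tokens ourselves, is an assumption. GetTime() is the standard WoW clock.

```lua
-- Hedged sketch of a message-count token bucket; not the addon's actual code.
local Bucket = {
    capacity = 8,          -- burst size in messages
    tokens   = 8,
    refill   = 1,          -- tokens regained per second
    headroom = 2,          -- assumed reading: keep this many tokens free for DBM/WA/BigWigs
    lastFill = GetTime(),
}

function Bucket:Refill()
    local now = GetTime()
    self.tokens = math.min(self.capacity, self.tokens + (now - self.lastFill) * self.refill)
    self.lastFill = now
end

-- Called by the drain loop before each of our own sends.
function Bucket:TrySend()
    self:Refill()
    if self.tokens >= 1 + self.headroom then
        self.tokens = self.tokens - 1
        return true
    end
    return false   -- drain loop yields and retries on the next tick
end

-- Called from the bypass hook when another addon sends on the channel,
-- so our pacing accounts for traffic we didn't originate.
function Bucket:NoteExternalSend()
    self:Refill()
    self.tokens = math.max(0, self.tokens - 1)
end
```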
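The coalescing rule, supersede an earlier queued send for the same (prefix, key) that has not yet started transmitting, might look roughly like the sketch below. Enqueue, DrainOne, and the queue/pending tables are illustrative, not Loolib's actual transport API.

```lua
-- Hedged sketch of queue coalescing for idempotent whole-state messages.
local COALESCABLE = {
    MLDB_BROADCAST = true, COUNCIL_ROSTER = true, OBSERVER_ROSTER = true,
    HEARTBEAT = true, SESSION_INIT = true, VERSION_RESPONSE = true,
    -- Votes, responses, lifecycle, batch, and per-item messages are
    -- deliberately absent: those stay append-only.
}

local queue, pending, coalesced = {}, {}, 0   -- coalesced feeds the /lt diag counter

local function Enqueue(prefix, key, payload)
    local item = { prefix = prefix, key = key, payload = payload }
    if COALESCABLE[key] then
        local id = prefix .. ":" .. key
        local earlier = pending[id]
        -- Supersede an earlier queued send for the same (prefix, key) that
        -- has not begun transmitting; only the latest state matters.
        if earlier and not earlier.transmitting then
            earlier.superseded = true
            coalesced = coalesced + 1
        end
        pending[id] = item
    end
    queue[#queue + 1] = item
end

local function DrainOne(send)
    while #queue > 0 do
        local item = table.remove(queue, 1)
        if not item.superseded then
            item.transmitting = true
            send(item)
            return true
        end
        -- Superseded items are skipped silently and only show up in the counter.
    end
    return false
end
```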

Added

  • Decode-error ring buffer in /lt diag. Previously the diagnostic dump showed Protocol Errors: 6 (checksum: 0, decode: 6) with no way to tell what failed. A new bounded ring buffer (last 20 decode failures) records timestamp, sender, reason (empty/too_short/decompress/checksum/deserialize), byte count, and distribution channel for every failed Protocol:Decode. The dump now includes a Recent Decode Errors section when the ring has entries, so the next time decode failures spike the cause is visible instead of just the count. (A sketch of the ring buffer follows this list.)
  • Send-budget visibility in /lt diag. A new Send Budget: X.X/8 tokens (coalesced: N) line reports current token-bucket state and coalesce-skip counter, so it's obvious at a glance when the rate limiter is constraining sends vs. when everything's flowing.
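
A bounded ring buffer like the one described takes only a handful of lines. The sketch below uses illustrative names (DecodeErrors, Record, Report) and assumes the reason strings listed above; date() is the standard WoW time formatter.

```lua
-- Hedged sketch of a 20-entry decode-failure ring buffer; not the addon's real code.
local DecodeErrors = { size = 20, head = 0, count = 0, entries = {} }

function DecodeErrors:Record(sender, reason, bytes, channel)
    self.head = (self.head % self.size) + 1   -- advance and overwrite the oldest slot
    self.entries[self.head] = {
        when    = date("%H:%M:%S"),
        sender  = sender,
        reason  = reason,     -- empty / too_short / decompress / checksum / deserialize
        bytes   = bytes,
        channel = channel,
    }
    self.count = math.min(self.count + 1, self.size)
end

-- Emits the "Recent Decode Errors" section of /lt diag, newest first, only
-- when the ring has entries.
function DecodeErrors:Report(out)
    if self.count == 0 then return end
    out("Recent Decode Errors:")
    for i = 1, self.count do
        local slot = ((self.head - i) % self.size) + 1
        local e = self.entries[slot]
        out(("  %s %s %s (%d bytes, %s)"):format(e.when, e.sender, e.reason, e.bytes, e.channel))
    end
end
```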