[Prism54-devel] islpci_mgt_transaction and stability
Denis Vlasenko
vda at port.imtp.ilyichevsk.odessa.ua
Sun Aug 8 20:32:09 UTC 2004
Hi Luis, folks,
I am doing some stability testing. I instrumented the
driver to report each mgt transaction:
int
islpci_mgt_transaction(struct net_device *ndev,
int operation, unsigned long oid,
void *senddata, int sendlen,
struct islpci_mgmtframe **recvframe)
{
...
/* oid is unsigned long, so use %08lx, not %08x */
printk(KERN_DEBUG "islpci_mgt_transaction(op %d, oid %08lx, sendlen %d)\n",
operation, oid, sendlen
);
I have two prism54 cards in an STA<->AP setup, and I positioned them so that
the link is bad and they frequently lose association. Which is easy to do;
the hardware is rather crappy RF-wise ;). Then I fired a UDP flood in both directions.
Typical throughput numbers can be found below the sig.
But I noticed one interesting thing. Whenever the link is restored,
I see a lone "islpci_mgt_transaction(op 0, oid 10000001, sendlen 6)"
in syslog. What can that be?
oid_mgt.c:
OID_STRUCT_C(DOT11_OID_BSSID, 0x10000001, u8[6], OID_TYPE_RAW),
I wonder, is this the result of a (re)assoc having happened?
Or maybe it is the other way around, and this transaction kicked the hw so
that the hw noticed the problem and reassociated?
This OID is referenced in only three places, two of which are ioctls;
I did not run anything that could invoke them (like iwconfig).
The only one left is this:
void
prism54_update_stats(islpci_private *priv) {
...
/* First get the MAC address of the AP we are associated with. */
mgt_get_request(priv, DOT11_OID_BSSID, 0, NULL, &r);
prism54_update_stats() is a workqueue function:
/* initialize workqueue's */
INIT_WORK(&priv->stats_work,
(void (*)(void *)) prism54_update_stats, priv);
and the work is scheduled only from:
struct iw_statistics *
prism54_get_wireless_stats(struct net_device *ndev) {
....
/* Update our wireless stats, but do not schedule too often
 * (max 1 HZ) */
if ((priv->stats_timestamp == 0) ||
time_after(jiffies, priv->stats_timestamp + 1 * HZ)) {
schedule_work(&priv->stats_work);
priv->stats_timestamp = jiffies;
}
which is called via this pointer:
ndev->get_wireless_stats = &prism54_get_wireless_stats;
Strange. I could swear I did not run anything that might call that.
Moreover, I can sort of prove it:
Aug 8 22:15:59 shadow : [SSSSSS....] x 21k p 31 m 71M if 2056 544 lo 1728 1728
Aug 8 22:16:06 shadow kernel: islpci_mgt_transaction(op 0, oid 10000001, sendlen 6)
Aug 8 22:16:36 shadow last message repeated 2 times
Aug 8 22:17:34 shadow last message repeated 8 times
Aug 8 22:17:54 shadow last message repeated 4 times
Aug 8 22:17:59 shadow : [SSSS......] x 24k p 4 m 71M if 2056 544 lo 1152 1152
No tool asks for the BSSID alone. For example, iwconfig queries ~20 OIDs
at once.
But I digress. My point is: maybe polling this OID helps the
hardware recover from the catatonic states it seems to like
falling into?
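If that guess is right, a crude experiment would be to issue the same read from a re-arming work item instead of waiting for something to call get_wireless_stats(). A rough, untested sketch against the driver's existing helpers (the bssid_poll_work field is an assumed new member of islpci_private):

```c
/* Hypothetical: poll DOT11_OID_BSSID once per second so the "kick"
 * happens even when no userspace tool queries wireless stats. */
static void
prism54_bssid_poll(void *data)
{
	islpci_private *priv = data;
	union oid_res_t r;

	/* Same harmless read prism54_update_stats() does */
	mgt_get_request(priv, DOT11_OID_BSSID, 0, NULL, &r);

	/* re-arm; bssid_poll_work would be initialized with
	 * INIT_WORK(&priv->bssid_poll_work, prism54_bssid_poll, priv) */
	schedule_delayed_work(&priv->bssid_poll_work, 1 * HZ);
}
```

Whether the read itself nudges the firmware, or the reassociation merely triggers the read, is exactly what this would help distinguish.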
Okay, I shall attach the throughput log, send this mail,
and go investigate the AP, which has committed an act of sabotage:
it timed out trying to write oid ff020003 (OID_U32_C OID_INL_MODE), and:
ifp: resetting device...
ifp: uploading firmware...
default.hotplug[18239]: invoke /usr/app/hotplug-2004_01_05/script/firmware.agent ()
firmware.agent[18239]: /usr/app/hotplug-2004_01_05/firmware/isl3890 found, loading it into /sys//class/firm
default.hotplug[18254]: invoke /usr/app/hotplug-2004_01_05/script/firmware.agent ()
ifp: islpci_upload_fw: success
ifp: got 'reset complete' IRQ
but nothing more. This should have followed, but didn't:
islpci_mgt_transaction(op 1, oid ff020008, sendlen 4)
...
islpci_mgt_transaction(op 0, oid 00000000, sendlen 6)
islpci_reset_if: success
islpci_reset: success
Crap.
--
vda
22:32:26 cpu [SSSSSSSSSS] ifp 0 42
^^^^^^^^ ^^^^^^^^^^^^^^^^ ^^^ ^^^ ^^^
time load iface in out (in bytes)
STA lost assoc:
22:32:27 cpu [SSSSSSSSSS] ifp 0 42
22:32:28 cpu [SSSSSSSSSS] ifp 0 42
22:32:29 cpu [SSSSSSSSSS] ifp 0 42
22:32:30 cpu [SSSSSSSSSS] ifp 0 42
22:32:31 cpu [SSSSSSSSSU] ifp 0 42
22:32:32 cpu [SSSSSSSSSS] ifp 0 42
22:32:33 cpu [SSSSSSSSSS] ifp 0 42
22:32:34 cpu [SSSSSSSSSS] ifp 0 42
associated, UDPs are fired:
22:32:35 cpu [SSSSS.....] ifp 60 47k
22:32:36 cpu [SSS.......] ifp 60 21k
22:32:37 cpu [S.........] ifp 0 41k
22:32:38 cpu [SU........] ifp 108k 43k
22:32:39 cpu [..........] ifp 252k 21k
22:32:40 cpu [..........] ifp 277k 0
22:32:41 cpu [..........] ifp 278k 65k
22:32:42 cpu [..........] ifp 226k 37k
22:32:43 cpu [SS........] ifp 256k 44k
22:32:44 cpu [..........] ifp 284k 56k
22:32:45 cpu [..........] ifp 211k 84k
22:32:46 cpu [..........] ifp 267k 46k
22:32:47 cpu [..........] ifp 342k 25k
22:32:48 cpu [S.........] ifp 271k 56k
22:32:49 cpu [S.........] ifp 250k 17k
22:32:50 cpu [..........] ifp 185k 73k
22:32:51 cpu [..........] ifp 267k 9238
22:32:52 cpu [..........] ifp 260k 43k
22:32:53 cpu [..........] ifp 204k 13k
22:32:54 cpu [S.........] ifp 60k 89k
22:32:55 cpu [..........] ifp 99k 55k
22:32:56 cpu [..........] ifp 102k 70k
22:32:57 cpu [..........] ifp 93k 72k
22:32:58 cpu [..........] ifp 70k 61k
22:32:59 cpu [SS........] ifp 70k 76k
22:33:00 cpu [..........] ifp 113k 43k
22:33:01 cpu [..........] ifp 83k 68k
22:33:02 cpu [..........] ifp 99k 51k
22:33:03 cpu [..........] ifp 92k 63k
22:33:04 cpu [SS........] ifp 101k 67k
22:33:05 cpu [..........] ifp 97k 85k
22:33:06 cpu [..........] ifp 102k 42k
22:33:07 cpu [..........] ifp 99k 64k
22:33:08 cpu [..........] ifp 88k 72k
22:33:09 cpu [S.........] ifp 98k 93k
22:33:10 cpu [..........] ifp 70k 75k
22:33:11 cpu [..........] ifp 68k 84k
22:33:12 cpu [..........] ifp 70k 82k
22:33:13 cpu [..........] ifp 22k 32k
trouble with assoc:
22:33:14 cpu [SS........] ifp 0 578k
22:33:15 cpu [..........] ifp 0 0
22:33:16 cpu [S.........] ifp 0 332k
22:33:17 cpu [..........] ifp 0 0
22:33:18 cpu [..........] ifp 0 0
22:33:19 cpu [SSU.......] ifp 0 1.9M
22:33:20 cpu [SS........] ifp 0 6.0M
22:33:21 cpu [SSS.......] ifp 0 6.5M
22:33:22 cpu [SSS.......] ifp 0 4.8M
22:33:23 cpu [SSSS......] ifp 0 1.9M
22:33:24 cpu [SSSSU.....] ifp 0 3.7M
22:33:25 cpu [SSS.......] ifp 0 6.5M
22:33:26 cpu [SS........] ifp 0 6.6M
22:33:27 cpu [SS........] ifp 0 6.5M
22:33:28 cpu [SS........] ifp 0 6.6M
22:33:29 cpu [SS........] ifp 0 6.5M
22:33:30 cpu [SSSS......] ifp 0 6.6M
22:33:31 cpu [SS........] ifp 0 6.5M
22:33:32 cpu [SS........] ifp 0 6.5M
22:33:33 cpu [S.........] ifp 0 1.3M
22:33:34 cpu [..........] ifp 0 393k
arp cache expired, UDP stopped, arps are sent:
22:33:35 cpu [S.........] ifp 0 0
22:33:36 cpu [..........] ifp 0 0
22:33:37 cpu [SSSSSSS...] ifp 0 42
22:33:38 cpu [SSSSSSSSSS] ifp 0 42
22:33:39 cpu [SSSSSSSSSS] ifp 0 42
22:33:40 cpu [SSSSSSSSSS] ifp 0 42
22:33:41 cpu [SSSSSSSSSS] ifp 0 42
22:33:42 cpu [SSSSSSSSSS] ifp 0 42
22:33:43 cpu [SSSSSSSSSS] ifp 0 42
22:33:44 cpu [SSSSSSSSSS] ifp 0 42
22:33:45 cpu [SSSSSSSSSS] ifp 0 42
22:33:46 cpu [SSSSSSSSSS] ifp 0 42
22:33:47 cpu [SSSSSSSSSS] ifp 0 42
associated:
22:33:48 cpu [SSSSSS....] ifp 9978 68k
22:33:49 cpu [SUUUUUUUU.] ifp 100k 91k
22:33:50 cpu [SSSUUUUUUD] ifp 98k 79k
22:33:51 cpu [SSSSSSSSSS] ifp 83k 95k
22:33:52 cpu [SSSSSSSSSU] ifp 109k 79k
22:33:53 cpu [SSSSSUUUU.] ifp 120k 49k
22:33:54 cpu [..........] ifp 92k 69k
22:33:55 cpu [SU........] ifp 58k 73k
22:33:56 cpu [SSU.......] ifp 86k 64k
22:33:57 cpu [..........] ifp 103k 62k
22:33:58 cpu [..........] ifp 73k 64k
22:33:59 cpu [..........] ifp 91k 60k
22:34:00 cpu [SUU.......] ifp 157k 44k
22:34:01 cpu [SU........] ifp 128k 16k
trouble again:
22:34:02 cpu [SSSSSSSSU.] ifp 0 0
22:34:03 cpu [SSSSUUU...] ifp 0 0
22:34:04 cpu [SUUUDDD...] ifp 0 640k
22:34:05 cpu [SU........] ifp 0 495k
22:34:06 cpu [SSU.......] ifp 0 0
22:34:07 cpu [U.........] ifp 0 5376
22:34:08 cpu [SU........] ifp 0 327k
22:34:09 cpu [..........] ifp 0 14k
22:34:10 cpu [..........] ifp 0 264k
22:34:11 cpu [S.........] ifp 0 0
22:34:12 cpu [..........] ifp 0 0
22:34:13 cpu [..........] ifp 0 375k
22:34:14 cpu [..........] ifp 0 0
22:34:15 cpu [..........] ifp 0 0
22:34:16 cpu [SS........] ifp 0 461k
22:34:17 cpu [..........] ifp 0 0