Ascend Archive

Re: (ASCEND) MAX's & Multiple login control





On Sun, 4 Jan 1998, Norm wrote:

> >>
> >>BTW:  Whoever it was that wanted to stop multiple logons, check out:
> >>http://www.nettally.com/sentry
> >>
> 
> I played with this. It was interesting. The author was very responsive
> for about a week and fixed things daily, but since then I have yet to get
> a reply to any email from him. He hasn't made any changes to the software
> since then either (it still has some major bugs). I'm very glad we didn't
> spend the (yes, overpriced) money on it.
> 
> Norm
> 
> >One of my guys here has offered to do something similar for
> >Linux/FreeBSD at much less than US$249, which quite honestly I feel is
> >close to being a ripoff.

The way we do it involves some fairly involved RADIUS patching: we keep a
utmp in dbm form on our RADIUS accounting server.  We prefix each session
ID with the hex value of the NAS's IP address, then maintain a
start-session and a stop-session dbm with all the session IDs we receive,
so we don't duplicate entries when we get duplicate start/stop packets.
With accounting checkpointing every 3 minutes, we get updates on bytes
transferred, data rate, etc.  It's been a lot of work (mine and Matt
Dwyer's here at Dreamscape, who did a lot of the dbm work), but we both
like digging into the RADIUS code, so it continually grows.  To spot
multiple logins, we just list our utmp of connected sessions and run it
through a tiny perl script that prints the lines with multiple entries per
userid.  Since it's driven by RADIUS updates, we don't have to do any
extra work -- we always have an up-to-date utmp.
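The dedupe-by-session-ID idea can be sketched roughly like this (the class
and field names are my own for illustration only -- the real code lives
inside a patched RADIUS server and persists to dbm files, not in-memory
dicts):

```python
def make_key(nas_ip: str, session_id: str) -> str:
    """Prefix the session ID with the hex value of the NAS IP,
    so IDs from different MAXes can't collide."""
    hex_ip = "".join("%02x" % int(octet) for octet in nas_ip.split("."))
    return hex_ip + session_id

class Utmp:
    """In-memory stand-in for the dbm-backed utmp described above."""

    def __init__(self):
        self.sessions = {}       # key -> user currently online
        self.seen_start = set()  # dedupe duplicate Start packets
        self.seen_stop = set()   # dedupe duplicate Stop packets

    def start(self, nas_ip, session_id, user):
        key = make_key(nas_ip, session_id)
        if key in self.seen_start:   # duplicate Start packet -- ignore
            return
        self.seen_start.add(key)
        self.sessions[key] = user

    def stop(self, nas_ip, session_id):
        key = make_key(nas_ip, session_id)
        if key in self.seen_stop:    # duplicate Stop packet -- ignore
            return
        self.seen_stop.add(key)
        self.sessions.pop(key, None)

    def multiple_logins(self):
        """The job of the tiny perl script: users with >1 live session."""
        counts = {}
        for user in self.sessions.values():
            counts[user] = counts.get(user, 0) + 1
        return {u: n for u, n in counts.items() if n > 1}
```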
The only problem is when a MAX crashes without sending RADIUS Stop
packets -- but our lines hunt in longest-idle-first mode, so when that
happens, it doesn't take too long for the stale entries to be overwritten
by real ones.

We also run a Postgres SQL database server and have hacked in some code to
use it, but it isn't yet an integral part of our RADIUS server.  Probably
the best way would be to store all this information in an SQL server; then
you could construct some pretty impressive SQL queries to list online
users, multiply-logged-in users, total online time, etc.

We also keep a wtmp-like log, which is just a flat file of raw structures
holding the relevant Stop-packet data (about 54 bytes per logout; it's
300MB and goes back to August '97), and we've developed utilities to walk
this log and selectively display information by user or port.  But that
doesn't scale well -- I've had to rewrite our display utility a couple of
times to deal with performance issues (it cruises by pretty fast now).
It's also fairly well customized to our own conventions for port
designations, so it wouldn't transfer too well to another system.
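To give a flavor of the SQL approach, here's the kind of query I mean,
sketched against Python's sqlite3 purely for illustration -- the table
layout is invented, not our actual schema, and we use Postgres, not
sqlite:

```python
import sqlite3

# Hypothetical session table -- column names are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sessions (
    user     TEXT,
    nas_ip   TEXT,
    port     INTEGER,
    start_t  INTEGER,    -- unix time of the Start packet
    stop_t   INTEGER)    -- NULL until the Stop packet arrives
""")
con.executemany("INSERT INTO sessions VALUES (?,?,?,?,?)", [
    ("alice", "10.0.0.2", 1, 1000, 1600),
    ("alice", "10.0.0.2", 2, 1200, None),  # still online
    ("alice", "10.0.0.3", 7, 1300, None),  # still online -- multiple login
    ("bob",   "10.0.0.2", 3, 1100, None),
])

# Who is online right now?
online = con.execute(
    "SELECT user FROM sessions WHERE stop_t IS NULL ORDER BY user"
).fetchall()

# Who is logged in more than once?
multi = con.execute("""SELECT user, COUNT(*) FROM sessions
                       WHERE stop_t IS NULL
                       GROUP BY user HAVING COUNT(*) > 1""").fetchall()

# Total online time from completed sessions, per user.
totals = con.execute("""SELECT user, SUM(stop_t - start_t) FROM sessions
                        WHERE stop_t IS NOT NULL
                        GROUP BY user""").fetchall()
```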
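The wtmp-style flat file is just fixed-size records read back off disk.
The 54-byte layout below is a made-up stand-in (our actual record format
is site-specific), but it shows the technique:

```python
import struct

# Invented 54-byte record: user, NAS IP, port, start, stop, padding.
# This is NOT the real Dreamscape layout -- illustration only.
REC = struct.Struct("<16s4sHII24s")
assert REC.size == 54

def records(path):
    """Yield (user, port, start, stop) for every logout record."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(REC.size)
            if len(chunk) < REC.size:
                break
            user, _ip, port, start, stop, _pad = REC.unpack(chunk)
            yield user.rstrip(b"\0").decode(), port, start, stop

def by_user(path, who):
    """Selectively display sessions for one user."""
    return [r for r in records(path) if r[0] == who]
```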

This is for maintaining a 9,000 user system spanning about 12 cities.

Personally, I don't think $250 is worth it.  Call me self-interested, but
the best way (IMHO) to handle multiple logins is not to enforce a hard
limit in the server, but to use the RADIUS server to maintain a utmp, scan
it once every 5 minutes for multiple logins, log what you find, then go
through the log once a day and email anyone with more than a certain
number of logged multiple-login entries.  (We once found an account that a
doctor had shared with his entire staff -- logged in over seven ports at
once -- and had a nice talk with him on the phone about what $18.95/month
really gets him.)  But then again, maybe I'm just too nice to people.
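The once-a-day pass over that log is trivial -- something along these
lines, with the threshold value and log format invented for illustration:

```python
from collections import Counter

# Each 5-minute scan appends one "user count" line per multiply-logged-in
# user.  At day's end, flag anyone seen more than THRESHOLD times.
# THRESHOLD is a made-up example value, not our actual policy.
THRESHOLD = 12   # i.e. over an hour's worth of 5-minute hits

def users_to_email(log_lines, threshold=THRESHOLD):
    """Return users whose multiple-login count exceeds the threshold."""
    hits = Counter()
    for line in log_lines:
        user, _count = line.split()
        hits[user] += 1
    return sorted(u for u, n in hits.items() if n > threshold)
```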

-Will
willp@dreamscape.com

P.S.  If you're interested in the source to the web utilities I've got--
it's still in the works...  Hang tight.

++ Ascend Users Mailing List ++
To unsubscribe:	send unsubscribe to ascend-users-request@bungi.com
To get FAQ'd:	<http://www.nealis.net/ascend/faq>

