Re: (ASCEND) NAT, Pipe75, stops routing.
"Kevin A. Smith" <kevin@ascend.com> writes:
> In the meantime, the more information we get to TAC on specific problems, the
> quicker we'll get everyone up and running.
Kevin: Here are my observations of the NAT problems you've been
studying in this thread. I've spent significant time on customer sites
and on our network trying to nail down this problem as much as I can.
Routers affected: E1/V35 versions of P130, P220 at least
(I have not looked at the P75/P85).
Software: b.p13 (5.1Ap4, 5.1Ap9, 6.0.0),
ea.p22 (6.0b4, 6.0.4e5)
Environment: Pipeline 130/220 with approx. 20-30 users on the LAN,
connected via nailed V35 frame relay to a GRF400. The WAN interfaces
on the pipeline and the GRF share a /30 network, typically with a
private address range on the pipeline LAN and the FR address set so
that all of the private addresses get translated to a routable
public address.
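For illustration, the outbound translation amounts to something like
this (a Python sketch with invented addresses; a model of the idea
rather than Ascend's implementation):

    # Toy model of outbound NAPT: many private (addr, port) pairs
    # multiplexed onto one public address. Addresses are invented.
    PUBLIC_ADDR = "203.0.113.2"   # example routable FR-side address
    nat_map = {}                  # (priv_ip, priv_port) -> public port
    next_port = 10000

    def translate_out(priv_ip, priv_port):
        """Return the public (addr, port) for an outgoing packet,
        creating a dynamic NAT map entry on first use."""
        global next_port
        key = (priv_ip, priv_port)
        if key not in nat_map:
            nat_map[key] = next_port   # new dynamic map
            next_port += 1
        return (PUBLIC_ADDR, nat_map[key])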
Problem: The router regularly gets into a state where it is unable to
create any more outgoing dynamic NAT maps. Incoming static maps remain
operational, so unfortunately the monitoring software I have (which
periodically telnets to the router) is blissfully ignorant of the
problem.
Output from NAPT shows that the NAT table holds 500 entries: all are
in use, none are expired, and ~95% are TCP. AIUI the NAT map for a TCP
or UDP session idles out after 24 hours (with DNS lookups being a
special case). Whenever I've seen the problem, a good proportion of
the NAT maps in the table have a ttl of at least 23 hours (i.e.
they've only recently been created and won't die for another day).
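My mental model of the bookkeeping, for what it's worth (a guess
sketched in Python; I've no idea how the firmware actually does it):

    import time

    IDLE_TIMEOUT = 24 * 3600   # observed: maps idle out after 24h

    def refresh(entry):
        # Presumably any traffic on the session pushes the expiry out.
        entry["expires"] = time.time() + IDLE_TIMEOUT

    def ttl(entry):
        return entry["expires"] - time.time()

    # A table full of entries with ttl >= 23h therefore means they
    # were all created, or last carried traffic, within the past hour.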
When NATTOGGLE is on, attempting to establish another outgoing
session that requires a dynamic map to be set up results in the
pipeline logging errors (it seems to dump the current map).
A typical observation for me is that perhaps only 4 or 5 unique
source IP addresses occur in the NAT map, i.e. I'm seeing the pipeline
holding up to 80-100 maps per IP on the LAN.
Diagnosis: My users are managing to crash their Windoze boxes badly
while a large number of TCP sessions are open across the pipeline. The
pipeline never sees a RST or FIN for these TCP sessions and so is
forced to let the NAT maps idle out rather than deleting them. A
significant point in at least one of my cases is that almost all
outgoing connections are to the same host: an Irix web proxy server
(not that I'm trying to attribute blame, you understand :).
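The arithmetic fits my observations above: a handful of crashed
machines is enough to wedge the whole table for a day. Roughly (in
Python, numbers taken from what I've seen):

    # Back-of-envelope numbers, all taken from the observations above.
    table_size  = 500    # NAT map capacity, as reported by NAPT
    crashed_pcs = 5      # unique source IPs seen in the map
    maps_per_pc = 100    # stale TCP sessions left behind per crash

    stale = crashed_pcs * maps_per_pc   # = 500: the whole table
    # No RST/FIN ever arrives for these sessions, so each entry sits
    # for the full 24h idle timeout before its slot is freed.
    print(stale >= table_size)          # True -> no room for new maps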
Another possibility that strikes me is that the configuration of
Secure Access (some of our Pipes run a firewall as well as NAT -
although this seems fundamentally broken in 6.x at present :<) is
preventing NAT from seeing the connection closures it needs in order
to delete the NAT maps. I'm grasping at straws here! :)
It would be nice if the size of the NAT table were a parameter under
the NAT settings. It would also be nice if the idle timeout for the
NAT maps were a tweakable parameter. And of course an emergency 'flush
nat table' command on the diags screen would be useful.
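To make the request concrete, something along these lines (a sketch of
the knobs I'm asking for, not a claim about how the firmware is
structured):

    class NatTable:
        """Sketch of a NAT map table with the knobs I'd like:
        configurable size, configurable idle timeout, and a flush."""

        def __init__(self, max_entries=500, idle_timeout=24 * 3600):
            self.max_entries = max_entries    # today: fixed at 500
            self.idle_timeout = idle_timeout  # today: fixed at 24h
            self.entries = {}

        def flush(self):
            """The 'emergency' command for the diags screen."""
            self.entries.clear()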
I hope this helps TAC get this problem sorted.
Regards,
-- Adam.
++ Ascend Users Mailing List ++
To unsubscribe: send unsubscribe to ascend-users-request@bungi.com
To get FAQ'd: <http://www.nealis.net/ascend/faq>