Correct me if I'm missing something. The content filter screens 
requests, and if it sees something that is on your blocked list, it 
redirects the browser to a web page where the user can make a case for 
not blocking the site. And the problem is that some folks directly 
access the form to suggest some anatomically inappropriate actions or 
whatever.

If that's the case, and you have some access to the content filter, it 
seems like the easiest thing to do is to have the filter construct some 
form of authentication value that gets passed on to the .pl page. If 
the request doesn't carry a valid authentication token, the user gets a 
different page explaining that the form can't be accessed directly.

A crude/simple option would be to have the content filter do a simple 
encryption of the blocked site's URL and pass both the plain URL and 
the encrypted URL as parameters to the .pl page. The .pl page can then 
decrypt the encrypted value, and if it doesn't match the plain-text 
URL, send the user off to the 'invalid access' page. A smart user could 
pick up on what is going on and save off a URL/encrypted-URL pair so 
they can access the protected page anytime they want, but it should 
keep the riff-raff out. And of course you can always beef up the 
authentication mechanism if you want.
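To make the idea concrete, here's a rough sketch of that scheme in 
Python (the actual page is Perl, and the names here are made up for 
illustration). Since all you really need is tamper-evidence, not 
secrecy, a keyed hash (HMAC) does the same job as encrypt-and-compare 
with less fuss:

```python
import hmac
import hashlib

# Shared secret known to both the content filter and the form script.
# (Assumption: you control both ends and can deploy the same key.)
SECRET = b"change-me-to-something-random"

def sign_url(url):
    """Filter side: compute a token to pass along with the blocked URL."""
    return hmac.new(SECRET, url.encode("utf-8"), hashlib.sha256).hexdigest()

def token_is_valid(url, token):
    """Form side: recompute the token and compare in constant time."""
    return hmac.compare_digest(sign_url(url), token)

# The filter would redirect to something like:
#   /blocked-form.pl?url=<blocked-url>&token=<sign_url(blocked-url)>
token = sign_url("http://example.com/banned")
assert token_is_valid("http://example.com/banned", token)
assert not token_is_valid("http://example.com/other", token)
```

Same caveat as above: a saved URL/token pair keeps working forever. 
Folding a timestamp into the signed string and rejecting stale tokens 
would narrow that window if it ever becomes a problem.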

--rick



Todd Young wrote:

> I think people are missing the point.....
>
> This is a number of schools, with a network of "publicly" used 
> computers, at least public in the sense that any number of students in 
> the schools can access these computers. Unless the ".pl page" is 
> accessible to the "outside" world, filtering by IP would not solve the 
> problem. If the page is accessible from the outside world, then a 
> filter to allow only IPs within the school system would be partially 
> effective.
>
> I think the only way to solve the problem would be to implement a "log 
> on" standard across all of the computers at all of the schools 
> involved. Forcing the students to log on to use a computer would 
> provide a twofold solution. First, it would get them used to proper 
> computer security in a shared-PC environment. Second, it would allow 
> you to "track" mischievous behavior. This is not a perfect solution, 
> but I don't think there is a perfect solution.
>
> There is a catch. If a student fails to properly log out of their 
> session, someone could use that session to send the mischievous 
> messages. Even if a student didn't send the message, but failed to 
> properly log out, they could be reprimanded for not following proper 
> security standards.
>
> Once the message gets out that "you can be tracked down by your 
> login", students will be less likely to cause problems, AND more aware 
> of security measures that protect their "identity".
>
> Callum Lerwick wrote:
>
>>> I run a content filter at a number of schools. When a site is banned 
>>> the
>>> user gets a .pl page to fill out on my server explaining why they 
>>> think the
>>> site should not be blocked. I get an email of their comments each 
>>> time the
>>> form is submitted. Lately, some people with too much time on their 
>>> hands are
>>> bringing the page up from my web site and sending me some cute, simple
>>> minded messages. Is there something I can add to httpd.conf that 
>>> will only
>>> allow the page to be pulled up if it is requested from a specific IP or
>>> network?
>>
>>
>>
>> If it's a script to begin with, the cleanest thing would probably be to
>> just add some code to the script to ignore anyone coming from the wrong
>> IP. Dunno how to do it in Perl offhand, but the REMOTE_ADDR CGI variable
>> should be what you want...
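In Perl that's just a check against $ENV{'REMOTE_ADDR'}. The same idea 
sketched in Python (the allowed ranges are placeholders; substitute 
whatever networks the schools actually sit on):

```python
import os
from ipaddress import ip_address, ip_network

# Assumption: the schools' machines all come from these ranges.
ALLOWED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

def request_allowed(remote_addr):
    """True if the client address falls inside one of the allowed ranges."""
    try:
        addr = ip_address(remote_addr)
    except ValueError:
        return False  # garbage or missing address: reject
    return any(addr in net for net in ALLOWED_NETWORKS)

# In a CGI script the web server exposes the client IP as REMOTE_ADDR:
if not request_allowed(os.environ.get("REMOTE_ADDR", "")):
    print("Status: 403 Forbidden\n")
    print("This form can't be accessed directly.")
```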
>
>
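For what it's worth, the httpd.conf route the original question asked 
about also works without touching the script at all: Apache can 
restrict a single file by client address (filename and network here are 
placeholders; this is the 1.3/2.0-era Order/Deny/Allow syntax):

```
<Files "blocked-form.pl">
    Order deny,allow
    Deny from all
    Allow from 10.0.0.0/255.0.0.0
</Files>
```

That only helps if the outside world never legitimately needs the form, 
which sounds like the case here.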



_______________________________________________
TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
http://www.mn-linux.org tclug-list at mn-linux.org
https://mailman.real-time.com/mailman/listinfo/tclug-list