You may need to set up the http_access option to allow requests from your IP addresses. Please see the Access Controls section for information about that.
If squid is in httpd-accelerator mode, it will accept normal HTTP requests and forward them to a HTTP server, but it will not honor proxy requests. If you want your cache to also accept proxy-HTTP requests then you must enable this feature:
httpd_accel_with_proxy on

Alternately, you may have misconfigured one of your ACLs. Check the access.log and squid.conf files for clues.
I can't get local_domain
to work; Squid is caching the objects from my local servers.
The local_domain
directive does not prevent local
objects from being cached. It prevents the use of sibling caches
when fetching local objects. If you want to prevent objects from
being cached, use the cache_stoplist
or http_stop
configuration options (depending on your version).
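As a sketch, a squid.conf fragment that keeps local objects out of the cache might look like this (cache_stoplist matches substrings of the URL; the directive name varies between Squid versions, and example.com is a placeholder for your own domain):

```
# Do not cache objects whose URL contains our local domain
# (hypothetical domain; substitute your own).
cache_stoplist example.com
```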
I get Connection Refused when the cache tries to retrieve an object located on a sibling, even though the sibling thinks it delivered the object to my cache.
If the HTTP port number is wrong but the ICP port is correct you
will send ICP queries correctly and the ICP replies will fool your
cache into thinking the configuration is correct but large objects
will fail since you don't have the correct HTTP port for the sibling
in your squid.conf file. If your sibling changed their
http_port, you could have this problem for some time
before noticing.
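As a sketch, the sibling's HTTP port and ICP port are the first and second numbers on the cache_peer line, and both must match what the sibling actually uses (sibling.example.com is a placeholder):

```
# squid.conf:  hostname             type    http_port icp_port
cache_peer sibling.example.com sibling 3128      3130
```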
If you see the Too many open files
error message, you
are most likely running out of file descriptors. This may be due
to running Squid on an operating system with a low filedescriptor
limit. This limit is often configurable in the kernel or with
other system tuning tools. There are two ways to run out of file
descriptors: first, you can hit the per-process limit on file
descriptors. Second, you can hit the system limit on total file
descriptors for all processes.
Linux kernel 2.2.12 and later supports an "unlimited" number of open files without patching. So does most of glibc-2.1.1 and later (all areas touched by Squid are safe from what I can tell, even more so in later glibc releases). But you still need to take some action, as the kernel defaults to allowing processes only 1024 filedescriptors, and Squid picks up the limit at build time.
Alternatively, if running things as root is not an option, get your sysadmin to install the needed ulimit command in /etc/initscript (see man initscript), install a patched kernel where INR_OPEN in include/linux/fs.h is changed to at least the amount you need, or have them install a small suid program which sets the limit (see link below).
More information can be found in Henrik's How to get many filedescriptors on Linux 2.2.X and later page.
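As a hedged sketch, you can raise the limit in the shell that builds and starts Squid so that the configure script sees the higher value (8192 is an arbitrary example; raising the hard limit requires root):

```shell
# Raise the hard and soft file-descriptor limits for this shell, then
# show the limit Squid's configure script will pick up. Needs root for
# the hard limit; prints a note and continues otherwise.
ulimit -HSn 8192 2>/dev/null || echo "could not raise hard limit (not root?)"
ulimit -n
```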
Add the following to your /etc/system file and reboot to increase your maximum file descriptors per process:
set rlim_fd_max = 4096
Next you should re-run the configure script
in the top directory so that it finds the new value.
If it does not find the new limit, then you might try
editing include/autoconf.h and setting
#define DEFAULT_FD_SETSIZE
by hand. Note that
include/autoconf.h is created from autoconf.h.in
every time you run configure. Thus, if you edit it by
hand, you might lose your changes later on.
Jens-S. Voeckler advises that you should NOT change the default soft limit (rlim_fd_cur) to anything larger than 256. It will break other programs, such as the license manager needed for the SUN workshop compiler. Jens-S. also says that it should be safe to raise the limit for the Squid process as high as 16,384, except that there may be problems during reconfigure or logrotate if all of the lower 256 filedescriptors are in use at the time of the rotate/reconfigure.
Do sysctl -a
and look for the value of
kern.maxfilesperproc
.
sysctl -w kern.maxfiles=XXXX
sysctl -w kern.maxfilesperproc=XXXX

Warning: You probably want maxfiles > maxfilesperproc if you're going to be pushing the limit.

I don't think there is a formal upper limit inside the kernel. All the data structures are dynamically allocated. In practice there might be unintended metaphenomena (kernel spending too much time searching tables, for example).
For most BSD-derived systems (SunOS, 4.4BSD, OpenBSD, FreeBSD, NetBSD, BSD/OS, 386BSD, Ultrix) you can also use the ``brute force'' method to increase these values in the kernel (requires a kernel rebuild):
Do pstat -T
and look for the files
value, typically expressed as the ratio of current/maximum.
One way is to increase the value of the maxusers
variable
in the kernel configuration file and build a new kernel. This method
is quick and easy but also has the effect of increasing a wide variety of
other variables that you may not need or want increased.
Another way is to find the param.c file in your kernel
build area and change the arithmetic behind the relationship between
maxusers
and the maximum number of open files.
Change the value of nfile
in /usr/kvm/sys/conf.common/param.c by altering this equation:

int nfile = 16 * (NPROC + 16 + MAXUSERS) / 10 + 64;

Where
NPROC
is defined by:
#define NPROC (10 + 16 * MAXUSERS)
Very similar to SunOS, edit /usr/src/sys/conf/param.c
and alter the relationship between maxusers
and the
maxfiles
and maxfilesperproc
variables:
int maxfiles = NPROC*2;
int maxfilesperproc = NPROC*2;

Where
NPROC
is defined by:
#define NPROC (20 + 16 * MAXUSERS)
The per-process limit can also be adjusted directly in the kernel
configuration file with the following directive:
options OPEN_MAX=128
Edit /usr/src/sys/conf/param.c
and adjust the
maxfiles
math here:
int maxfiles = 3 * (NPROC + MAXUSERS) + 80;

Where
NPROC
is defined by:
#define NPROC (20 + 16 * MAXUSERS)
You should also set the OPEN_MAX
value in your kernel
configuration file to change the per-process limit.
NOTE: After you rebuild/reconfigure your kernel with more filedescriptors, you must then recompile Squid. Squid's configure script determines how many filedescriptors are available, so you must make sure the configure script runs again as well. For example:
cd squid-1.1.x
make realclean
./configure --prefix=/usr/local/squid
make
For example:
97/01/23 22:31:10| Removed 1 of 9 objects from bucket 3913
97/01/23 22:33:10| Removed 1 of 5 objects from bucket 4315
97/01/23 22:35:40| Removed 1 of 14 objects from bucket 6391
These log entries are normal, and do not indicate that squid has
reached cache_swap_high
.
Consult your cache information page in cachemgr.cgi for a line like this:
Storage LRU Expiration Age: 364.01 days
Objects which have not been used for that amount of time are removed as
a part of the regular maintenance. You can set an upper limit on the
LRU Expiration Age
value with reference_age
in the config
file.
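A minimal squid.conf sketch, assuming you want to cap the LRU expiration age at one month:

```
# Remove objects not referenced in the last month, regardless of
# how slowly the cache is filling.
reference_age 1 month
```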
Why, yes you can! Select the following menus:
This will bring up a box with icons for your various services. One of them should be a little ftp ``folder.'' Double click on this.
You will then have to select the server (there should only be one) Select that and then choose ``Properties'' from the menu and choose the ``directories'' tab along the top.
There will be an option at the bottom saying ``Directory listing style.'' Choose the ``Unix'' type, not the ``MS-DOS'' type.
--Oskar Pearson <oskar@is.co.za>
You are receiving ICP MISSes (via UDP) from a parent or sibling cache whose IP address your cache does not know about. This may happen in two situations.
on your parent squid.conf:
udp_outgoing_address proxy.parent.com

on your squid.conf:
cache_peer proxy.parent.com parent 3128 3130
The standards for naming hosts ( RFC 952, RFC 1101) do not allow underscores in domain names:
A "name" (Net, Host, Gateway, or Domain name) is a text string up to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus sign (-), and period (.).

The resolver library that ships with recent versions of BIND enforces this restriction, returning an error for any host with an underscore in the hostname. The best solution is to complain to the hostmaster of the offending site, and ask them to rename their host.
See also the comp.protocols.tcp-ip.domains FAQ.
Some people have noticed that RFC 1033 implies that underscores are allowed. However, this is an informational RFC with a poorly chosen example, and not a standard by any means.
See the above question. The underscore character is not valid for hostnames.
Some DNS resolvers allow the underscore, so yes, the hostname might work fine when you don't use Squid.
To make Squid allow underscores in hostnames, re-run the configure script with this option:
% ./configure --enable-underscores ...

and then recompile:

% make clean
% make
The answer to this is somewhat complicated, so please hold on. NOTE: most of this text is taken from ICP and the Squid Web Cache.
An ICP query does not include any parent or sibling designation,
so the receiver really has no indication of how the peer
cache is configured to use it. This issue becomes important
when a cache is willing to serve cache hits to anyone, but only
handle cache misses for its paying users or customers. In other
words, whether or not to allow the request depends on if the
result is a hit or a miss. To accomplish this,
Squid acquired the miss_access
feature
in October of 1996.
The necessity of ``miss access'' makes life a little bit complicated,
and not only because it was awkward to implement. Miss access
means that the ICP query reply must be an extremely accurate prediction
of the result of a subsequent HTTP request. Ascertaining
this result is actually very hard, if not impossible to
do, since the ICP request cannot convey the
full HTTP request.
Additionally, there are more types of HTTP request results than there
are for ICP. The ICP query reply will either be a hit or miss.
However, the HTTP request might result in a ``304 Not Modified
'' reply
sent from the origin server. Such a reply is not strictly a hit since the peer
needed to forward a conditional request to the source. At the same time,
it's not strictly a miss either, since the local object data is still valid,
and the Not-Modified reply is quite small.
One serious problem for cache hierarchies is mismatched freshness parameters. Consider a cache C using ``strict'' freshness parameters so its users get maximally current data. C has a sibling S with less strict freshness parameters. When an object is requested at C, C might find that S already has the object via an ICP query and ICP HIT response. C then retrieves the object from S.
In an HTTP/1.0 world, C (and C's client) will receive an object that was never subject to its local freshness rules. Neither HTTP/1.0 nor ICP provides any way to ask only for objects less than a certain age. If the retrieved object is stale by C's rules, it will be removed from C's cache, but it will subsequently be fetched from S so long as it remains fresh there. This configuration miscoupling problem is a significant deterrent to establishing both parent and sibling relationships.
HTTP/1.1 provides numerous request headers to specify freshness
requirements, which actually introduces
a different problem for cache hierarchies: ICP
still does not include any age information, neither in query nor
reply. So S may return an ICP HIT if its
copy of the object is fresh by its configuration
parameters, but the subsequent HTTP request may result
in a cache miss due to any
Cache-control:
headers originated by C or by
C's client. Situations now emerge where the ICP reply
no longer matches the HTTP request result.
In the end, the fundamental problem is that the ICP query does not provide enough information to accurately predict whether the HTTP request will be a hit or miss. In fact, the current ICP Internet Draft is very vague on this subject. What does ICP HIT really mean? Does it mean ``I know a little about that URL and have some copy of the object?'' Or does it mean ``I have a valid copy of that object and you are allowed to get it from me?''
So, what can be done about this problem? We really need to change ICP so that freshness parameters are included. Until that happens, the members of a cache hierarchy have only two options to totally eliminate the ``access denied'' messages from sibling caches:
Make sure both caches have identical refresh_rules parameters.

Do not use miss_access at all. Promise your sibling cache administrator that your cache is properly configured and that you will not abuse their generosity. The sibling cache administrator can check his log files to make sure you are keeping your word.

This means that another process is already listening on port 8080 (or whatever you're using). It could mean that you have a Squid process already running, or it could be from another program. To verify, use the netstat command:
netstat -naf inet | grep LISTEN

That will show all sockets in the LISTEN state. You might also try

netstat -naf inet | grep 8080

If you find that some process has bound to your port, but you're not sure which process it is, you might be able to use the excellent lsof program. It will show you which processes own every open file descriptor on your system.
This means that the client socket was closed by the client
before Squid was finished sending data to it. Squid detects this
by trying to read(2)
some data from the socket. If the
read(2)
call fails, then Squid knows the socket has been
closed. Normally the read(2)
call returns ECONNRESET: Connection reset by peer
and these are NOT logged. Any other error messages (such as
EPIPE: Broken pipe) are logged to cache.log. See the ``intro'' of
section 2 of your Unix manual for a list of all error codes.
These are caused by misbehaving Web clients attempting to use persistent connections. Squid-1.1 does not support persistent connections.
Version 2.5 will support Microsoft NTLM authentication. However, there are some limits on our support: We cannot proxy connections to an origin server that uses NTLM authentication, but we can act as a web accelerator or proxy server and authenticate the client connection using NTLM.
We support NT4, Samba, and Windows 2000 Domain Controllers. For more information see winbind .
Why can't we proxy NTLM even though we can use it? Quoting from the summary at the end of the browser authentication section in this article:
In summary, Basic authentication does not require an implicit end-to-end state, and can therefore be used through a proxy server. Windows NT Challenge/Response authentication requires implicit end-to-end state and will not work through a proxy server.
Squid transparently passes the NTLM request and response headers between clients and servers. NTLM relies on a single end-to-end connection (possibly with men-in-the-middle, but a single connection every step of the way). This implies that for NTLM authentication to work at all with proxy caches, the proxy would need to tightly link the client-proxy and proxy-server links, as well as understand the state of the link at any one time. NTLM through a CONNECT might work, but as far as we know that hasn't been implemented by anyone, and it would prevent the pages being cached, removing the value of the proxy.
NTLM authentication is carried entirely inside the HTTP protocol, but is not a true HTTP authentication protocol and is different from Basic and Digest authentication in many ways.
The reason why it is not implemented in Netscape is probably:
This message was received at squid-bugs:
If you have only one parent, configured as:

cache_peer xxxx parent 3128 3130 no-query default

nothing is sent to the parent; neither UDP packets, nor TCP connections.
Simply adding default to a parent does not force all requests to be sent to that parent. The term default is perhaps a poor choice of words. A default parent is only used as a last resort. If the cache is able to make direct connections, direct will be preferred over default. If you want to force all requests to your parent cache(s), use the never_direct option:
acl all src 0.0.0.0/0.0.0.0
never_direct allow all
``Hot Mail'' is proxy-unfriendly and requires all requests to come from the same IP address. You can fix this by adding to your squid.conf:
hierarchy_stoplist hotmail.com
This is most likely because Squid is using more memory than it should be for your system. When the Squid process becomes large, it experiences a lot of paging. This will very rapidly degrade the performance of Squid. Memory usage is a complicated problem. There are a number of things to consider.
Then, examine the Cache Manager Info output and look at these two lines:
Number of HTTP requests received:  121104
Page faults with physical i/o:      16720

Note, if your system does not have the getrusage() function, then you will not see the page faults line.
Divide the number of page faults by the number of connections. In this case 16720/121104 = 0.14. Ideally this ratio should be in the 0.0 - 0.1 range. It may be acceptable to be in the 0.1 - 0.2 range. Above that, however, and you will most likely find that Squid's performance is unacceptably slow.
If the ratio is too high, you will need to make some changes to lower the amount of memory Squid uses.
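The arithmetic can be reproduced with a one-liner; the figures here are the hypothetical ones from the example output, so substitute your own counters:

```shell
requests=121104   # "Number of HTTP requests received" from the Info page
faults=16720      # "Page faults with physical i/o" from the Info page
# Page-fault ratio; below 0.1 is ideal, above 0.2 indicates trouble.
awk -v f="$faults" -v r="$requests" 'BEGIN { printf "%.2f\n", f / r }'
# prints 0.14
```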
See also How much memory do I need in my Squid server?.
This could be a permission problem. Does the Squid userid have permission to execute the dnsserver program?
You might also try testing dnsserver from the command line:
> echo oceana.nlanr.net | ./dnsserver

Should produce something like:
$name oceana.nlanr.net $h_name oceana.nlanr.net $h_len 4 $ipcount 1 132.249.40.200 $aliascount 0 $ttl 82067 $end
Bug reports for Squid should be registered in our bug database. Any bug report must include:
Please note that bug reports are only processed if they can be reproduced or identified in the current STABLE or development versions of Squid. If you are running an older version of Squid the first response will be to ask you to upgrade unless the developer who looks at your bug report immediately can identify that the bug also exists in the current versions. It should also be noted that any patches provided by the Squid developer team will be to the current STABLE version even if you run an older version.
There are two conditions under which squid will exit abnormally and generate a coredump. First, a SIGSEGV or SIGBUS signal will cause Squid to exit and dump core. Second, many functions include consistency checks. If one of those checks fail, Squid calls abort() to generate a core dump.
Many people report that Squid doesn't leave a coredump anywhere. This may be due to one of the following reasons:
# sysctl -w kern.sugid_coredump=1
Resource Limits: These limits can usually be changed in shell scripts. The command to change the resource limits is usually either limit or limits. Sometimes it is a shell-builtin function, and sometimes it is a regular program. Also note that you can set resource limits in the /etc/login.conf file on FreeBSD and maybe other BSD systems.
To change the coredumpsize limit you might use a command like:
limit coredumpsize unlimitedor
limits coredump unlimited
Debugging Symbols: To see if your Squid binary has debugging symbols, use this command:
% nm /usr/local/squid/bin/squid | head

The binary has debugging symbols if you see gobbledegook like this:

0812abec B AS_tree_head
080a7540 D AclMatchedName
080a73fc D ActionTable
080908a4 r B_BYTES_STR
080908bc r B_GBYTES_STR
080908ac r B_KBYTES_STR
080908b4 r B_MBYTES_STR
080a7550 D Biggest_FD
08097c0c R CacheDigestHashFuncCount
08098f00 r CcAttrs

There are no debugging symbols if you see this instead:

/usr/local/squid/bin/squid: no symbols

Debugging symbols may have been removed by your install program. If you look at the squid binary from the source directory, then it might have the debugging symbols.
Coredump Location: The core dump file will be left in one of the following locations:
2000/03/14 00:12:36| Set Current Directory to /usr/local/squid/cache

If you cannot find a core file, then either Squid does not have permission to write in its current directory, or perhaps your shell limits are preventing the core file from being written.
Often you can get a coredump if you run Squid from the command line like this (csh shells and clones):
% limit core un
% /usr/local/squid/bin/squid -NCd1
Once you have located the core dump file, use a debugger such as dbx or gdb to generate a stack trace:
tirana-wessels squid/src 270% gdb squid /T2/Cache/core
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.15.1 (hppa1.0-hp-hpux10.10), Copyright 1995 Free Software Foundation, Inc...
Core was generated by `squid'.
Program terminated with signal 6, Aborted.

[...]

(gdb) where
#0  0xc01277a8 in _kill ()
#1  0xc00b2944 in _raise ()
#2  0xc007bb08 in abort ()
#3  0x53f5c in __eprintf (string=0x7b037048 "", expression=0x5f <Address 0x5f out of bounds>, line=8, filename=0x6b <Address 0x6b out of bounds>)
#4  0x29828 in fd_open (fd=10918, type=3221514150, desc=0x95e4 "HTTP Request") at fd.c:71
#5  0x24f40 in comm_accept (fd=2063838200, peer=0x7b0390b0, me=0x6b) at comm.c:574
#6  0x23874 in httpAccept (sock=33, notused=0xc00467a6) at client_side.c:1691
#7  0x25510 in comm_select_incoming () at comm.c:784
#8  0x25954 in comm_select (sec=29) at comm.c:1052
#9  0x3b04c in main (argc=1073745368, argv=0x40000dd8) at main.c:671
If possible, you might keep the coredump file around for a day or two. It is often helpful if we can ask you to send additional debugger output, such as the contents of some variables. But please note that a core file is only useful if paired with the exact same binary that generated the corefile. If you recompile Squid then any coredumps from previous versions will be useless unless you have saved the corresponding Squid binaries, and any attempts to analyze such coredumps will most certainly give misleading information about the cause of the crash.
If you CANNOT get Squid to leave a core file for you, then one of the following approaches can be used.

The first alternative is to start Squid under the control of GDB:
% gdb /path/to/squid
handle SIGPIPE pass nostop noprint
run -DNYCd3
[wait for crash]
backtrace
quit
The drawback from the above is that it isn't really suitable to run on a production system as Squid then won't restart automatically if it crashes. The good news is that it is fully possible to automate the process above to automatically get the stack trace and then restart Squid. Here is a short automated script that should work:
#!/bin/sh
trap "rm -f $$.gdb" 0
cat <<EOF >$$.gdb
handle SIGPIPE pass nostop noprint
run -DNYCd3
backtrace
quit
EOF
while sleep 2; do
  gdb -x $$.gdb /path/to/squid 2>&1 | tee -a squid.out
done
Other options, if the above cannot be done, are to:

a) Build Squid with the --enable-stacktraces option, if support exists for your OS (it exists for Linux glibc on Intel, and Solaris with some extra libraries which seem rather impossible to find these days..)

b) Run Squid using the "catchsegv" tool. (Linux glibc Intel)

However, these approaches do not provide nearly as much detail as using gdb.
If you believe you have found a non-fatal bug (such as incorrect HTTP processing) please send us a section of your cache.log with debugging to demonstrate the problem. The cache.log file can become very large, so alternatively, you may want to copy it to an FTP or HTTP server where we can download it.
It is very simple to enable full debugging on a running squid process. Simply use the -k debug command line option:
% ./squid -k debug

This causes every debug() statement in the source code to write a line in the cache.log file. You also use the same command to restore Squid to the normal debugging level.
To enable selective debugging (e.g. for one source file only), you need to edit squid.conf and add to the debug_options line. Every Squid source file is assigned a different debugging section. The debugging section assignments can be found by looking at the top of individual source files, or by reading the file doc/debug-levels.txt (correctly renamed to debug-sections.txt for Squid-2). You also specify the debugging level to control the amount of debugging. Higher levels result in more debugging messages. For example, to enable full debugging of Access Control functions, you would use
debug_options ALL,1 28,9

Then you have to restart or reconfigure Squid.
Once you have the debugging captured to cache.log, take a look at it yourself and see if you can make sense of the behaviour which you see. If not, please feel free to send your debugging output to the squid-users or squid-bugs lists.
Squid normally tests your system's DNS configuration before it starts serving requests. Squid tries to resolve some common DNS names, as defined in the dns_testnames configuration directive. If Squid cannot resolve these names, it could mean:
To disable this feature, use the -D command line option.
Note, Squid does NOT use the dnsservers to test the DNS. The test is performed internally, before the dnsservers start.
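A squid.conf sketch; the hostnames below are placeholders, so pick names that should always resolve from your network:

```
# Names Squid tries to resolve at startup to verify DNS is working
# (internal.example.com is a made-up host on your own network).
dns_testnames internal.example.com netscape.com nlanr.net
```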
Starting with version 1.1.15, we have required that you first run
squid -z

to create the swap directories on your filesystem. If you have set the cache_effective_user option, then the Squid process takes on the given userid before making the directories. If the cache_dir directory (e.g. /var/spool/cache) does not exist, and the Squid userid does not have permission to create it, then you will get the ``permission denied'' error. This can be simply fixed by manually creating the cache directory.
# mkdir /var/spool/cache
# chown <userid> <groupid> /var/spool/cache
# squid -z
Alternatively, if the directory already exists, then your operating system may be returning ``Permission Denied'' instead of ``File Exists'' on the mkdir() system call. This patch by Miquel van Smoorenburg should fix it.
Either (1) the Squid userid does not have permission to bind to the port, or (2) some other process has bound itself to the port. Remember that root privileges are required to open port numbers less than 1024. If you see this message when using a high port number, or even when starting Squid as root, then the port has already been opened by another process. Maybe you are running in the HTTP Accelerator mode and there is already a HTTP server running on port 80? If you're really stuck, install the way cool lsof utility to show you which process has your port in use.
This is explained in the Redirector section.
See the next question.
Note: The information here applies to version 2.2 and earlier.
Squid keeps an in-memory bitmap of disk files that are available for use, or are being used. The size of this bitmap is determined at run time, based on two things: the size of your cache, and the average (mean) cache object size.
The size of your cache is specified in squid.conf, on the cache_dir lines. The mean object size can also be specified in squid.conf, with the 'store_avg_object_size' directive. By default, Squid uses 13 Kbytes as the average size.
When allocating the bitmaps, Squid allocates this many bits:
2 * cache_size / store_avg_object_size
So, if you exactly specify the correct average object size, Squid should have 50% filemap bits free when the cache is full. You can see how many filemap bits are being used by looking at the 'storedir' cache manager page. It looks like this:
Store Directory #0: /usr/local/squid/cache
First level subdirectories: 4
Second level subdirectories: 4
Maximum Size: 1024000 KB
Current Size: 924837 KB
Percent Used: 90.32%
Filemap bits in use: 77308 of 157538 (49%)
Flags:
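The filemap size in that output can be checked against the formula, using the Maximum Size and the default 13 KB average object size:

```shell
cache_size_kb=1024000      # "Maximum Size" from the storedir page
avg_object_kb=13           # Squid's default store_avg_object_size
# 2 * cache_size / store_avg_object_size, integer arithmetic:
echo $(( 2 * cache_size_kb / avg_object_kb ))
# prints 157538, matching "Filemap bits in use: ... of 157538"
```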
Now, if you see the ``You've run out of swap file numbers'' message, then it means one of two things:
To check the average size of objects currently in your cache, look at the cache manager 'info' page, and you will find a line like:
Mean Object Size: 11.96 KB
To make the warning message go away, set 'store_avg_object_size' to that value (or lower) and then restart Squid.
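For example, with the 11.96 KB figure above, a squid.conf sketch would be:

```
# Match the measured mean object size (rounded down) so the filemap
# is allocated large enough; 11 KB is derived from the example above.
store_avg_object_size 11 KB
```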
Note: The information here is current for version 2.3
Calm down, this is now normal. Squid now dynamically allocates filemap bits based on the number of objects in your cache. You won't run out of them, we promise.
In Unix, things like processes and files have an owner. For Squid, the process owner and file owner should be the same. If they are not the same, you may get messages like ``permission denied.''
To find out who owns a file, use the ls -l command:
% ls -l /usr/local/squid/logs/access.log
A process is normally owned by the user who starts it. However, Unix sometimes allows a process to change its owner. If you specified a value for the effective_user option in squid.conf, then that will be the process owner. The files must be owned by this same userid.
If all this is confusing, then you probably should not be running Squid until you learn some more about Unix. As a reference, I suggest Learning the UNIX Operating System, 4th Edition.
If I try by way of a test, to access
ftp://username:password@ftpserver/somewhere/foo.tar.gz

I get
somewhere/foo.tar.gz: Not a directory.
Use this URL instead:
ftp://username:password@ftpserver/%2fsomewhere/foo.tar.gz
This means your pinger program does not have root privileges. You should either do this:
% su
# make install-pinger

or

# chown root /usr/local/squid/bin/pinger
# chmod 4755 /usr/local/squid/bin/pinger
A forwarding loop is when a request passes through one proxy more than once. You can get a forwarding loop if
Forwarding loops are detected by examining the Via request header. Each cache which "touches" a request must add its hostname to the Via header. If a cache notices its own hostname in this header for an incoming request, it knows there is a forwarding loop somewhere.
NOTE: Squid may report a forwarding loop if a request goes through two caches that have the same visible_hostname value. If you want to have multiple machines with the same visible_hostname then you must give each machine a different unique_hostname so that forwarding loops are correctly detected.
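A sketch for two load-balanced caches sharing one visible name (the hostnames are placeholders):

```
# Both machines advertise the same visible_hostname to clients...
visible_hostname proxy.example.com
# ...but each squid.conf carries a distinct unique_hostname so
# forwarding loops are still detected via the Via header
# (the other box would use proxy2.example.com).
unique_hostname proxy1.example.com
```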
When Squid detects a forwarding loop, it is logged to the cache.log file with the received Via header. From this header you can determine which cache (the last in the list) forwarded the request to you.
One way to reduce forwarding loops is to change a parent relationship to a sibling relationship.
Another way is to use cache_peer_access rules. For example:
# Our parent caches
cache_peer A.example.com parent 3128 3130
cache_peer B.example.com parent 3128 3130
cache_peer C.example.com parent 3128 3130

# An ACL list
acl PEERS src A.example.com
acl PEERS src B.example.com
acl PEERS src C.example.com

# Prevent forwarding loops
cache_peer_access A.example.com allow !PEERS
cache_peer_access B.example.com allow !PEERS
cache_peer_access C.example.com allow !PEERS

The above configuration instructs squid to NOT forward a request to parents A, B, or C when a request is received from any one of those caches.
This error message is seen mostly on Solaris systems. Mark Kennedy gives a great explanation:
Error 71 [EPROTO] is an obscure way of reporting that clients made it onto your server's TCP incoming connection queue but the client tore down the connection before the server could accept it. I.e. your server ignored its clients for too long. We've seen this happen when we ran out of file descriptors. I guess it could also happen if something made squid block for a long time.
Got these messages in my cache log - I guess it means that the index contents do not match the contents on disk.
1998/09/23 09:31:30| storeSwapInFileOpened: /var/cache/00/00/00000015: Size mismatch: 776(fstat) != 3785(object)
1998/09/23 09:31:31| storeSwapInFileOpened: /var/cache/00/00/00000017: Size mismatch: 2571(fstat) != 4159(object)
What does Squid do in this case?
NOTE, these messages are specific to Squid-2. These happen when Squid reads an object from disk for a cache hit. After it opens the file, Squid checks to see if the size is what it expects it should be. If the size doesn't match, the error is printed. In this case, Squid does not send the wrong object to the client. It will re-fetch the object from the source.
These messages are caused by buggy clients, mostly Netscape Navigator. What happens is, Netscape sends an HTTPS/SSL request over a persistent HTTP connection. Normally, when Squid gets an SSL request, it looks like this:
CONNECT www.buy.com:443 HTTP/1.0

Then Squid opens a TCP connection to the destination host and port, and the real request is sent encrypted over this connection. That's the whole point of SSL: all of the information must be sent encrypted.
With this client bug, however, Squid receives a request like this:
GET https://www.buy.com/corp/ordertracking.asp HTTP/1.0
Accept: */*
User-agent: Netscape
...

Now, all of the headers and the message body have been sent, unencrypted, to Squid. There is no way for Squid to somehow turn this into an SSL request. The only thing we can do is return the error message.
Note, this browser bug does represent a security risk because the browser is sending sensitive information unencrypted over the network.
by Dave J Woolley (DJW at bts dot co dot uk)
These are illegal URLs, generally only used by illegal sites; typically a web site that supports a spammer and is expected to survive a few hours longer than the spamming account.
Their intention is to:
Any browser or proxy that works with them should be considered a security risk.
RFC 1738 has this to say about the hostname part of a URL:
The fully qualified domain name of a network host, or its IP address as a set of four decimal digit groups separated by ".". Fully qualified domain names take the form as described in Section 3.5 of RFC 1034 [13] and Section 2.1 of RFC 1123 [5]: a sequence of domain labels separated by ".", each domain label starting and ending with an alphanumerical character and possibly also containing "-" characters. The rightmost domain label will never start with a digit, though, which syntactically distinguishes all domain names from the IP addresses.
Whitespace characters (space, tab, newline, carriage return) are not allowed in URIs and URLs. Unfortunately, a number of Web services generate URLs containing whitespace. Of course your favorite browser silently accommodates these bad URLs. The servers (or people) that generate these URLs are in violation of Internet standards. The whitespace characters should be encoded.
If you want Squid to accept URL's with whitespace, you have to decide how to handle them. There are four choices that you can set with the uri_whitespace option:
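As an illustration, the choice is made with a single squid.conf line; a minimal sketch (the exact set of accepted keywords varies between Squid versions, so check your squid.conf.default):

```
# Tell Squid how to treat whitespace in URLs.
# Keywords seen across versions include: strip, deny, allow, encode, chop
uri_whitespace deny
```

With deny, requests for such URLs are rejected with an error; encode instead rewrites the whitespace as %20 before the URL is processed.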
This likely means that your system does not have a loopback network device, or that device is not properly configured. All Unix systems should have a network device named lo0, and it should be configured with the address 127.0.0.1. If not, you may get the above error message. To check your system, run:
% ifconfig lo0

The result should look something like:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 inet 127.0.0.1 netmask 0xff000000
If you use FreeBSD, see this.
The format of the cache_dir option changed with version 2.3. It now takes a type argument. All you need to do is insert ufs in the line, like this:

cache_dir ufs /var/squid/cache ...
As of Squid 2.3, the default is to use internal DNS lookup code. The cache_dns_program and dns_children options are not known squid.conf directives in this case. Simply comment out these two options.
If you want to use external DNS lookups, with the dnsserver program, then add this to your configure command:
--disable-internal-dns
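If you do build with external lookups, the dnsserver-related directives become valid again; a sketch (the path is an assumption, adjust it to your installation prefix):

```
# Only meaningful when Squid was built with --disable-internal-dns
cache_dns_program /usr/local/squid/libexec/dnsserver
dns_children 5
```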
Sort of. As of Squid 2.3, the default is to use internal DNS lookup code. The dns_defnames option is only used with the external dnsserver processes. If you relied on dns_defnames before, you have three choices:
search and domain lines from /etc/resolv.conf.

``Connection reset by peer'' is an error code that Unix operating systems sometimes return for read, write, connect, and other system calls.
Connection reset means that the other host, the peer, sent us a RESET packet on a TCP connection. A host sends a RESET when it receives an unexpected packet for a nonexistent connection. For example, if one side sends data at the same time that the other side closes a connection, when the other side receives the data it may send a reset back.
The fact that these messages appear in Squid's log might indicate a problem, such as a broken origin server or parent cache. On the other hand, they might be ``normal,'' especially since some applications are known to force connection resets rather than a proper close.
You probably don't need to worry about them, unless you receive a lot of user complaints relating to SSL sites.
Rick Jones notes that if the server is running a Microsoft TCP stack, clients receive RST segments whenever the listen queue overflows. In other words, if the server is really busy, new connections receive the reset message. This is contrary to rational behaviour, but is unlikely to change.
This is an error message, generated by your operating system, in response to a connect() system call. It happens when there is no server at the other end listening on the port number that we tried to connect to.
It's quite easy to generate this error on your own. Simply telnet to a random, high-numbered port:
% telnet localhost 12345
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

It happens because there is no server listening for connections on port 12345.
When you see this in response to a URL request, it probably means the origin server web site is temporarily down. It may also mean that your parent cache is down, if you have one.
You may get this message when you run commands like squid -k rotate.
This error message usually means that the squid.pid file is missing. Since the PID file is normally present when squid is running, the absence of the PID file usually means Squid is not running. If you accidentally delete the PID file, Squid will continue running, and you won't be able to send it any signals.
If you accidentally removed the PID file, there are two ways to get it back.
The first is to run ps and find the Squid process id. You'll probably see two processes, like this:

bender-wessels % ps ax | grep squid
83617  ??  Ss     0:00.00 squid -s
83619  ??  S      0:00.48 (squid) -s (squid)

You want the second process id, 83619 in this case. Create the PID file and put the process id number there. For example:
echo 83619 > /usr/local/squid/logs/squid.pid
The second is to send the Squid process a HUP signal, which is the same as squid -k reconfigure:

kill -HUP 83619

The reconfigure process creates a new PID file automatically.
You are probably starting Squid as root. Squid is trying to find a group-id that doesn't have any special privileges to run as. The default is nogroup, but this may not be defined on your system. You need to edit squid.conf and set cache_effective_group to the name of an unprivileged group from /etc/group. There is a good chance that nobody will work for you.
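A minimal squid.conf sketch (the user and group names are assumptions; pick ones that actually exist in your /etc/passwd and /etc/group):

```
# Drop root privileges: run as an unprivileged user and group
cache_effective_user nobody
cache_effective_group nobody
```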
Note: The information here is current for version 2.3.
This is correct. Squid does not know what to do with an https URL. To handle such a URL, Squid would need to speak the SSL protocol. Unfortunately, it does not (yet).
Normally, when you type an https URL into your browser, one of two things happens.
The CONNECT method is a way to tunnel any kind of connection through an HTTP proxy. The proxy doesn't understand or interpret the contents. It just passes bytes back and forth between the client and server. For the gory details on tunnelling and the CONNECT method, please see RFC 2817 and Tunneling TCP based protocols through Web proxy servers (expired).
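To make the tunnelling concrete, the exchange between browser and proxy looks roughly like this (the hostname is only an example):

```
CONNECT www.example.com:443 HTTP/1.0

HTTP/1.0 200 Connection established

(encrypted SSL traffic now flows in both directions through the proxy)
```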
There may be many causes for this.
Andrew Doroshenko reports that removing /dev/null, or mounting a filesystem with the nodev option, can cause Squid to use 100% of CPU. His suggested solution is to ``touch /dev/null.''
Mikael Andersson reports that clicking on Webmin's cachemgr.cgi link creates numerous instances of cachemgr.cgi that quickly consume all available memory and bring the system to its knees.
Joe Cooper reports this to be caused by SSL problems in some browsers (mainly Netscape 6.x/Mozilla) if your Webmin is SSL enabled. Try with another browser such as Netscape 4.x or Microsoft IE, or disable SSL encryption in Webmin.
Some versions of GCC (notably 2.95.1 through 2.95.4 at least) have bugs with compiler optimization. These GCC bugs may cause NULL pointer accesses in Squid, resulting in a ``FATAL: Received Segment Violation...dying'' message and a core dump.
You can work around these GCC bugs by disabling compiler optimization. The best way to do that is start with a clean source tree and set the CC options specifically:
% cd squid-x.y
% make distclean
% setenv CFLAGS '-g -Wall'
% ./configure ...
To check that you did it right, you can search for AC_CFLAGS in src/Makefile:
% grep AC_CFLAGS src/Makefile
AC_CFLAGS = -g -Wall

Now when you recompile, GCC won't try to optimize anything:
% make
Making all in lib...
gcc -g -Wall -I../include -I../include -c rfc1123.c
...etc...
NOTE: some people worry that disabling compiler optimization will negatively impact Squid's performance. The impact should be negligible, unless your cache is really busy and already runs at a high CPU usage. For most people, the compiler optimization makes little or no difference at all.
By Yomler of fnac.net
A combination of a bad configuration of Internet Explorer and any application which uses the cydoor DLLs will produce this entry in the log. See cydoor.com for a complete list.

The bad IE configuration is the use of an automatic configuration script (proxy.pac) together with proxy settings that are filled in, whether active or not. IE will only use the proxy.pac. Cydoor apps will use both and will generate the errors.

Disabling the old proxy settings in IE is not enough; you should delete them completely and, for example, use only the proxy.pac.
By Henrik Nordström
Some people have asked why requests for domain names using national symbols, as "supported" by certain domain registrars, do not work in Squid. This is because there is as yet no standard for handling national characters in the current Internet protocols such as HTTP or DNS. The current Internet standards are very strict about what constitutes an acceptable hostname and only accept A-Z, a-z, 0-9, and - in Internet hostname labels. Anything outside this is outside the current Internet standards and will cause interoperability issues, such as the problems seen with such names and Squid.

When there is consensus in the DNS and HTTP standardization groups on how to handle international domain names, Squid will be changed to support it, if any changes to Squid are required.
If you are interested in the progress of the standardization process for international domain names please see the IETF IDN working group's dedicated page.
This happens when Squid makes a TCP connection to an origin server, but for some reason, the connection is closed before Squid reads any data. Depending on various factors, Squid may be able to retry the request again. If you see the ``Zero Sized Reply'' error message, it means that Squid was unable to retry, or that all retry attempts also failed.
What causes a connection to close prematurely? It could be a number of things, including:
You may be able to use tcpdump to track down and observe the problem.
Some users believe the problem is caused by very large cookies. One user reports that his Zero Sized Reply problem went away when he told Internet Explorer to not accept third-party cookies.
Here are some things you can try to reduce the occurrence of the Zero Sized Reply error:
echo 0 > /proc/sys/net/ipv4/tcp_ecn

If this error causes serious problems for you and the above does not help, Squid developers would be happy to help you uncover the problem. However, we will require high-quality debugging information from you, such as tcpdump output, server IP addresses, operating system versions, and access.log entries with full HTTP headers.
If you want to make Squid give the Zero Sized error on demand, you can use the short C program below. Simply compile and start the program on a system that doesn't already have a server running on port 80. Then try to connect to this fake server through Squid:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <assert.h>

int
main(int a, char **b)
{
    struct sockaddr_in S;
    int s, t, x;
    s = socket(PF_INET, SOCK_STREAM, 0);
    assert(s > 0);
    memset(&S, '\0', sizeof(S));
    S.sin_family = AF_INET;
    S.sin_port = htons(80);
    x = bind(s, (struct sockaddr *) &S, sizeof(S));
    assert(x == 0);
    x = listen(s, 10);
    assert(x == 0);
    while (1) {
        struct sockaddr_in F;
        socklen_t fl = sizeof(F);
        /* accept each connection, then close it immediately without
         * sending any reply -- producing a Zero Sized Reply in Squid */
        t = accept(s, (struct sockaddr *) &F, &fl);
        fprintf(stderr, "accepted FD %d from %s:%d\n",
            t, inet_ntoa(F.sin_addr), (int) ntohs(F.sin_port));
        close(t);
        fprintf(stderr, "closed FD %d\n", t);
    }
    return 0;
}
by Grzegorz Janoszka
This error message appears when you try to download a large file using GET, or to upload one using POST/PUT. There are three parameters to look for: request_body_max_size and reply_body_max_size (both now default to 0, which means no limit at all; earlier versions of Squid had limits such as 1 MB on requests), and request_header_max_size, which defaults to 10 kB (earlier versions used 4 or even 2 kB). In some rather rare circumstances even 10 kB is too low, so you can increase this value.
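For example, the limits could be relaxed with squid.conf lines like these (the values are illustrative, and the exact syntax of these directives varies between Squid versions; some versions expect an ACL clause on reply_body_max_size):

```
# 0 means no limit in recent Squid versions
request_body_max_size 0
reply_body_max_size 0
# default is 10 KB; raise it if legitimate requests carry large headers
request_header_max_size 20 KB
```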
In some situations where swap.state has been corrupted, Squid can be very confused about how much data it has in the cache. Such corruption may happen after a power failure or similar fatal event. To recover, first stop Squid, then delete the swap.state files from each cache directory, and then start Squid again. Squid will automatically rebuild the swap.state index from the cached files reasonably well.
If this does not work or causes too high load on your server due to the reindexing of the cache then delete the cache content as explained in I want to restart Squid with a clean cache.
By Janno de Wit
There seem to be some problems with Microsoft Windows accessing the Windows Update website. This is especially a problem when you block all other traffic with a firewall and force your users to go through the Squid cache.

Symptom: Windows Update gives error codes like 0x80072EFD and cannot update; automatic updates aren't working either.

Cause: In earlier Windows versions, Windows Update takes the proxy settings from Internet Explorer. Since XP SP2 this is no longer certain. On my machine I ran Windows XP SP1 without Windows Update problems. When I upgraded to SP2, Windows Update started to give errors when searching for updates, etc.

The problem was that WU did not go through the proxy and tried to establish direct HTTP connections to the update servers. Even when I set the proxy in IE again, it didn't help. It isn't Squid's problem that Windows Update doesn't work; the problem lies in Windows itself. The solution is to use the 'proxycfg' tool shipped with Windows XP. With this tool you can set the proxy for WinHTTP.
Commands:
C:\> proxycfg                       # gives information about the current connection type.
                                    # Note: 'Direct Connection' does not force WU to bypass proxy
C:\> proxycfg -d                    # Set Direct Connection
C:\> proxycfg -p wu-proxy.lan:8080  # Set proxy to use with Windows Update to wu-proxy.lan, port 8080
C:\> proxycfg -u                    # Set proxy to Internet Explorer settings