I am running a router with FreeBSD + pf + squid. If I block some sites through squid, clients add a proxy server to their browsers and can access those sites anyway. How can I stop them from reaching websites through external proxy servers so that all web traffic goes out through Squid?

The only real way I can see is running Squid as a transparent proxy on localhost and using PF on the internal interface to redirect HTTP traffic (the Safe_ports listed in squid.conf, but not 443!) to Squid.
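In pf.conf, that redirection might look something like this. This is only a sketch; the interface name, network, and Squid port are assumptions, and Squid would need a matching "http_port 3128 transparent" (or "intercept" on newer versions) in squid.conf:

```
# assumed names: em1 is the internal interface, Squid on 127.0.0.1:3128
int_if = "em1"

# send LAN port-80 traffic to the local Squid instead of out directly
rdr on $int_if inet proto tcp from $int_if:network to any port 80 -> 127.0.0.1 port 3128
```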

Then you'll have to lock down the internal interface, opening only the necessary/allowed ports outbound (or people will just point their browsers to http://some_proxy:9156 and defeat your redirection).
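A minimal sketch of that lock-down in pf.conf, again assuming em1 as the internal interface and Squid on 127.0.0.1:3128 (note that with old-style pf, rdr translation happens before filtering, so the pass rule matches the translated destination):

```
int_if = "em1"

# default-deny on the LAN side; without this, clients can point
# their browsers at some_proxy:9156 and bypass the redirection
block in on $int_if all

# allow the redirected HTTP traffic to reach the local Squid
pass in on $int_if inet proto tcp from $int_if:network to 127.0.0.1 port 3128 keep state
```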

This is not airtight. You will have to allow port 443 through (SSL and transparent proxies don't mix), and probably ports like 22 as well.

So external proxies reachable over SSL/SSH can still be contacted and used, either directly (proxies listening on ports 443 or 22 do exist) or through tunnels.

The simplest method is to block all outgoing traffic except that from your proxy server. If clients don't use the proxy, they don't get Internet access. Start with a "deny all" policy.

Then add rules to allow specific protocols to/from specific IPs on specific ports, as needed, for access to other services. Avoid rules like "allow ip from localnet to any 25"; always specify an IP address rather than "any".
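In pf.conf terms, that baseline could be sketched like this. The interface name and the addresses are assumptions for illustration, not something from a real setup:

```
ext_if = "em0"          # assumed external interface
proxy  = "192.0.2.10"   # assumed address of the Squid box

# deny-all baseline
block all

# only the proxy host may originate web traffic to the Internet
pass out on $ext_if inet proto tcp from $proxy to any port { 80, 443 } keep state

# example of a narrow, specific allow: proxy's own DNS lookups
pass out on $ext_if inet proto udp from $proxy to any port 53 keep state
```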

Even transparent proxy setups need to allow initial DNS lookups, and those lookups are themselves a way around restrictions: proxies or VPNs can run on port 53, and if you prevent that, take a look at HTTP-over-DNS! Automatically configuring the proxy (for instance, with that rather horrid WPAD protocol) may let you close the DNS hole, but it will trip up some browsers. The other approach is to mess around with captive portals and split-horizon DNS; read up on them if you'd like a headache.
You will also have to allow HTTPS on port 443, and once you allow encrypted traffic through, you have no control over what that encrypted traffic may be.
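Those two holes can at least be narrowed in pf.conf. A sketch, assuming em1 is the internal interface and $dns_server is your local resolver (both names are assumptions); pf's last-matching-rule-wins evaluation lets the pass override the block for the resolver only:

```
int_if     = "em1"
dns_server = "192.168.1.1"   # assumed local resolver

# close the port-53 escape hatch: no direct DNS out of the LAN...
block in on $int_if proto { tcp, udp } from any to any port 53

# ...except to the local resolver, which does the external lookups
pass in on $int_if proto { tcp, udp } from $int_if:network to $dns_server port 53 keep state

# HTTPS has to pass; once it does, its contents are opaque to you
pass in on $int_if proto tcp from $int_if:network to any port 443 keep state
```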

In conclusion, do what you can, but be aware that nothing can be 100% secure.


At the company where I am currently consulting, the local network tiers are isolated from one another by firewalls. All but the externally facing tier are completely isolated from the Internet; DNS is local only (of course), and Internet addresses are not reachable via any router. Only the externally facing tier (the DMZ, if you like) has direct Internet access.

End users are limited to restricted, monitored, and authenticated proxy connections via HTTP/S on ports 80/443, and only if their management approves and funds the individual's access on an annual basis. IP addresses may not be used in URLs; the monitoring software requires domain names.