In this tutorial I'm going to show you how to build a threaded TCP server in C#. If you've ever worked with Windows sockets, you know how difficult that can sometimes be. However, thanks to the .NET Framework, making one is a lot easier than it used to be.

What we'll be building today is a very simple server that accepts client connections and can send and receive data. The server spawns a thread for each client and can, in theory, accept as many connections as you want (although in practice this is limited because you can only spawn so many threads before Windows will get upset).

Let's just jump into some code. Below is the basic setup for our TCP server class.
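A minimal sketch of that setup (the class name and the port number 3000 are my own choices):

using System.Net;
using System.Net.Sockets;
using System.Threading;

public class TcpServer
{
    private TcpListener tcpListener;
    private Thread listenThread;

    public TcpServer()
    {
        // Listen for connections from any network interface on port 3000
        this.tcpListener = new TcpListener(IPAddress.Any, 3000);

        // ListenForClients is the ThreadStart delegate discussed below
        this.listenThread = new Thread(new ThreadStart(ListenForClients));
        this.listenThread.Start();
    }
}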

So here's a basic server class - without the guts. We've got a TcpListener which does a good job of wrapping up the underlying socket communication, and a Thread which will be listening for client connections. You might have noticed the function ListenForClients that is used for our ThreadStart delegate. Let's see what that looks like.
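Something along these lines:

private void ListenForClients()
{
    this.tcpListener.Start();

    while (true)
    {
        // Blocks until a client has connected to the server
        TcpClient client = this.tcpListener.AcceptTcpClient();

        // Hand the connected client off to its own thread
        Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
        clientThread.Start(client);
    }
}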

This function is pretty simple. First it starts our TcpListener and then sits in a loop accepting connections. The call to AcceptTcpClient will block until a client has connected, at which point we fire off a thread to handle communication with our new client. I used a ParameterizedThreadStart delegate so I could pass the TcpClient object returned by the AcceptTcpClient call to our new thread.

The function I used for the ParameterizedThreadStart is called HandleClientComm. This function is responsible for reading data from the client. Let's have a look at it.
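A sketch of it (the 4096-byte buffer size and the use of ASCIIEncoding are my own choices; add using System.Text; for the encoder):

private void HandleClientComm(object client)
{
    TcpClient tcpClient = (TcpClient)client;
    NetworkStream clientStream = tcpClient.GetStream();

    byte[] message = new byte[4096];
    int bytesRead;

    while (true)
    {
        bytesRead = 0;

        try
        {
            // Blocks until the client sends a message
            bytesRead = clientStream.Read(message, 0, 4096);
        }
        catch
        {
            // A socket error has occurred
            break;
        }

        if (bytesRead == 0)
        {
            // The client has disconnected from the server
            break;
        }

        // Message received - convert the bytes to a string and show it
        ASCIIEncoding encoder = new ASCIIEncoding();
        System.Diagnostics.Debug.WriteLine(encoder.GetString(message, 0, bytesRead));
    }

    tcpClient.Close();
}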

The first thing we need to do is cast client to a TcpClient, since a ParameterizedThreadStart delegate can only accept an object parameter. Next, we get the NetworkStream from the TcpClient, which we'll be using to do our reading. After that we simply sit in a while (true) loop reading information from the client. The Read call will block indefinitely until a message from the client has been received. If you read zero bytes, you know the client has disconnected. Otherwise, a message has been successfully received from the client. In my example code, I simply convert the byte array to a string and push it to the debug console. You will, of course, do something more interesting with the data - I hope. If the socket has an error or the client disconnects, you should call Close on the TcpClient object to free up any resources it was using.

Believe it or not, that's pretty much all you need to do to create a threaded server that accepts connections and reads data from clients. However, a server isn't very useful if it can't send data back, so let's look at how to send data to one of our connected clients.

Do you remember the TcpClient object that was returned from the call to AcceptTcpClient? Well, that's the object we'll be using to send data back to that client. That being said, you'll probably want to keep those objects around somewhere in your server; I usually keep a collection of TcpClient objects that I can use later. Sending data to connected clients is very simple. All you have to do is call Write on the client's NetworkStream object and pass it the byte array you'd like to send.
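For example (tcpClient here is one of the saved TcpClient objects, and the message text is arbitrary):

NetworkStream clientStream = tcpClient.GetStream();
ASCIIEncoding encoder = new ASCIIEncoding();
byte[] buffer = encoder.GetBytes("Hello Client!");

clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();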

Your TCP server is now finished. The hard part is defining a good protocol to use for sending information between the client and server. Application-level protocols are generally unique to each application, so I'm not going to go into any details - you'll just have to invent your own.

But what use is a server without a client to connect to it? This tutorial is mainly about the server, but here's a quick piece of code that shows you how to set up a basic TCP connection and send it a piece of data.
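A sketch of such a client (the server address and port match the examples above):

TcpClient client = new TcpClient();
IPEndPoint serverEndPoint = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 3000);

client.Connect(serverEndPoint);

NetworkStream clientStream = client.GetStream();
ASCIIEncoding encoder = new ASCIIEncoding();
byte[] buffer = encoder.GetBytes("Hello Server!");

clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();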

The first thing we need to do is get the client connected to the server. We use the TcpClient.Connect method to do this. It needs the IPEndPoint of our server to make the connection - in this case I connect it to localhost on port 3000. I then simply send the server the string "Hello Server!".

One very important thing to remember is that one write from the client or server does not always equal one read on the receiving end. For instance, your client could send 10 bytes to the server, but the server may not get all 10 bytes the first time it reads. Using TCP, you're pretty much guaranteed to eventually get all 10 bytes, but it might take more than one read. You should keep that in mind when designing your protocol.

That's it! Now get out there and clog the tubes with your fancy new C# TCP servers.

Friday, April 25, 2014

If you have a WiFi problem with your Motorola MC3190 handheld scanner, this article could be the solution for you. We already have 10 monochrome scanners for scanning garments in the factory. Recently we bought 6 units of the Motorola MC3190 colour, with Windows CE 6.0 as its operating system. Although we're happy with the new colour handheld scanners, we have a problem with their WiFi.

When we do a cold boot (press 1+9+power), the WiFi is always disabled, and we have to enable it first before we can use it. This is not acceptable, because the user would have to access Windows first just to enable the WiFi.

After googling, I found that the problem is not unique; many people have the same problem. Searching further, I found a post about this problem from .Fret Developer (http://dotfret.blogspot.com/2010/10/wireless-radio-disabled-on-cold-boot-of.html). The blog owner is a savvy .NET developer.

" After a cold boot of your Motorola / Symbol MC9500 series device, if you find that your wireless radio is disabled, create the following .reg file and add it to the \Application folder on the device;"

This solution solved my problem too. Admittedly I don't really know the difference, but since this file came from my supplier, which is a Motorola distributor, and it seems more complex than the first one from the .fret blog, I decided to use this file.

Once again: without the .fret blog I might not have found the solution so quickly. Thank you.

Make sure the check_nrpe2 plugin can talk to the NRPE daemon on the remote host. Replace "192.168.13.156" in the command below with the IP address of the remote host that has NRPE installed. Run the following command on the Nagios server:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.156
NRPE v2.15

On the Nagios server, run the following command for testing:
# /usr/local/libexec/nagios/check_nrpe2 -H 192.168.13.156 -c check_total_procs

Note: Nagios never uses these values itself, but you can access them by using the $ADMINEMAIL$ and $ADMINPAGER$ macros in your notification commands.

Define Generic Contact Template in templates.cfg:

The Nagios installation provides a default generic-contact template that can be used as a reference for building your contacts. Please note that all the directives mentioned in the generic-contact template below are mandatory. So, if you decide not to use the generic-contact template in your contacts, you must define all of these mandatory directives inside your contacts yourself.

The following generic-contact is already available under /usr/local/etc/nagios/objects/templates.cfg. Also, the templates.cfg is included in the nagios.cfg by default as shown below.
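Here's the stock generic-contact definition (the exact values may vary slightly between Nagios versions):

define contact{
        name                            generic-contact         ; The name of this contact template
        service_notification_period     24x7                    ; service notifications can be sent anytime
        host_notification_period        24x7                    ; host notifications can be sent anytime
        service_notification_options    w,u,c,r,f,s             ; send notifications for all service states, flapping events, and scheduled downtime events
        host_notification_options       d,u,r,f,s               ; send notifications for all host states, flapping events, and scheduled downtime events
        service_notification_commands   notify-service-by-email ; send service notifications via email
        host_notification_commands      notify-host-by-email    ; send host notifications via email
        register                        0                       ; DON'T REGISTER THIS DEFINITION - IT'S A TEMPLATE, NOT A REAL CONTACT
        }

And the include line in nagios.cfg:

cfg_file=/usr/local/etc/nagios/objects/templates.cfg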

Please note that any of these directives mentioned in the templates.cfg can be overridden when you define a real contact using this generic-template.

name - This defines the name of the contact template (generic-contact).

service_notification_period - This defines when Nagios can send notifications about service issues (for example, Apache down). By default this is the 24x7 timeperiod, which is defined under /usr/local/etc/nagios/objects/timeperiods.cfg

host_notification_period - This defines when Nagios can send notifications about host issues (for example, a server crash). By default, this is the 24x7 timeperiod.

service_notification_options - This defines the type of service notification that can be sent out. By default this defines all possible service states including flapping events. This also includes the scheduled service downtime activities.

host_notification_options - This defines the type of host notifications that can be sent out. By default this defines all possible host states including flapping events. This also includes the scheduled host downtime activities.

service_notification_commands - By default this defines that the contact should get notifications about service issues (for example, database down) via email. You can also define additional commands and add them to this directive. For example, you can define your own notify-service-by-sms command.

host_notification_commands - By default this defines that the contact should get notifications about host issues (for example, host down) via email. You can also define additional commands and add them to this directive. For example, you can define your own notify-host-by-sms command.

Define Individual Contacts in contacts.cfg:

Once you've confirmed that the generic-contact template is defined properly, you can start defining individual contact definitions for all the people in your organization who should ever receive notifications from Nagios. Please note that just defining a contact doesn't mean they'll get notifications; later you have to associate the contact with a service or host definition, as shown in the sections below. So, feel free to define all possible contacts here (for example, developers, DBAs, sysadmins, IT manager, customer service manager, top management, etc.).

Note: Define these contacts in /usr/local/etc/nagios/objects/contacts.cfg
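A minimal contact definition looks something like this (the contact name, alias, and email address are just examples):

define contact{
        contact_name    webadmin                ; Short name of user
        use             generic-contact         ; Inherit default values from the generic-contact template
        alias           Web Administrator       ; Full name of user
        email           webadmin@example.com    ; Email address for notifications
        }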

Once you've defined the individual contacts, you can also group them together to send the appropriate notifications. For example, only DBAs need to be notified about the database-down service, so a db-admins group may be required. Likewise, maybe only Unix system administrators need to be notified when Apache goes down, so a unix-admins group may be required. Feel free to define as many groups as you think are required; later you can use these groups in the individual service and host definitions.
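For example (the group and member names are illustrative):

define contactgroup{
        contactgroup_name       db-admins
        alias                   Database Administrators
        members                 dba1,dba2
        }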

We will create a new configuration file for all FreeBSD servers on the LAN:
# touch /usr/local/etc/nagios/objects/lan-freebsd-servers.cfg
# vi /usr/local/etc/nagios/objects/lan-freebsd-servers.cfg

Note: you can either edit the existing localhost.cfg or create the lan-freebsd-servers.cfg file.

###############################################################################
# LOCALHOST.CFG - SAMPLE OBJECT CONFIG FILE FOR MONITORING THIS MACHINE
#
# Last Modified: 03-03-2011
#
# NOTE: This config file is intended to serve as an *extremely* simple
# example of how you can create configuration entries to monitor
# the local (FreeBSD) machine.
#
###############################################################################
###############################################################################
###############################################################################
#
# HOST DEFINITION
#
###############################################################################
###############################################################################
# Define a host for the local machine
define host{
use freebsd-server ; Inherit default values from a template
host_name test-bsd ; The name we're giving to this host
alias My TEST BSD ; A longer name associated with the host
address 192.168.13.156 ; IP address of the host
}
define host{
use freebsd-server ; Inherit default values from a template
host_name dev01 ; The name we're giving to this host
alias dev01 ; A longer name associated with the host
address 192.168.13.157 ; IP address of the host
}
define host{
use freebsd-server ; Inherit default values from a template
host_name web1 ; The name we're giving to this host
alias Online Web ; A longer name associated with the host
address 192.168.13.242 ; IP address of the host
}
define host{
use freebsd-server ; Inherit default values from a template
host_name bsd-sql ; The name we're giving to this host
alias Online SQL ; A longer name associated with the host
address 192.168.13.108 ; IP address of the host
}
define host{
use freebsd-server ; Inherit default values from a template
host_name fw1 ; The name we're giving to this host
alias Firewall Server ; A longer name associated with the host
address 192.168.13.2 ; IP address of the host
}
###############################################################################
###############################################################################
#
# SERVICE DEFINITIONS
#
###############################################################################
###############################################################################
# Define a service to "ping" the local machine
define service{
use generic-service ; Name of service template to use
host_name test-bsd,web1,bsd-sql,fw1,dev01
service_description PING
check_command check_ping!100.0,20%!500.0,60%
}
# Define a service to check SSH on the local machine.
# Disable notifications for this service by default, as not all users may have SSH enabled.
define service{
use generic-service ; Name of service template to use
host_name test-bsd,web1,bsd-sql
service_description SSH
check_command check_ssh
notifications_enabled 0
}
# Define a service to check HTTP.
# Disable notifications for this service by default, as not all users may have HTTP enabled.
define service{
use generic-service ; Name of service template to use
host_name web1
service_description HTTP
check_command check_http
contact_groups admins
notifications_enabled 1
}
### A more advanced definition for monitoring the HTTP service is shown below. This service definition will check to see if the /index.php URI contains the string "html". It will produce an error if the string isn't found, the URI isn't valid, or the web server takes longer than 5 seconds to respond.
### If you are checking a virtual server that uses 'host headers' you must supply the FQDN (fully qualified domain name) as the [host_name] argument.
define service{
use generic-service ; Name of service template to use
host_name web1
service_description HTTP
check_command check_http!-u /index.php -t 5 -s "html"
contact_groups admins
notifications_enabled 1
}
### Note: For more advanced monitoring, run the check_http plugin manually with --help as a command-line argument to see all the options you can give the plugin.
### # /usr/local/libexec/nagios/check_http --help
### # /usr/local/libexec/nagios/check_http -H localhost
# Define a service to check the number of currently logged in users.
define service{
use generic-service ; Name of service template to use
host_name test-bsd,web1,bsd-sql,fw1,dev01
service_description Current Users
check_command check_nrpe2!check_users
}
# Define a service to check the root partition of the disk.
define service{
use generic-service ; Name of service template to use
host_name localhost,test-bsd,web1,bsd-sql,fw1,dev01
service_description / partition
check_command check_nrpe2!check_root
}
# Define a service to check the /usr partition of the disk.
define service{
use generic-service ; Name of service template to use
host_name localhost,test-bsd,web1,bsd-sql,fw1,dev01
service_description /usr partition
check_command check_nrpe2!check_usr
}
# Define a service to check the /var partition of the disk.
define service{
use generic-service ; Name of service template to use
host_name localhost,test-bsd,web1,bsd-sql,fw1,dev01
service_description /var partition
check_command check_nrpe2!check_var
}
# Define a service to check the /tmp partition of the disk.
define service{
use generic-service ; Name of service template to use
host_name localhost,test-bsd,web1,bsd-sql,fw1,dev01
service_description /tmp partition
check_command check_nrpe2!check_tmp
}
# Define a service to check the load.
define service{
use generic-service ; Name of service template to use
host_name test-bsd,web1,bsd-sql,fw1,dev01
service_description Current Load
check_command check_nrpe2!check_load
}
# Define a service to check zombie processes.
define service{
use generic-service ; Name of service template to use
host_name test-bsd,web1,bsd-sql,fw1,dev01
service_description Zombie Processes
check_command check_nrpe2!check_zombie_procs
}
# Define a service to check total processes.
define service{
use generic-service ; Name of service template to use
host_name test-bsd,web1,bsd-sql,fw1,dev01
service_description Total Processes
check_command check_nrpe2!check_total_procs
}
# Define a service to check mysql uptime.
define service{
use generic-service ; Name of service template to use
host_name bsd-sql
service_description MySQL Uptime
check_command check_nrpe2!check_mysql_health_uptime
}
# Define a service to check mysql slave io running.
define service{
use generic-service ; Name of service template to use
host_name bsd-sql
service_description MySQL Slave IO
check_command check_nrpe2!check_mysql_health_slave-io-running
}
# Define a service to check mysql slave sql running.
define service{
use generic-service ; Name of service template to use
host_name bsd-sql
service_description MySQL Slave SQL
check_command check_nrpe2!check_mysql_health_slave-sql-running
}

Note: the host_name list is comma-separated, with no spaces in between!

Add other FreeBSD hosts on the LAN to the host group member list.
# vi /usr/local/etc/nagios/objects/localhost.cfg

define hostgroup{
hostgroup_name freebsd-servers ; The name of the hostgroup
alias FreeBSD Servers ; Long name of the group
members localhost,test-bsd,web1,bsd-sql,fw1 ; Comma separated list of hosts that belong to this group
}

You can test some of these by running the following command on the Nagios client:
# /usr/local/libexec/nagios/check_mysql_health --hostname localhost --username nagios --password nagios --mode uptime --warning 2 --critical 5

Note: this command above will trigger a WARNING if mysql uptime is greater than 2 minutes; will trigger a CRITICAL if mysql uptime is greater than 5 minutes.

Please note that the thresholds must be specified according to the Nagios plug-in development guidelines.

Tuesday, April 15, 2014

Summary
A well-written API can be a great asset to the organization that wrote it and to all that use it. Given the importance of good API design, surprisingly little has been written on the subject. In this talk (recorded at Javapolis), Java library designer Joshua Bloch teaches how to design good APIs, with many examples of what good and bad APIs look like.

Look at the table eav_attribute. Find the row with attribute_code 'weight' and entity_type_id 4. (Entity type 4 means products.) In my table, this row has attribute_id 64, which means the weight attribute is attribute 64.

Now look at catalog_product_entity_decimal. This is where all decimal attributes for products are stored, and weight is a decimal attribute. All the rows having attribute_id 64 are weight values. The entity_id values correspond to the products.
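If you'd rather query those values directly, something along these lines should work (attribute_id 64 is from my table; look up your own id first):

SELECT attribute_id FROM eav_attribute
WHERE attribute_code = 'weight' AND entity_type_id = 4;

SELECT entity_id, value FROM catalog_product_entity_decimal
WHERE attribute_id = 64;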

Apart from dubious offerings, you can distinguish between cheaper domain-validated SSL certificates and the more expensive extended-validation (EV) SSL certificates.

Both types of certificate are technically the same (the connection is encrypted), but domain-validated certificates are cheaper because the seller only has to verify control of the domain. EV certificates also require information about the owner of the domain, and the seller must check whether this information is correct (more administrative effort).

Normally you can see the difference when you visit the site with a browser. Firefox, for example, will highlight the domain in blue for domain-validated SSL and in green for extended-validation SSL.

In most cases the domain-validated certificate is fine: the user has no disadvantages, and EV certificates are really (too) expensive.

I just found that GoDaddy doesn't allow "duplicate" certificates for your wildcard SSL.

That's a pity, since duplicates are often used when you manage a farm of servers and each one has its own private key / CSR.

(For comparison, DigiCert does allow them, and an unlimited number of them.)

To be quite honest, there is absolutely NO difference when it comes to SSL certificates. The only contributing factor is the EV / non-EV / wildcard designation.

EV == Extended Validation: This means the site is actively "pinged" by the Certificate Authority on the provided IP of the domain; a server-side script then compares the IP address of the ping response from the CA with the IP address YOU are visiting. This does NOT guarantee that there isn't a man-in-the-middle attack or net-wide DNS poisoning. It just ensures that the site you are viewing is the same one the CA sees.

Non-EV == no one is actively checking the domain's IP against a logged / provided IP for security purposes.

Wildcard == *.domain.com based Certificates are often used when people have a multitude of subdomains, or a set of subdomains that are ever-changing, but still need valid SSL encryption.

The truth behind SSL Certificates.

You can make your own. They are no less secure than any other certificate. The difference is that a "self-signed" certificate is not "vouched for" by any third party.

The problem with SSL certificates is that they are extremely overpriced for what they are. There is absolutely NO guarantee that the site you are visiting belongs to whoever is listed on the certificate as owner / location etc. This defeats the purpose of the third-party trust-chain model SSL was developed to use.

ALL Certificate Authorities (CAs) that sell certificates want the user to believe that their certificate is somehow better, when in fact they never check the information provided for the certificate unless there is an issue that may cost them revenue. This practice also defeats the purpose of the SSL trust-chain model.

I know of only ONE CA that actually validates its certificates: CACert.org.

For them to issue a "complete" certificate (business name, name, address, phone, etc.), you must meet one of their assurers FACE-TO-FACE!

However, most browsers do not include CACert.org, due to pressure from mega corporations like Thawte, Comodo, and Verisign.

So, to sum it all up:

The only real difference between certificates is the behavior of the CA. Certificates can't be trusted to verify anything other than that the connection to the site is encrypted.

At the end of the day, people think paying $100 - $1000 somehow equates to trustworthiness. This is NOT the case. It just means you deal with less sophisticated or less established crooks.

Wednesday, April 9, 2014

Modern RAID controllers have integrated caches for increasing performance. Without corresponding protective mechanisms, the content of these caches would be lost when a power failure occurs. For that reason, the cache content is often protected by a BBU or BBM (depending on the manufacturer, either the term Battery Backup Unit (BBU) or Battery Backup Module (BBM) is used). However, proper maintenance is required so that the BBU will actually work during a power failure. Without such maintenance, complete data loss during a power failure is a risk in the worst case.
Note: RAID controllers that do not use a BBU to protect the cache, but instead copy the content of the cache to flash memory in the event of a power failure (e.g. Adaptec ZMCP or LSI CacheVault), do not require special cache-protection maintenance.

Most RAID controllers that support write caching will not enable it without a battery backup pack. Imagine the damage that 64 MB of cached writes never written to disk would do to a volume.

Without write caching, a RAID 5 controller's write performance drops by a factor of 5-10. (We had a Dell PERC 3 (the LSI version, not the Adaptec one) that would write sustained at about 8 GB/hour with the write cache off, but at 70-90 GB/hour with write caching on.)

I do believe in using the batteries when available, but I'm not overly concerned if a server doesn't have one. In practice, I've noticed that cached writes have a very short life in the buffer; they make it to disk surprisingly quickly even on our heavily utilized servers. A battery also doesn't solve the issue of writes/processes that were only partially handed to the card by the app and OS. Does it help? Yes, it will help minimize one particular case of data corruption. However, there are still a LOT of other places for things to go wrong during a power outage.

RAID controller cards temporarily cache data from the host system until it is successfully written to the storage media. While cached, data can be lost if system power fails, jeopardizing the data’s permanent integrity. CacheVault® flash cache protection modules and battery backup units (BBUs) protect the integrity of cached data by storing cached data in non-volatile flash cache storage or by providing battery power to the controller.

Lower total cost of ownership (TCO) with CacheVault technology by reducing hardware maintenance and disposal issues associated with lithium-ion batteries

Wednesday, April 2, 2014

In my scenario, it was important that only the members of the development team have access to the repository. We also chose to have the repository on a system separate from the actual web server and left it up to the web administrator to copy over files from the repository to the web server as he saw fit.

To accomplish this, start by creating a backup of the existing directory structure you wish to put under revision control, and send it securely to the repository server. In my case, I backed up the www data on the web server to an internal server at 192.168.2.2.

Next, on the repository system, create a new group called svn and add to it any existing user accounts that need access to the repository. For example, I added my existing web administrator when I created the group by running the following command:

# pw groupmod svn -m webadmin

Then, create a new user called svn and, if necessary, any missing user accounts that need access to the repository. Make sure each account is a member of the svn group and has a password and a valid shell. I used sysinstall to create user accounts for the new web developers. When I finished, I double-checked the membership of the svn group. It looked something like this:

# grep svn /etc/group
svn:*:3690:webadmin,devel1,devel2

Dealing with umask

Before installing Subversion, take a close look at the existing umask for the svn user. On my FreeBSD system it was:

# su - svn
% umask
022

In Unix, the umask value determines the default permissions of a newly created directory or file. It does this by defining which permissions to disable. If you remember:

r = 4
w = 2
x = 1

you'll see that this umask doesn't turn off any (0) permissions for the user (svn); it turns off write (2) for the group (svn); and it turns off write (2) for world.

Because the members of the svn group should be able to write to the repository, change that group 2 to a 0. If you don't want nongroup members even to be aware of the existence of the repository, also change the world 2 to a 7.

The easy part is changing the umask for the svn user's shell. If it uses csh:

# su - svn
svn # vi ~svn/.cshrc
# A righteous umask
umask 027

Note: the meaning of each umask:
umask 002 // File permission 664. Owner and Group can read/write. Others can only read.
umask 007 // File permission 660. Owner and Group can read/write. Others can not read or write.
umask 027 // File permission 640. Owner can read/write. Group can read. Others can not read or write.

Note: I personally prefer to set umask 027. There is a security reason behind this: to prevent bad scripts from creating new scripts or modifying existing ones on your server, you can have "svn update" run automatically from crontab to take care of source code updates, and then make the "svn" user the only user with write permission. Users in the www group will only have read permission.

then find the existing umask line and change it to either 002, 007 or 027.

If your svn user has a shell other than csh, make your edit in your chosen shell's configuration file.

Repeat the umask command to verify that your changes have taken place.
Installing Subversion with the correct umask

If you chose a umask of 002, you can compile a wrapper into Subversion when you build it from the ports collection. If you chose a umask of 007 or 027, or prefer to install the precompiled version of Subversion, create a wrapper script to ensure that the Subversion binaries use your umask value.
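A wrapper script along these lines should do the trick (a sketch; it assumes the binaries live in /usr/local/bin and wraps only svnserve - repeat the idea for svn and svnadmin if your users run those directly):

# mv /usr/local/bin/svnserve /usr/local/bin/svnserve.bin
# cat > /usr/local/bin/svnserve << 'EOF'
#!/bin/sh
umask 027
exec /usr/local/bin/svnserve.bin "$@"
EOF
# chmod 755 /usr/local/bin/svnserve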

Preparing Files to be imported
At that point, I untarred my backup so that I had some data to import. If you do this, don't restore directly into the /usr/local/repositories/proj1 directory. (It's a database, remember?) Instead, I first made a new directory structure:
# mkdir /usr/local/www/apache22/data/proj1
# cd /usr/local/www/apache22/data/proj1
# mkdir branches tags trunk
# cd trunk
# tar xzvf /full/path/to/www.tar.gz .

svn import is one of many svn commands available to users. Type svn help to see the names of all the available commands. If you insert one of those commands between svn and help, as in svn import help, you'll receive help on the syntax for that specified command.

After svn import, specify the name of the directory containing the data to import (proj1 or proj2). Your data doesn't have to be in the same directory; simply specify the full path to the data, but ensure that your svn user has permission to access the data you wish to import. Note: once you've successfully imported your data, you don't have to keep an original copy on disk. In my case, I issued the command rm -Rf www.

Next, notice the syntax I used when specifying the full path to the repository. Subversion supports multiple URL schemas or "repository access" RA modules. Verify which schemas your svn supports with:
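# svn --version

The output ends with a list of the available repository access (RA) modules and the URL schemas each one handles.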

Because I wished to access the repository on the local disk, I used the file:/// schema. I also appended www at the very end of the URL, as I wish that particular part of the repository to be available by that name. Yes, you can import multiple directory structures into the same Subversion repository, so give each one a name that is easy for you and your users to remember.

Finally, I used the -m message switch to append the comment "initial import" to the repository log. If I hadn't included this switch, svn would have opened the log for me in the user's default editor (vi) and asked me to add a comment before continuing.
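Putting it all together, my import command looked something like this (reconstructed from the description above; adjust the paths to your layout):

# su - svn
% cd /usr/local/www/apache22/data
% svn import proj1 file:///usr/local/repositories/proj1/www -m "initial import"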

This is a very important point. The whole reason to install a revision control system is to allow multiple users to modify files, possibly even simultaneously. It's up to each user to log clearly which changes they made to which files. It's your job to make your users aware of the importance of adding useful comments whenever an svn command prompts them to do so.

Edit /etc/rc.conf:
Just went through this (thank you!). However, I came across an issue where my FreeBSD box was listening only on tcp6. I'm using this internally on my network, but without an IPv6 router that of course doesn't help. To make it work, I just modified my rc.conf to listen on host 0.0.0.0 (telling it to use tcp4). Also, for anyone who wants it to start easily on boot, add this to your /etc/rc.conf (replacing the data dir, user, and group as necessary):
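Here's roughly what that looks like (the rc variable names come from the svnserve port's startup script; the paths, user, and group are examples):

svnserve_enable="YES"
svnserve_flags="-d --listen-port=3690 --listen-host 0.0.0.0"
svnserve_data="/usr/local/repositories"
svnserve_user="svn"
svnserve_group="svn"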

SVN Restore from hotcopy
Hotcopy should produce a usable file-level repository. You should be
able to use it as-is if the ownership and permissions are suitable. If
you are running a server, you may have to copy back to the location the
server expects or adjust the configuration to use the new location.

As those links explain, restoring a hotcopy (which is basically a complete copy of your repository) is easy: just copy it back into place under your Subversion root. You may need to change the owner and permissions, and adjust your hook scripts (only if you were using hook scripts); after this you will be ready to go.
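For example, assuming the backup was made with svnadmin hotcopy and the server expects the repository under /usr/local/repositories (the paths and ownership here are examples): the first command makes the backup, and the last two restore it and fix ownership.

# svnadmin hotcopy /usr/local/repositories/proj1 /backup/proj1
# cp -Rp /backup/proj1 /usr/local/repositories/
# chown -R svn:svn /usr/local/repositories/proj1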

Set the value of a property on files, dirs, or revisions:
# cd /www/drupal6/sites
# svn propset svn:ignore "*.local" .

Note: you should consider using "svn propedit svn:ignore ." instead, which lets you edit the property with an external editor.

Deciding Upon a URL Schema

Congratulations! You now have a working repository. Now's the best time to take a closer look at the various URL schemas and choose the access method that best suits your needs.

Chapter 6 of the freely available e-book Version Control with Subversion gives details about the possible configurations. You can choose to install the book when you compile the FreeBSD port by adding -DWITH_BOOK to your make command.

If all of your users log in to the system either locally or through ssh, use the file:/// schema. Because users are "local" to the repository, this scenario doesn't open a TCP/IP port to listen for Subversion connections. However, it does require an active shell account for each user and assumes that your users are comfortable logging in to a Unix server. As with any shell account, your security depends upon your users choosing good passwords and you setting up repository permissions and group memberships correctly. Having users ssh to the system does ensure that they have encrypted sessions.

Another possibility is to integrate Subversion into an existing Apache server. By default, the FreeBSD port of Subversion compiles in SSL support, meaning your users can access your repository securely from their browsers using the https:// schema. However, if you're running Apache 2.x instead of Apache 1.x, remember to pass the -DWITH_MOD_DAV_SVN option to make when you compile your FreeBSD port.

If you're considering giving browser access to your users, read carefully through the Apache httpd configuration section of the Subversion book first. You'll have to go through a fair bit of configuration; fortunately, the documentation is complete.

A third approach is to use svnserve to listen for network connections. The book suggests running this process either through inetd or as a stand-alone daemon. Both of these approaches allow either anonymous access, or access once the system has authenticated a user using CRAM-MD5. Clients connect to svnserve using the svn:// schema.

Anonymous access wasn't appropriate in my scenario, so I followed the configuration options for CRAM-MD5. However, I quickly discovered that CRAM-MD5 wasn't on my FreeBSD system. When a Google search failed to find a technique for integrating CRAM-MD5 with my Subversion binary, I decided to try the last option.

This was to invoke svnserve in tunnel mode, which allows user authentication through the normal SSH mechanism as well as any restrictions you have placed in your /etc/ssh/sshd_config file. For example, I could use the AllowUsers keyword to control which users can authenticate to the system. Note that this schema uses svn+ssh://.

The appeal of this method is that I could use an existing authentication scheme without forcing the user to actually be "on" the repository system. However, this network connection is unencrypted; the use of SSH is only to authenticate. If your data is sensitive, either have your users use file:// after sshing in or use https:// after you've properly configured Apache.

If you decide to use the svnserve server and you compiled in the wrapper, it created a binary called svnserve.bin. Users won't be able to access the repository until:

# cp /usr/local/bin/svnserve.bin /usr/local/bin/svnserve

That's it for this installment. In the next column, I'll show how to start accessing the repository as a client.

Dru Lavigne is a network and systems administrator, IT instructor, author and international speaker. She has over a decade of experience administering and teaching Netware, Microsoft, Cisco, Checkpoint, SCO, Solaris, Linux, and BSD systems. A prolific author, she pens the popular FreeBSD Basics column for O'Reilly and is author of BSD Hacks and The Best of FreeBSD Basics.

Tuesday, April 1, 2014

$conf = array(
'reverse_proxy' => TRUE,
'reverse_proxy_addresses' => array('192.168.0.6', '192.168.0.7'), // With this array filled in, Drupal will trust the information in the X-Forwarded-For header only if the remote IP address is one of these, i.e. the request reached the web server from one of your reverse proxies.
);

Using a load balancer or reverse proxy

When running large Drupal installations, you may find yourself with a web server cluster that lives behind a load balancer. The pages here contain tips for configuring Drupal in this setup, as well as example configurations for various load balancers.

In addition to a large selection of commercial options, various open source load balancers exist: Pound, Varnish, ffproxy, tinyproxy, etc. Web servers (including Squid, Apache and NGINX) can also be configured as reverse proxies.

The basic layout you can expect in most high-availability environments will look something like this:

                                 ┌─→ Web server 1 ─┐
Browser ──→ HTTP Reverse Proxy ──┼─→ Web server 2 ─┼─→ Database
                                 └─→ Web server 3 ─┘

By way of explanation:

Browsers will connect to a reverse proxy using HTTP or HTTPS. The proxy will in turn connect to web servers via HTTP.

Web servers will likely be on private IP addresses. Use of a private network allows web servers to share a database and/or NFS server that need not be exposed to the Internet on a public IP address.

If HTTPS is required, it is configured on the proxy, not the web server.

Most HTTP reverse proxies will also "clean" requests in some way. For example, they'll require that a browser include a valid User-Agent string, or that the requested URL contain standard characters or not exceed a certain length.

In the case of Drupal, it is highly recommended that all web servers share identical copies of the Drupal DocumentRoot in use, to ensure version consistency between themes and modules. This may be achieved using an NFS mount to hold your Drupal files, or by using a revision control system (CVS, SVN, Git, etc.) to maintain your files.

High availability

In order to achieve the maximum uptime, a high-availability design should have no single points of failure. For network connectivity, this may mean using BGP with multiple upstream providers, as well as perhaps using Link Aggregation (LACP) to maintain multiple physical network paths in your LAN. In the diagram above, the two server elements that need attention are the load balancer and the database.

A load balancer cannot easily be "clustered", because a single IP address usually needs to apply to a single machine. To address this issue, you may wish to read up on CARP (FreeBSD) and Heartbeat (Linux).

A database server generally needs access to a single repository of data. Various technologies exist to address this, including MySQL NDB and PgCluster. If you're willing to accept the possibility of less than 100% up-time while you recover from broken hardware, you should consider using transactional database replication to keep a live copy of your data on a secondary server. Read the documentation for your database server software to find out how to set this up.

Needless to say, always set up regular automated backups.

Note:

If you plan to install Drupal 7 on a web server that browsers will reach only via HTTPS, there's an outstanding issue you'll want to check (#313145: Support X-Forwarded-Proto HTTP header). At this time, Drupal's AJAX callbacks use URLs based on the protocol used at the web server, regardless of the protocol used at the proxy. Your workaround is either this patch, or to set the "reverse_proxy" variable manually in your settings.php file. Unfortunately, as the Drupal installer relies on AJAX, your only other option is to install via HTTP instead of HTTPS.