Main

Dynamic DNS updates were not happening at boot or when doing an ipconfig release/renew, but a manual ipconfig /registerdns worked fine. I tracked this down to IPv6 being disabled by GPO. I don’t know the reason for this behavior, but Microsoft do state:

Important: Internet Protocol version 6 (IPv6) is a mandatory part of Windows Vista and Windows Server 2008 and newer versions. We do not recommend that you disable IPv6 or its components. If you do, some Windows components may not function. We recommend that you use “Prefer IPv4 over IPv6” in prefix policies instead of disabling IPv6.

A reboot is required for the change to take effect (at least for the dynamic registration behavior to change).

De-scoping this policy (or setting it to Not Configured) doesn’t revert IPv6 to enabled. Instead you need to configure the setting “Enable all IPv6 components” and then de-scope the policy after the change has taken effect.

Ticking “Use this connection’s DNS suffix in DNS registration” causes dynamic registration to work as normal, even when IPv6 is disabled. In my testing the primary and the connection-specific DNS suffix were the same.

Consider using “Prefer IPv4 over IPv6” instead of disabling IPv6, as this does not impact dynamic DNS updates.

I’m only talking about DHCP clients here, not static clients. The behavior may be different for static clients.
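For reference, Microsoft’s documented way to prefer IPv4 without disabling IPv6 is the DisabledComponents registry value, where the 0x20 bit means “prefer IPv4 over IPv6 in prefix policies”. A sketch (run from an elevated prompt; as with the GPO change, a reboot is required):

```shell
:: Prefer IPv4 over IPv6 in prefix policies (bit 0x20) rather than
:: disabling IPv6 outright, per Microsoft's recommendation.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v DisabledComponents /t REG_DWORD /d 0x20 /f

:: After rebooting, force a dynamic DNS registration to check the behavior:
ipconfig /registerdns
```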

In case there was any doubt that you really can find the most ridiculously obscure information on the internet….here are the pinouts for the Japanese Panasonic Strada CN-MW250D Navi Headunit (and probably most other models in the range). With a USB extension cable, some header cables and a soldering iron you can make your own cable. The unit works fine with a 16GB flash drive. Not sure what the max size is.

I had an issue with the motor occasionally not starting, it would just hum. I replaced the capacitor and the problem seems to have gone away.

I’m not 100% sure what kind of induction motor this is. When I opened it I expected to find a centrifugal switch inside, but I didn’t. Is it a permanent split capacitor motor? Please comment if you know how it starts/runs. I made this simple wiring diagram (to the best of my understanding).

I’ve been working on this project on and off for a few years. It started off as a simple restoration of a second hand Italian espresso machine which quickly got out of control, as most of my projects seem to do. Here’s a video showing the finished project and then a bunch of photos showing the build. I should have done the video with the camera turned the other way, sorry about that but I couldn’t be bothered re-doing it.

The Brasilia ‘Lady’ is a very simple single-group, single-boiler machine. It has a 300ml brass boiler with a 3-way solenoid valve. It has a simple bimetallic thermostat, which means the temperature swings wildly (although some models do have more complex thermostats). My model had no microcontroller and was purely AC driven, controlled by the buttons on the front and the thermostat. The machine is in some ways very similar to the popular Silvia machine.

When I started restoring the machine I quickly decided that I wanted to do a PID modification to maintain a constant temperature. At the time I had just started playing around with Arduino so I thought why not just take all of the AC buttons on the front down to an Arduino and control everything through software with solid state relays for the pump, boiler and solenoid. The pictures and captions below should explain each part of the build sufficiently.

TLDR: Final assembly photos are at the bottom of the post.

Machine Housing

This is how the machine started out. This isn’t actually mine but I didn’t take a photo before I started. Mine was in much worse condition.

My parents were staying with us a few weeks ago. Dad and I had talked about building an outdoor bench for a weekend project but hadn’t been able to source any suitable timber. The next day Dad came across some wood being thrown out on the side of the road while taking a morning walk. It was from an old house that was being renovated. We took the station wagon down and picked up as much as we thought we’d need. We actually ended up going back again for more, this time the builders were there and they let us go around the back and pick through what they had.

To the untrained eye this would probably just look like a pile of rubbish on the side of the road, but it is in fact gorgeous Rimu, a New Zealand native tree which takes ~450 years to grow to maturity.

I wish I’d taken more photos, but after much de-nailing, planing, cutting, drilling and screwing this is what we came up with. To be fair, Dad did most of the planing – all by hand using a trusty Stanley No. 7.

After gluing and screwing the top planks down I used a belt sander to flatten and smooth the top. The screw holes were filled with plugs cut out of Rimu using a plug cutter.

The bench was coated twice with a basic decking oil to protect it.

Free wood + many hours of manual labor = one mighty solid Rimu bench! The design is based on this Providence bench.

I made this small brass hammer to tap things down on the parallels in the milling vice. I may also use it for driving chisels. The idea of a stubby but weighty brass hammer was inspired by David Barron’s brass hammers. I like his a lot more but overall I’m happy with how it came out and it was a good chance to combine lathe, mill and wood work.

I designed the handle in SketchUp, printed the design, cut out the template and traced around it. I then cut out the handle with a coping saw. My wife finished the handle with a dye stain and Briwax. She has way more skill and experience with wood finishes than I do.

Here’s my first attempt at making something with the mill. Also my first attempt at using the WordPress mobile app to post. If this is easy maybe I’ll post more regularly. Micro-blogging I think they call it?

This just started out as milling into the body of an old HP hard drive to get a feel for the machine. Very high-grade aluminium (I presume). Very easy to mill.

After a while this is what I came up with.

It’s an adjustable bracket. I think. For what? I’m not sure.

This is very exciting. The only limit is imagination…..I imagined an adjustable bracket. Oh that’s really sad!

I’ve recently acquired a couple of significant additions to my workshop so I thought it was about time I did a workshop update post.

These two new additions are one Sieg SC4 metal lathe and one Sieg SX3 milling machine. The latter of these two weighs 165Kg (363lb). I didn’t like the idea of having to move this machine more than once so I thought it was appropriate to build a sturdy bench on castors which I could put the machine straight onto upon receipt.

As with most of my projects, the bench ended up being needlessly over-engineered! I conclude that I have Inceptionitis, i.e. ‘once an idea has taken hold of the brain it’s almost impossible to eradicate’.

In this case the idea stemmed from the thought that eventually I would convert this milling machine to CNC therefore I’d want a way to contain swarf and cooling fluid. Initially I considered forming a tray using some kind of rubber lining like butyl rubber membrane, but I was concerned about chemical resistance. Then it occurred to me that a moulded fibreglass tray would be a ‘simple’ solution to the problem. And that was it, the idea had taken hold. Of course it’s much easier to conceive an idea than to complete a project! Let’s begin…

First step – Get the dimensions of the machine and create designs in Sketch-up.

Mould design for the fibreglass tray (Inverse of the final product)

Overall bench design

I decided to make the fibreglass tray first so that I could adjust the bench design to accommodate any inaccuracies in the fibreglass, which once set can’t be adjusted.

The mould was made from:

9mm melamine board

Scraps of cupboard door chipboard laminate

20×45 Pine cavity batten

Plasticine

Duct tape

Polyester builders/auto filler

6mm MDF

This was a stupid way to make the drain hole. If I did it again I’d use the top section of a soft drink bottle or similar.

I did a few test pieces of fibreglass to get a feel for the gelcoat which I hadn’t used before. And to work out the best tape to use. I found that the resin sticks least to the duct tape. Masking and packaging tape were also fine but the heat from the curing process made them wrinkle.

Plasticine is used in the corners to give a radius. Fibreglass doesn’t like bending at 90 degrees.

I put tape over the plasticine on the long stretches because it gives a nicer finish than the plasticine.

Two coats of gelcoat brushed on, with a couple of hours between coats, then left to cure for a few hours.

After the gelcoat I laid down a single thin layer of fibreglass reinforcing with polyester resin, allowed it to fully cure and then sanded it. Doing this (as opposed to adding multiple layers straight away) significantly reduces the chance of getting air bubbles between the gelcoat and the fibreglass.

After this, three more layers of resin and chopped-strand-mat reinforcing were added. I ended up with a few air pockets in these layers but nothing too major. Perhaps I should have let each layer set up at least a little before adding the next.

Edges cut flush with a diamond blade on the angle grinder then smoothed off with a normal disc.

I naively thought I would be able to remove it from the mould without disassembly. Yeah right!

As expected there are lots of moulding artefacts but overall I’m very happy with it. Especially considering it was a fairly rushed job over one weekend and a few week nights.

Now on to the bench itself…

The bench design is based on the same torsion box design as my main workshop bench. Pretty overkill but I still had a lot of plywood offcuts from the workshop restoration that I needed to use up. There are castors on each of the four corners rated at 280Kg each. The centre foot jacks down with three 16mm bolts.

I was able to recycle wood from the fibreglass mould to make the fixing blocks for the bench top.

I used auto filler on the rough edge of the tri-board to give it a smooth sealed finish.

And now for the best part…the install.

On the left: Sieg SX3 milling machine. On the right: Sieg SC4 lathe

Thanks

Special thanks to my wonderful wife who painted the bench, to Simon for the help and use of the engine crane. And to Andy for the offer of the engine crane. To Chris at www.sieg-machines.co.nz for all the help. And to Adam who transported the machine.

Contouring the tray and drainage area to make sweeping swarf away easier

Take more care on the mould to reduce the moulding artefacts

I’d consider making a reusable fibreglass mould so I could make more trays and sell them – although I suspect the market in NZ is fairly small

I probably wouldn’t bother with the jackable middle leg and would just use castors instead. The cost of the large bolts wasn’t much cheaper than the castors in the end. Arguably it provides more strength to the structure right under the machine. Arguably the middle leg is overkill and not needed at all!

Thanks for stopping by, hopefully this gives you some ideas if you’re thinking of doing something similar.

Here’s my second dovetail box. I made this for my Dad for his birthday. There were three things I wanted to achieve on this project:

Flush Look
On the last box I had a slot for the lid to slide in. This works just fine but I really wanted to make a box which was completely flush all the way round.

Opening/Closing Dovetail
At some point during all my dovetail practice I produced a dovetail joint which wasn’t tight or loose and was able to open and close snugly. This gave me the idea of a dovetail joint as part of an opening/closing mechanism.

Brass
I’ve been largely focusing on my woodworking skills but ultimately I’m at the beginning of a long journey to build up a wide range of skills across many materials and techniques – not just wood. To kick things off I wanted to include an element of brass in this project. I decided to keep it simple and add a little brass handle for opening the lid.

I haven’t been doing too much lately that’s blog-worthy. Partly because I’ve been bitten by the woodworking bug and I’m only in the early learning stages so I don’t have much to show for my time yet. I decided to start off by learning to do dovetail joints. I figured that would require me to practice a number of basic woodworking skills – accurate marking, sawing to a line, chiseling etc.

I was also inspired by watching David Barron’s dovetail videos. Thanks so much for sharing, David! I must have watched the videos a dozen times to try and pick up techniques. I also made a magnetic dovetail guide like the ones which David designed. Check out his fine furniture site and blog.

These are my efforts so far. In the order which they were made (left to right, back to front). Some cut with the guide, some just marked up and cut by hand.

These are the tools that I use (although I didn’t have them all at the start).

….And after 27 practice pieces I was producing consistent enough dovetails to make this little box. For my first woodwork project since high school I’m very happy with how it came out. The wood is recycled New Zealand Rimu, top and bottom are 4mm Okoume Plywood.

This is an idea I’ve been meaning to try for a while. I wanted to install magnetic reed switches on our kitchen windows for the security alarm. The kitchen is the most likely place where an intruder would try to enter because the window is low and out of sight. The good thing about reed switches is that the alarm will activate as soon as the window is pried, before anyone actually gets inside the house.

I didn’t want to use the bulky white plastic reed switch/magnet combination sets. I wanted the install to be completely invisible using glass reed switches. Here’s how I went about it.

Kitchen windows before starting the project.

I first lifted a couple of tiles on the roof above the window so I could see what I was dealing with and to make sure there weren’t any mains cables near where I would be drilling. I drilled holes for each lead of the reed switches. These had to be drilled up at a slight angle facing away from the house so as to avoid the header above the window frame. I then pushed up some draw wires.

Using a chisel I cut a slot just slightly deeper than the thickness of the reed switches.

I soldered in and heat-shrunk the reed switches. My Weller Pyropen is brilliant for this kind of work.

Once the filler has hardened it is sanded, and then a thin layer of wood filler is added to make sure the surface is flat and smooth.

Finally the surface is sanded, primed and painted to restore the window frame to its original state.

The only thing left to do was to add a small neodymium magnet to the top of both window sashes. I first used double-sided tape to position the magnet and find out where it needed to be glued, then glued it in place with epoxy.

Done! Totally invisible reed switch install. Hopefully this is useful to someone. It’s really not much harder than doing an install with plastic units and looks much tidier. Most of the work is in running the cable to the right place which you have to do either way.

I’ve just had VDSL2 installed at home and have been setting up the new Telecom-supplied Technicolor TG589vn V2 modem. The previous modem I had, the Technicolor TG582n, has some great functionality if you don’t mind diving into the CLI (and the accompanying 800+ page CLI guide!). The TG589 is no different – a basic, user-friendly web GUI backed by a much more powerful CLI.

I haven’t been able to find the CLI guide PDF for this model yet but the command set is largely the same. One notable exception is the conditional DNS forwarding configuration which has given me some trouble. The TG582n had a set of commands for ‘DNS routing’ e.g.

*Update* I’ve now got a copy of the manual, linked below. (Thanks Dennis and Phill!)

dns server route list and dns server route add.

This has changed in the TG589 to DNS ‘forwarding rules’ and DNS ‘server sets’.

So… we have a list of DNS servers in a dnsset, which are used in order of metric (the lowest metric is used first). Then we have a set of rules as to which dnsset to use in which circumstance. The rules can match on client address, DNS domain, source interface etc. It’s quite flexible.

By default there is one dnsset and one forwarding rule. The default dnsset is set ‘0’ and is typically populated with your regular ISP DNS server. The default forwarding rule has a rule index of 999 and basically says if no other rules match then use dnsset 0.

At home I want the modem to act as the DNS server for all public internet addresses, but I want queries for names on my home domain to be forwarded to my Samba domain controller/DNS Linux box. Doing this means I can reboot the Linux box without losing internet access.

Add a rule so that any queries for home.rhysgoodwin.com are forwarded to dnsset 10:

dns server forward rule add idx=20 set=10 domain=home.rhysgoodwin.com

Finally – the bit that tripped me up for some time was the DNS server ‘response filter’ config option. I’m not sure technically what this option is for, but I had to disable it before the forwarding would work:

dns server config filter=disabled

In the rule-creation step above, these are the parameters:

idx – the index or ID of the rule. I think it also implies the order of the rules; a lower index will be matched first.

set – the number of the dnsset to use if this rule matches.

domain – the domain name to match on. If specified, the rule will only apply to queries for names on this domain. If you leave it blank, the rule will apply to all DNS queries (which match the other parameters of the rule).

intf – takes an interface name, e.g. LocalNetwork, PPPoE, PPPoA etc. If specified, the rule will only apply if the DNS query comes in on the specified interface.

source – takes a CIDR network address, e.g. 192.168.22.0/24 for my entire local subnet or 192.168.22.50/32 for a single IP address. If specified, the rule will only apply if the DNS query is coming from the specified address. This is useful when there’s one particular device on the network which you want to use different DNS servers for.

To delete a forwarding rule you must specify the index number exactly like this:

dns server forward rule delete idx 20
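Putting it all together, the sequence on the TG589 looks like this (the # comments are mine, not CLI syntax, and this assumes dnsset 10 has already been populated with the internal DNS server’s address):

```shell
# Forward queries for home.rhysgoodwin.com to the servers in dnsset 10
dns server forward rule add idx=20 set=10 domain=home.rhysgoodwin.com

# Forwarding didn't work for me until the response filter was disabled
dns server config filter=disabled

# To undo it later, specify the index exactly like this
dns server forward rule delete idx 20
```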

A couple of months ago I bought a secondhand Yamaha receiver (RX-V371). My plan was to finally do away with infrared blasters stuck to the outside of all my home theater gear. The plan was to do all the control with HDMI CEC. I was already controlling my TV using the Kwikwai HDMI CEC adapter which I reviewed a couple of posts ago.

Unfortunately I haven’t been able to find the necessary Yamaha HDMI CEC commands to control things like surround mode and DSP settings. Chances are these commands just don’t exist. I did email Yamaha but they didn’t even respond. The most I could get out of CEC on the Yamaha was power on/off and input select.

I couldn’t bring myself to stick an IR blaster to the beautiful face of this fine receiver. Equally unappealing was the idea of shelling out cash for a receiver with Ethernet or RS-232 control.

Here is my compromise – putting the IR blaster inside the receiver. It’s not rocket science but here you go:

After locating the IR sensor I removed the self-adhesive backing on the IR blaster and stuck it to the PCB with the aid of a bamboo skewer.

The only thing left to do was to cut the 3.5mm mono plug off the blaster, pass the cable through a small hole at the back of the receiver and then re-attach the 3.5mm mono plug using a soldering iron and some heat-shrink tubing.

The infrared emitter (blaster) is connected to a Microsoft USB infrared receiver/transmitter. I’m using my own home-brew C# .NET application to do the automation, but there are a number of options: Girder, HIP, EventGhost etc.

The result – a reasonable level of control, no ugly IR bug visible (I can’t even see it flashing) and zero cost.

Yes the bench is built like the proverbial brick house. Several people have made reference to an earthquake or bomb shelter.

I installed a tub with a removable insert to save bench space. (Good call Niten!) Not sure when I’ll get this plumbed in.

3mm steel galv plate for the metalworking area.

I wanted the plate to sit flush with the surface of the bench so I routed out 3mm across the surface where the plate goes. 25mm per pass. Sucker for punishment? Perhaps. Actually the top was ok – the front – don’t ask. This was done with a Makita RP1800 Router. ‘Like a hot knife through butter’.

Over the last few months I’ve had the opportunity to play with a very cool toy and thought I’d take some time to share it here. The Kwikwai is a powerful little tool made by Swiss company Incyma. It enables complete access to the HDMI-CEC bus. If you haven’t heard of HDMI-CEC it’s probably because it’s normally re-labelled by manufacturers. Anynet+ (Samsung); Aquos Link (Sharp); BRAVIA Sync (Sony); VIERA Link (Panasonic) etc.

CEC stands for Consumer Electronics Control and it allows various home entertainment components to talk to each other. For example when you switch on your Blu-ray player your TV and amp will turn on and switch to the correct inputs. Or when you turn your TV off the other HDMI connected devices will also turn off.

While this might all sound great in theory, in practice it can be hit and miss. Different manufacturers implement their own flavour of CEC, and devices from different manufacturers don’t always play nicely together.

My interest in CEC was not so much in the interaction between devices as in the direct control and automation of each individual device using my HTPC. In fact I don’t even have a Blu-ray player or set-top box. Everything is done through the HTPC. I have a bit of an obsession with having a single remote to control everything with as few buttons as possible. Anyone should be able to pick up the remote, press power and be presented with an intuitive interface (in my case MediaPortal).

While there are plenty of video cards that offer HDMI, they don’t yet offer communication on the CEC bus. That’s where the Kwikwai comes in.

On the front there are 4 indicator LEDs and two HDMI ports which allow the Kwikwai to be placed ‘in-line’ between two devices, e.g. a Blu-ray player and TV. It doesn’t matter which device connects to which port since the Kwikwai is completely transparent to the devices connected to it. You don’t have to connect it in-line; you could just connect it to any spare HDMI port on your TV or amp – everything that goes onto the CEC bus is broadcast across all ports.

On the rear of the Kwikwai there are three connectivity options. Ethernet, RS232 and USB. The USB interface is used for power and also for communication (via USB to RS232). You can power the Kwikwai either from your PC or from any other 5V USB power supply. I’ve really only used the network interface so far.

The Kwikwai is not only great for home theatre automation; it’s also a powerful HDMI-CEC diagnostic tool, and that’s the primary use for the web interface, which can be accessed by pointing your browser at http://kwikwai.local

While the web interface provides diagnostics, configuration and a firmware-update facility, it’s not ideal for automation. For that we can either use the command line directly or use the API to develop custom software. Most HTPC users will opt for the command line, but if you’ve got some basic C# .NET skills the API is quite easy to use.

There is also some sample Python code on the Kwikwai website which would make it pretty easy to implement an EventGhost plug-in; however, I was able to get the Kwikwai working in EventGhost by using the existing ZoomPlayer plug-in, which allows simple raw TCP commands to be sent.

Simply enter the Kwikwai address and port number.

To send commands to the Kwikwai create a new Zoomplayer ‘Raw Command’ action. For example:

cec:send A FF:36

This will broadcast the ‘Power Off’ command to all devices on the CEC bus.

The command syntax can be found on the Kwikwai web site here. And the CEC-O-MATIC is a great online tool to help you build up CEC commands.
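If you’d rather script the Kwikwai directly than go through EventGhost, the same raw command can be sent from any language that can open a TCP socket. A minimal Python sketch – the port number here is a placeholder (use whatever your unit is configured with), and the newline terminator is my assumption about the line-based protocol:

```python
import socket

# Placeholders — substitute your Kwikwai's actual address and TCP port
HOST = "kwikwai.local"
PORT = 9090  # assumption, not a documented default

def build_command(cmd: str) -> bytes:
    """Terminate the command with a newline and encode it as ASCII."""
    return (cmd + "\n").encode("ascii")

def send_command(cmd: str, host: str = HOST, port: int = PORT) -> None:
    """Open a TCP connection to the Kwikwai and send one raw command."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_command(cmd))

# Example: broadcast the 'Power Off' command (opcode 36), as above:
# send_command("cec:send A FF:36")
```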

Conclusion

The Kwikwai is a very handy device which enables easy automation of home entertainment components without the need to stick ugly infra-red senders to your equipment. At the moment it can be hard to get hold of vendor specific commands to perform more complex control but hopefully that will change over time.

There are two Kwikwai models available. For a full diagnostics solution the K-100 is the more expensive model. For automation the more basic K-090 will be more than adequate. Both models include all the connectivity options.

The only two areas I can see room for improvement in the Kwikwai are:
1) The colour! The Kwikwai looks kind of cool and is very well built but it doesn’t blend in very well with most home theatre gear.
2) It would be nice to see a firmware update that enables the Kwikwai to emulate a ‘player’ device on the HDMI bus so that other devices could become aware of it.

…And I’m back. Yep, it’s been a while since I shared anything much here. That’s partly because I’ve been spending so much of my free time converting my decrepit old garage into a tidy workshop, a project that I started just over a year ago.

It’s been one of those projects that starts out as a small seed of an idea, something that will take just a few weeks, but then grows one “If I’m going to do this I might as well do that” statement at a time until it carries on for an entire year. In project terms it’s clear that I failed to define the requirements and scope up front!

In case you don’t make it through to the end of this very long set of photos, I’d like to say a big thanks at the beginning of the post to:

My wonderful wife, who has not only put up with me spending so much time on this project over the last year but has also done ALL the painting that you’ll see below

My good friend Rick who helped me through all the electrical work and made sure that I didn’t burn the workshop, the house or myself to the ground

My good friend Simon who helped with external weather proofing.

The guys up at Hill Lumber in East Tamaki for their advice and patience for a total newbie who couldn’t even tie down a trailer on my first of many visits. If you’re looking for great timber and building materials at the best prices around check them out.

The old garage which is 3.6m x 7.2m was built at the same time as the house in 1956. It has a side entrance and main entrance, which opens out into the carport, which opens out onto the driveway. Having the carport for the car meant that I could convert the old garage into a workshop for anything from woodwork to metalwork, plastics, electronics etc.

Inside, looking out through the carport. Before starting the project.

The old workbench

My initial intention was just to replace the rotten framing and line the interior with ply. Water had been running under the door when it rained heavily and would flow to one side. Consequently, the bottom plate and the first ~150mm of most of the studs down one side had pretty bad rot. Water had been coming in the top and around the sides of the window on the back wall, resulting in yet more rot. The right-hand side (which has the side door on it) was pretty solid.

The first step was to clear out the bottom plate. I used a couple of the redwood planks from the old workbench to prop up the wall under the top plate. Most of the bottom plate cleared out easily because it was so rotten. I used the angle grinder to cut off the old steel anchor pins.

Old bottom plate all cleared out

The next step was to put in the new bottom plate using dynabolts and with a strip of damp-proof course to prevent moisture in the concrete slab from being absorbed into the wood.

With the new bottom plate it was time to sister the rotten studs with new ones.

This all went well and I worked my way along the left-hand wall until I reached the first window, at which point I stood back and admired my handiwork; for my first ‘building’ project I was pretty happy. It all looked solid and reasonably straight, and I thought since I’d come this far I really should replace the old rusty louvre window. I picked up a second-hand aluminium window off TradeMe.

Next I moved on to the back wall and back window. This time I had to:

Take care of the partially rotten top plate by reinforcing it with a sub-top plate

Install new studs and remove the old window and rotten diagonal framing

Install a new window frame and window – again I managed to find a second hand aluminum window that was about the right size.

I came across this excellent site which describes how to correctly frame a rough opening for a window.

At this stage I had dealt with all the rotten framing and had a generally sound building. I figured since I’d come this far I should really do something about the very pitted, rough, stained floor. In the end I settled on getting a guy in to grind, patch and lay two coats of epoxy. Oh, and while I’m at it I might as well install a secondhand roller door.

I was averagely happy with the floor. There are a lot of grind marks and there were a few other issues, but I won’t go into that. It’s about 1000% better than it was. Finally it was time to start lining. Or was it? As I surveyed the project so far I figured it only made sense to line the ceiling as well as the walls, and if I was going to line the ceiling it would be a shame to miss the opportunity to install insulation.

Of course, before I could start any of the lining I had to consider wiring – power points, lighting etc. With a whole new set of electricals I should really install a new main cable back to the house to replace the 50+ year-old one that was there. That task led me to cut a trench across the path between the workshop and the house. And let’s face it, while you’ve got a trench open you’d be silly not to lay network cables back to the patch panel in the house, along with a pipe for water supply. Right?

Did I go overboard on the Ethernet?

I had to add additional framing to support the ceiling panels

696 Watts of fluorescent lighting. Switched in 3 sets of 2.

And finally on to the wall lining and switchboard.

All that’s really left is the workbench and I’ll put that up in another post (hopefully) soon!

Lamps – Don’t touch the glass part of the lamp
I only discovered today when I went down to Repco to get a replacement lamp for my dead headlight that my car takes HID lamps. Both Repco and SuperCheap Auto gave me a cost estimate of around $250 NZD per lamp! And if I wanted the colors to match I would really have to replace both lamps at the same time.

$500 for 2 lamps!? No thanks! So I looked on TradeMe and found lamps for $49/pair. I’m not sure if this is a “get what you pay for” situation or an “HDMI cable scam” situation. Either way I thought it was worth a punt, so I picked up a pair. Here’s what I got:

The box indicates D2C but the seller assured me that they were a suitable replacement for my D2R lamps and of the highest quality!

Old and new (Left and Right)

I’ll report back in a few months as to how well they’re going/lasting.

Replacement

Well, this is basic stuff, but if you haven’t done it before and you were expecting it to be a 5-minute job requiring no tools then this will really help. Having said that, if you’re lucky this might still be a 5-minute job.

The HID lamp is behind the gray cover. Try to turn it anti-clockwise to release it. If you’re lucky it will turn and come off, and you can proceed to the lamp replacement section. If it won’t turn (as mine didn’t) then it’s because there’s a security torx screw at the bottom of the gray cover preventing it from turning and unlocking. The easiest way to deal with this is to remove the headlight unit which is fairly easy.

Disconnect the 2 accessible cables/plugs at the back of the unit. Remove 3 bolts as shown and wiggle the headlight unit out. When it’s out disconnect the other 2 cables.

Remove the screw and rotate the gray cover anti-clockwise

Press the metal tabs on either side to remove the metal cap.

Unlock and remove the high-voltage cable to expose the lamp.

Unhook the wire spring clip to release the lamp.

Carefully install the new lamp. Don’t touch the glass with your fingers.

Intro

In an effort to better manage our finances I decided to ditch my self-written ASP.NET budgeting tool and adopt GnuCash, an excellent open source accounting application. As well as being a true double entry accounting system, one of the great things about GnuCash is its ability to import a set of transactions in various formats. The idea here is that you import an OFX or CSV from your bank and allocate transactions to various accounts.

After almost 4 years of manually entering every single transaction into my crappy home-grown tool I was on the verge of giving up altogether. I decided that whatever new system I went with would need to be as automated as possible. So partly for the challenge and partly because I’m efficient (lazy) – I decided to automate downloading of transaction files from my bank accounts at Kiwibank.

Now it would be really nice if Kiwibank provided a web service API to pull these transactions down – of course that would be too good to be true. With an API ruled out, that leaves only the front end.

The first option I looked at was a Python-based web scraping tool called Scrapy. It’s a really flexible, powerful tool for parsing HTML. As I started getting a grip on Scrapy’s syntax it became clear that it wasn’t going to do the job due to the JavaScript-heavy interface that Kiwibank uses.

The second option was browser automation. To me this seemed like a less elegant option but after finding Selenium I soon forgot about that. Selenium is a web testing and automation suite. It consists of a number of components including a pretty extensive set of development libraries and interfaces. The two tools I used were Selenium Server and Selenium IDE (Integrated Development Environment) for Firefox.

Selenium IDE

Start off by creating a new test suite and then a new test case within that suite. Hit the record button and start recording your browser session. Every action you perform in the browser will be recorded as a step in the script. This will give you the basis for the automation. Once you’re done recording you might need to manually edit, add or remove some steps to make the script more robust, or fix bits that don’t play back correctly. You can play the script back with the buttons on the toolbar or you can execute one step at a time by selecting the step and pressing ‘x’.

Another extremely useful tool for analyzing page elements is Firebug for Firefox; it’s an excellent complement to the Selenium IDE.

Getting Creative with Kiwibank Security

In an attempt to make their site more secure, Kiwibank employ a two-step authentication process: the first is access number/password, and the second is a question/answer system which asks you to click the missing letters of the answer. This adds a slight level of security because it means an attacker needs a logger that’s a little more extensive than just logging keys.

Now it’s probably possible to get Selenium to read the question, work out which letters are missing and look up a table to determine which JavaScript should be called to complete the answer. And I may end up having to do that if Kiwibank reads this post! But fortunately for me Kiwibank allows you to set your own questions and answers. The questions all have to be different but the answers don’t. Simply setting all the answers to the same five letters means that I always call the same JavaScript.

To be honest it felt good to be the user, circumventing the security for a change!

Custom JavaScript

Selenium allows you to specify a file with your own JavaScript functions. The file must be named user-extensions.js. Its location can be configured in the IDE under Options/Options. I don’t think these scripts can interact with elements on the page though. Someone please correct me if I’m wrong here.

I created a custom JavaScript function that returned the current date less x number of days given as a parameter.

I used this function to get the last 28 days when specifying the “from date” on the export selector.
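The function described above can be sketched like this in user-extensions.js (the function name and the dd/mm/yyyy format here are my own choices for illustration, not necessarily what the original script used):

```javascript
// user-extensions.js sketch: return today's date minus a given number of
// days, formatted dd/mm/yyyy. A function like this can then be called from
// a Selenium IDE step, e.g. storeEval | getDateLessDays(28) | fromDate
function getDateLessDays(days) {
  var d = new Date();
  d.setDate(d.getDate() - days);                 // JS Date rolls months/years over
  var dd = ('0' + d.getDate()).slice(-2);        // zero-pad day
  var mm = ('0' + (d.getMonth() + 1)).slice(-2); // months are 0-based
  return dd + '/' + mm + '/' + d.getFullYear();
}
```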

Auto-downloading Files & Firefox Profiles

The whole purpose of this exercise is to automate downloading of transaction files so we need to tell Firefox to automatically save files of a certain type instead of prompting. We’d also like to save them in a specific location.

The best way to handle this is to create a custom Firefox profile for Selenium to use just for this automation. There’s a great post here which details the optimum profile settings for use with Selenium.

The last thing you’ll need to do to the profile is make sure that it handles your chosen export file type correctly. In my case I’m using .OFX so I needed to tell Firefox to always download .OFX files without prompting. This is done through the mimeTypes.rdf file in the profile. Details on this file here.
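The download behaviour can also be set with preferences in the profile’s user.js. These preference names are standard Firefox ones from this era; the download path and the application/x-ofx MIME type are examples – use whatever your bank actually serves:

```js
// user.js in the custom Selenium profile (sketch; path and MIME type are examples)
user_pref("browser.download.folderList", 2);                   // 2 = custom directory
user_pref("browser.download.dir", "C:\\kiwibank\\exports");    // where OFX files land
user_pref("browser.download.manager.showWhenStarting", false); // no download window
user_pref("browser.helperApps.neverAsk.saveToDisk", "application/x-ofx");
```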

If you keep getting the add-ons popup every time you use the custom profile I found the following fix:

To disable the add-ons window which appears every time Selenium scripts are run on a custom Firefox profile: close all instances of the Firefox browser and delete the following files from the custom profile folder:

extensions.cache, extensions.ini, extensions.rdf, compatibility.ini

This should reset Extension Manager and disable add-ons pop-up.

Selenium Server

Now with a fully working script and custom Firefox profile in hand we can set about scheduling this automation with the Selenium server and the Windows task scheduler. The Selenium server would normally be started and left running like any other server application. In our case we’ll just start it, run our script and then exit.

Once you get the command working at the command prompt you can then use it in a scheduled task running under its own user account. If you do this, everything will run in the background: you won’t see any windows pop up and it will run even if no one is logged on to the PC.
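The command ends up looking something like this (all paths and file names here are placeholders – `-htmlSuite` takes the browser string, base URL, suite file and results file, and `-firefoxProfileTemplate` points at the custom profile):

```shell
java -jar selenium-server.jar -firefoxProfileTemplate "C:\selenium\profile" ^
  -htmlSuite "*firefox" "https://www.kiwibank.co.nz" ^
  "C:\selenium\kiwibank-suite.html" "C:\selenium\results.html"
```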

Now for something completely different… after shelling out for one of these awesome microchip cat doors to be installed I thought I’d DIY the opening in my security mesh door. As the weather warms up hopefully I’ll get more DIY stuff up here.

Mark the bars you'll need to cut to make a square just a little bigger than the cat door opening

Cut where you marked using some hefty bolt cutters - a good excuse to buy tools; it would cost more to get a guy out to do the job, right? For now only cut the bars, not the screen.

Make up 2 "picture frames" using a mitre saw. The inner dimension should be about the same size as the cat door opening or just slightly bigger. I glued and tacked mine together with small nails.

After assembling the frames, prime and paint them to make them weatherproof

On one of the frames check and mark where you can put 4 screws through without hitting bars.

In one of the frames drill screw holes

Line up the 2 frames and screw the screws through into the frame you didn't drill

Put the screws into the drilled frame and push through the mesh screen to hold it in place. Cut out the mesh, and put some dabs of glue to hold the mesh

After installing and configuring OpenAM you’re unable to log on to the admin console with the amAdmin account and password you set during the install. It doesn’t give an error message, just drops you back to the login page.

Cause

When you go through the custom configuration wizard you get asked for the cookie domain. If your OpenAM server is openam.mydomain.co.nz then your cookie domain should be .mydomain.co.nz but by default the wizard just takes the trailing two domain components from the server name – i.e. .co.nz. Unless you specifically set the cookie domain correctly you’ll get the issue described above. As you can imagine this issue wouldn’t occur if your OpenAM server was called openam.mydomain.com.

This means that if you have a domain name with more than 2 domain components then you’ll always need to run the custom config wizard.
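The wizard’s default can be illustrated like this (my own sketch of the behaviour described above, not OpenAM’s actual code):

```javascript
// OpenAM's config wizard naively keeps only the trailing two labels of
// the server name as the cookie domain - wrong for ccTLDs like .co.nz
function defaultCookieDomain(serverName) {
  return '.' + serverName.split('.').slice(-2).join('.');
}
```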

2 domains in 2 forests with a one way trust between them.
(For this post I’ll refer to these domains as PERIMETER and INTERNAL)

PERIMETER trusts INTERNAL but INTERNAL doesn’t trust PERIMETER

Both PERIMETER and INTERNAL contain user accounts that need to be authenticated and federated via ADFS

The ADFS server is joined to the PERIMETER domain

ADFS and its related IIS services need to run under a service account from the INTERNAL domain

Here are the high level hoops I had to jump through to get this working:

On a clean Windows 2008 R2 server, obtain and run the ADFS 2.0 setup file AdfsSetup.exe. Select “Federation Server”. This will install everything you need to make ADFS 2.0 work (including pre-requisites). Don’t run through the config wizard – we will do the config from the command line later.

Create a new service account. e.g. INTERNAL\Svc.ADFS. Create a new DNS ‘A’ record and point it to the ADFS server. E.g. federate.internal.com. Set a Kerberos SPN for the DNS record against the service account:

setspn -a HOST/federate.internal.com INTERNAL\Svc.ADFS

Load the certificates MMC for local computer account and install a certificate which can be used for the ADFS web site. In the IIS manager configure a new binding on the default website for SSL with the appropriate FQDN and select the cert you just installed.

Make sure the ADFS server has access to all LDAP servers for all domains. Something to consider if you’ve got a few firewalls here and there.

Add your service account to the local admins group on the ADFS server and to the Domain Admins group for the domain that the service account belongs to. Don’t panic, this will only be temporary! This just allows the service account to create the necessary config for ADFS in the Program Data\ADFS OU. Once created, it will have the correct permissions for the service account. I had to do this to get it to work; not sure why it’s any different to a normal single-forest install.

Log on to the ADFS server with the service account. Skip this step at your peril!

That’s it. Load the ADFS console and configure ADFS as you would in any other scenario

Notes

During the install you might get a yellow warning about not being able to set the SPN. That’s fine; we already did it above.

Make sure you can view the federation data for your new server e.g. https://federate.internal.com/FederationMetadata/2007-06/FederationMetadata.xml

If you get a certificate error from your service provider, e.g. this typical error from SalesForce:

Signature or certificate problems
Is the response signed? False
The signature in the assertion is not valid
Is the correct certificate supplied in the keyinfo? False
No valid certificate specified in this response.

Try re-generating your token signing certificate using the following PowerShell commands. Note: this will break any existing trust relationships you have with any service providers. You will have to export the new cert and update your service providers with it.
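A sketch of the usual commands for this with the ADFS 2.0 snap-in (verify against your environment before running):

```powershell
Add-PSSnapin Microsoft.Adfs.PowerShell
Update-ADFSCertificate -CertificateType Token-Signing -Urgent
```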

Intro

In my last post I showed you my RG6 coax patch panel build which included cabling up the 4 outputs of my LNB. Well there was a reason I risked life and limb on the roof in high winds.

TBS Technology of Shenzhen, China have only been making TV cards for about 5 years but they’re starting to build a good reputation amongst HTPC enthusiasts, and for good reason – as you’ll see the 6984 is a solid performer. This review will focus on using the TBS 6984 with MediaPortal TV Server.

The TBS6984 really is the grand-daddy of DVB-S cards! It’s a DVB-S/S2 PCI Express card with 4 tuners allowing you to capture from 4 different satellite transponders simultaneously. At $249 USD the price is right. That’s about $62 per tuner – considerably cheaper than buying 4 separate DVB-S2 cards and much more convenient.

I’m not going to pretend that I could explain all the technical aspects of these specs, but suffice to say this card will handle pretty much anything you can throw at it.

What’s In The Box

The PCIe Card

A driver mini-CD

Infrared Remote Control

Infrared Receiver Cable

Power Cable

The build quality of the card is excellent – all the soldering looks clean and solid, and the components are well aligned. The bracket has labels for the tuners ‘A’ through ‘D’ stamped on it – a nice touch. The chipset consists of the following:

The remote is pretty basic; but let’s be honest, if you’re looking for a quad tuner card you’re probably an HTPC enthusiast in which case you’ll already have an advanced remote. If not you’ll need to get one! It has TV and navigation buttons, but lacks buttons for the advanced features you’d typically find in media centre packages like MediaPortal. That said, it will do just fine to get you up and running with basic TV software.

As for the driver CD, I haven’t even put it in my PC. Personally I never use the driver CDs which come with any hardware. I prefer to go straight to the web and download the very latest version. It would be nice if TBS released a white-box version of this product, which included just the card and the power cable.

TBS state that the additional power cable is only required when you need extra current for driving things like dish positioning motors and some LNBs. In general, you shouldn’t require it, which is good – the fewer cables floating around the better when it comes to an HTPC that you’re trying to keep cool with a minimum of fans.

Installation

I’d love to go into great detail about the installation but there really isn’t much to say. The hardware side is obvious – unplug your PC and install the card into a spare PCIe 1x slot.

As for the driver, TBS keeps it simple which I really appreciate. You don’t have to run an installer (although there is one). You can simply let Windows detect the card and then tell it where to find the latest driver files and the device installs without any fuss. I wish more manufacturers would take this simple clean approach. With an installer, you don’t really know what you’re getting and what’s being changed on your system. I’m running Windows 7 x64. Once the driver is installed, you’ll see a single “TBS 6984 Quad DVBS/S2 BDA Tuners” device listed in device manager. The driver is a BDA driver, which means it conforms to Microsoft’s broadcast driver architecture so the card will be compatible with any TV software which supports BDA devices.

MediaPortal

MediaPortal is a free and open source media centre package for Windows.

MediaPortal TV Guide

MediaPortal Home

You can get a wealth of information and support at the Team-MediaPortal site, but these are the basic components that you’ll need to get TV up and running with the TBS 6984. The remainder of this review will focus on the TV Server component of MediaPortal.

MediaPortal – this is the main front-end application. You can have this installed on as many PCs around the house as you like

TV Server – this is a Windows service which manages all TV streaming and recording. It can be on the same or a different PC to the MediaPortal application

TV Client Plug-in – this is a plug-in component to MediaPortal which connects it to the TV Server

Once you’ve got the driver installed TV Server will detect the card. You’ll have to restart the TV Service and TV Server configuration tool if they are already running.

One thing I really like about this card is the way it identifies itself. In the Windows device manager it just shows a single device, but once you open up the TV Server configuration tool you’ll see all 4 tuners and they’re actually labelled A, B, C and D – unlike some other dual cards I’ve seen, which just show 2 identical tuners so you can’t tell which one is which.

Scanning speed is impressive – just over 6 minutes to scan 41 transponders. Both DVB-S and DVB-S2 channels are found correctly.

Now the part you’ve been waiting for – recording 4 channels at once. In fact, with MediaPortal TV Server you can record even more than that because it allows you to record all the channels on a given transponder at the same time. The TBS 6984 can tune into 4 separate transponders, so if each of those transponders carries 6 channels that would mean you could record 24 channels simultaneously! Below you can see I’m receiving 12 channels quite happily and the 6984 doesn’t skip a beat! “Just try that Windows 7 Media Center!”

The driver seems to report the signal quality and strength much more accurately than a lot of other cards I’ve seen, and also updates these quite frequently – which is great.

Channel Change Speed

The most common question I hear when discussing various TV cards with HTPC enthusiasts is “How fast can it change channels?”. There are a number of things that can affect this – system hardware, TV card, TV card driver, TV software, media codecs, etc. It also depends on where you take the measurement. The following results are taken from the TV Server logs, and indicate the time it takes for the TV card to switch channels.

Very impressive, with all tests sub-second, except DVB-S to DVB-S2 switching which takes a little longer.

DiSEqC

The TBS 6984 supports DiSEqC 2.x. MediaPortal TV Server doesn’t yet support DiSEqC for this card, but I’ve spoken to a member of the MediaPortal development team who has informed me that they will be adding it soon, and has asked me to be a tester when the time comes.

Conclusion

All-in-all, my only criticism is that such a high-end card should be matched with a high-end remote. I think the best solution is a white box version of the product so the user can choose their own remote.

The channel change speed tests speak for themselves; that, combined with the solid driver and excellent build quality, makes the 6984 an excellent choice for anyone looking to build or expand an HTPC. In fact, unless you’re certain you won’t need more than 2 tuners, I would say just go straight for the 6984 because you’ll end up saving money in the long run.

As for TBS support – while I haven’t needed any technical support, from what I’ve read elsewhere they seem to have a reputation of being very responsive as well as being happy to interact with the MediaPortal development team. They also make their SDK (software development kit) freely available.

Here’s a quick update on my structured cabling at home. Hopefully it will give you some ideas if you’re looking to do something similar. The main goal here was to run all 4 LNB outputs from my dish and my UHF antenna back to a single point.

RG6 quad-shield run up from the floor through the existing data cable channel.

I used a piece of powder coated aluminum which I cut from a 2U server rack blanking panel and ran the RG6 cables through the wall and terminated them with ‘F’ joiners.

I came across a great free 2D CAD application called DraftSight which I used to create a template for drilling the plate and the wall.

Completed and all back together – featuring the aptly named ‘Patch’. The cables connected to the completed patch plate all go back up the channel to the TV Server PC in the cupboard above. I haven’t cabled any of the rooms (except the lounge) because everything is delivered over IP, however I have pre-drilled at the back of the plate and half drilled the plate for future expansion.

I’d like to thank Godfrey who supplied all the RG6, F connectors and tools, and also took the time to show me how to do PPC compression fittings. Kiwis – if you’re in need of any of the gear to do this stuff, Godfrey trades through TradeMe and gives the best service and prices around!

Ok this is one of those “if you need it, you’ll know what I’m talking about posts”!

I recently started using oscam and, not being a fan of server applications that need to run in the foreground, I wrote a small Windows service wrapper to handle oscam for me.

Just drop it in your oscam folder and install it by running oscamSVC -install

Using oscam as a system service also overcomes the issue that some people have found with some USB card readers, such as the Omnikey, disconnecting or ejecting when a remote desktop connection is made to the PC running oscam.

Version History & Changes

0.3.5.0 – Current version

Added settings form (Loaded by starting oscamSVC.exe without any parameter)

Added start-up delay option
Use this option if you see card detection errors in your oscam log. I did – even though I made oscamSVC dependent on the smartcard service.
Adding the delay just lets Windows start up a bit more, which for whatever reason seems to help. It will depend on your reader.
During the delay the service is kept in the ‘starting’ state so you can still add service dependencies using regedit.
e.g. MediaPortal TVServer depends on oscamSVC.

Project boxes available off the shelf always seem to be just too small or way too big! This is especially true here in New Zealand where the options between JayCar and SurplusTronics are fairly limited. I needed a specific size to house a project I’m working on so I decided to cast my own in polyester resin. I hope the details which follow will prove helpful.

Draw up a design – I used Google SketchUp. Make a box whose inner dimensions represent the outer dimensions of your final enclosure. I used blocks of pre-dressed pine. You’ll want to use something reasonably solid and screw it down to a base board so you get nice square vertical sides. For a base I used melamine board – it’s nice and smooth and the resin won’t bond to it. Your local kitchen builder will give you offcuts for free if they’re nice.

Wrap the blocks with masking tape. This provides three benefits:

The waxy surface of the tape acts as a barrier between the wood and the resin and makes de-molding easier

It creates a nice flat surface to mold against

It reduces the chance of the resin leaking out because it forms a seal as the blocks are pressed together then screwed down

Put a mark on the side of at least one of the walls to indicate the height of the enclosure. This is where you will pour the resin up to.

Accuracy when cutting the wood is important if you want a professional looking result. Decide on a tolerance and stick to it. If you cut a length and it’s not within tolerance then re-do it. If you don’t, you’re sure to be disappointed with the end result. Resist the “She’ll be right” temptation – it won’t be right! Errors are amplified at each stage of the process. I worked to 0.5mm. For me this was an excellent practice exercise in hand-saw and measuring accuracy.

Make a shape whose outer dimensions will represent the inner dimensions of your finished enclosure. This is where you decide on the thickness of the enclosure walls. I made over-sized corners so that I had solid pillars to screw into.

Unless you’ve got a dead level work bench you’ll probably need to set up a little platform that you can level off with screws – like this:

Mix up some resin and pour it into the mold up to the height you marked. Mix the resin and MEKP as per the instructions. I mixed towards the higher end of the 1%-2% ratio, about 1.7%. And please be careful with the MEKP. Don’t even think of going near it without eye protection and gloves. MEKP is a severe skin irritant and can cause progressive corrosive damage or blindness.

Now the fun bit. After a few hours the resin will be hard enough to remove it from the mold. Unscrew and remove the inner blocks and as many outer blocks as you need to get the enclosure out.

It will be a bit hard to get out because the resin shrinks a little bit as it cures. It will probably also still be a bit sticky when it comes out.

Now on to the top and bottom sections. You could just cut some flat plexi-glass for this but while we’re at it we might as well just cast them. Clean up the enclosure with sand paper. I used wet/dry from 120 grit up to 400.

Drill and tap the corners.

Use masking tape to mask off the areas where the resin will touch. Screw in 4 countersunk lid screws, leaving them out to the height you want the top lid’s thickness to be (I made mine 4mm). Make sure they are all exactly the same height.

Reassemble the outer box of the mold on a new piece of melamine and place the enclosure back in, screws down. The enclosure will have shrunk since it was last in the mold so use multiple layers of masking tape as packers to center it (use an even number of layers on each opposing side).

Make sure your platform is dead level and pour the resin into the mold so it just comes up over the sides of the enclosure. This will make a locking lip on the lid.

After a few hours you can de-mold the box with the lid attached. It will be stuck to the board but just slowly ease it away; you almost have to peel it up. Don’t try to knock or tap it. With a bit of luck you’ll be able to remove the screws from the lid and take the lid off without too much difficulty. Making the lid this way takes care of the screw holes and countersinking.

Now repeat the process for the bottom lid. For the bottom I didn’t mask it off because I wanted it sealed on – I don’t need to remove it. I also made it a bit thicker so it’s nice and solid for mounting to.

Once you’re all done you can sand/polish the enclosure as much or as little as you like. I left it with a frosted look but you could shine it up to be completely transparent if you wanted. I also removed the thread from the holes in the lid.

Well that’s quite a process! But the result is good and it’s good practice for accuracy, woodwork and resin casting. I tried a number of methods before I came up with this one and it’s by no means perfected – as always I’m keen to hear your ideas.

I’ve got a DL360 G5 running VMware ESX 4 and I wanted to update the iLO firmware to the latest version. iLO has a firmware update page where you can upload a new firmware image file, but the image itself doesn’t seem to be available for download at the HP iLO2 support page. To get it you need to download the Windows firmware update tool and extract the package using 7-Zip.

Remote Console – Cursor Keys Don’t Work with IE8

To get around this, disable protected mode in IE or run IE as administrator (Windows 7, Vista etc).

If you’ve read my last few posts you’ll be aware that I’m in the middle of implementing ADFS 2.0 for web SSO – SalesForce for starters, with more to follow. I’m yet to put it into production but today, while having a bit of a sanity check, something occurred to me. We send LDAP attributes as claims, and the attributes are accepted by our service provider as law. They trust our federation service – that’s what federation is all about: trust. There are a number of mechanisms that make it very difficult for someone to spoof an assertion. On the whole, the SAML protocol can be considered very secure. What it can’t do is guarantee the validity of the source LDAP attribute.

Consider the scenario above. We’re going to send the user’s telephone number as a claim. Unlikely, maybe, but it could happen – perhaps you’ve got a SaaS provider, you’ve already got 500 users in the system, and telephone number is the only field you know is accurate between you and them. Unlikely? I know. But that’s not the point.

The issue is this – in Active Directory the attribute telephoneNumber, along with a few other attributes, is by default self-writable.

Once Dave figures out that the telephone number is significant he’ll update his phone number in AD to Bob’s phone number, launch the SaaS app and be logged in as Bob.

While there are only a few self-writable attributes in AD, and they’re not ones you’d likely use for federation, it’s important to keep the whole picture in mind – and the problem could go beyond self-writable attributes. A couple of other situations I can think of off the top of my head:

Identity management systems which synchronise other systems to Active Directory. Not a problem in itself but you might be moving the point of authorization for a specific application without realising it.

So choose your attributes wisely and make sure you know how, why, when and by whom or what they are written to before you decide to send them as federation claims.

– 14/01/2013 – Workaround for iPads and SSO!
Rohan writes to tell us that:

“If you activate Touch in your Org and set touch to be used for Browser access then an iPad will via Safari use the mobile touch interface. You’ll need to have http redirect set, and (As below) you are limited to the touch “app” so no chatter app or dashboards app.
That said, dashboards are meant to be coming to touch shortly so perhaps touch via the browser is enough.”

Cheers Rohan!

– 15/10/2012 – ADFS 2.0 / SalesForce + iPad/Safari Working! SalesForce have added a new option to change the SAML binding method from HTTP POST to HTTP REDIRECT. Using HTTP REDIRECT seems to fix the issue with iPad/Safari and ADFS 2.0. You can find this in the SSO settings.

Intro

In my last post I went over the basic concept of federation using SAML 2.0, today I’ll show you how to configure single sign-on for SalesForce using ADFS 2.0. This is a really nice solution because it’s easy to set up and doesn’t cost you anything except the Windows 2008 OS licence.

ADFS 2.0 is Microsoft’s answer to federation – it includes their own implementation of SAML 2.0. It runs on Windows Server 2008 [R2] and is installed from a separate downloadable package. It is not the ADFS ‘role’ which can be enabled in Windows Server 2008 R2, that’s ADFS 1.0 (not cool).

If you don’t feel you have a good grasp of SAML 2.0 I suggest that you set up ADFS 2.0 (as IdP) and Shibboleth (as SP) in a lab environment. There’s no better way to learn about a particular technology than to interface Microsoft’s implementation with its open source counterpart! There’s a great MSDN blog post that walks you through the set up. I really learnt a lot by doing this.

Overview

Let’s first take a look at an overview of the process then we’ll dive into the configuration. The diagram below shows the process for an IdP-initiated login into SalesForce – later we’ll look at SP-initiated login.

The user authenticates to the ADFS server using Kerberos and requests login to SalesForce

ADFS returns a SAML assertion to the user’s browser

The browser automatically submits the assertion to SalesForce who logs the user in
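For step 1 you’d typically give users a link to ADFS 2.0’s IdP-initiated sign-on page with the relying party identifier in the loginToRp parameter. A small sketch, using the example host name and the SalesForce identifier that appear elsewhere in this post:

```javascript
// Build an ADFS 2.0 IdP-initiated sign-on URL for a given relying party
function idpInitiatedUrl(adfsHost, relyingPartyId) {
  return 'https://' + adfsHost + '/adfs/ls/IdpInitiatedSignOn.aspx' +
         '?loginToRp=' + encodeURIComponent(relyingPartyId);
}
```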

Install

Start with Windows Server 2008 [R2] – Domain Joined

Create a friendly DNS name for ADFS and point it to your ADFS server, e.g. adfs.testzone.local

Download and install ADFS 2.0 with the Federation Server role. This will install all pre-requisites

In the IIS manager create an SSL certificate for your friendly DNS name or use SelfSSL from the IIS 6.0 resource kit to create a self-signed certificate

Run through the ADFS Server configuration wizard

Create a new Federation Service

Stand-alone server

Select the certificate that you created for your friendly DNS name

Create an SPN for the DNS name so that Kerberos authentication between the browser and the ADFS IIS instance works correctly

Configuration

To build a federation between two parties we need to establish a trust by exchanging some metadata. The metadata for our ADFS 2.0 instance is entered manually into the SalesForce configuration. SalesForce metadata is downloaded as an XML file which ADFS 2.0 can consume.

SalesForce Configuration

In the ADFS 2.0 MMC snap-in select the certificates node and double click the token-signing certificate to view it.

Go to details and “Copy to File”. Save the certificate in DER format.

On the ADFS server browse to your federation metadata URL which can be found in the ADFS MMC\Endpoints|Metadata|Type:Federation Metadata. In my case: https://adfs.testzone.local/FederationMetadata/2007-06/FederationMetadata.xml

Identity Provider Login URL: This is the URL of your ADFS SAML endpoint where SalesForce will send SAML requests for SP-initiated login. This can be found in ADFS MMC\Endpoints|Token Issuance|Type:SAML 2.0/WS-Federation (in my case: https://adfs.testzone.local/adfs/ls/)

SAML User ID Type: To log a user on we can either match against their SalesForce username or we can match against their federation ID, which would need to be populated in the profile of every user. For testing select federation ID. If your users currently use their email address as their SalesForce username then when you come to roll out SSO into production you can switch to sending the username.

SAML User ID Location: To log the user on we can either use the NameID in the SAML assertion or we can use some other attribute. NameID should suffice.

Entity ID: This is how our ADFS IdP will identify the SalesForce SP. I just left it as https://saml.salesforce.com. If you were supporting multiple SalesForce instances from the same ADFS instance then you’d want to use a more unique name. This is also the identifier we use when we do an IdP-initiated login with ADFS

Save the settings and download the Metadata xml file.

ADFS 2.0 Configuration

Now that we have the metadata for SalesForce we can create the trust on the ADFS side.

Open the ADFS 2.0 MMC snapin and add a new “Relying Party Trust”:

Select Data Source: Import data about a relying party from a file. Browse to the XML you downloaded from SalesForce

Display Name: Give the trust a display name e.g. ‘SalesForce Sandbox’

Choose Issuance Authorization Rules: Permit all users to access this relying party

Open Edit Claim Rules Dialog: Ticked

In the claim rules editor select the “Issuance Transform Rules” tab

Add a new rule:

Claim Rule Template: Send LDAP Attributes as Claims

Claim Rule Name: For testing we’ll send the UPN as NameID, so call the rule “Send UPN as NameID”. In production you might send the user’s email address or employee ID *
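For reference, the rule this wizard template generates looks roughly like the following in the ADFS claim rule language (a sketch; the exact claim types depend on the selections you make in the wizard):

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname",
   Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory",
          types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"),
          query = ";userPrincipalName;{0}",
          param = c.Value);
```

Mapping userPrincipalName to the nameidentifier claim type is what makes the UPN come out as the SAML NameID.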

With IdP-initiated login you would typically set up a link on the company intranet that users click to get access to SalesForce. SP-initiated login happens when a user clicks a direct link to SalesForce.

For SP-initiated login to work we need to set the secure hash algorithm to SHA-1 instead of the default SHA-256. This is set in the SalesForce relying party trust properties under Advanced.

If you don’t set this you’ll get the following message in the ADFS event log:

Event ID: 378

SAML request is not signed with expected signature algorithm. SAML request is signed with signature algorithm http://www.w3.org/2001/04/xmldsig-more#rsa-sha256 . Expected signature algorithm is http://www.w3.org/2000/09/xmldsig#rsa-sha1

With My Domain

It’s best practice to implement the “My Domain” feature at the same time as implementing SSO if you haven’t already done so. “My Domain” gives you your own subdomain on SalesForce, e.g. MyCompany.my.salesforce.com. When you click a “My Domain” link SalesForce will know to redirect you to your IdP (ADFS) to be authenticated.

Without My Domain

As long as the user has performed at least one IdP-initiated login from a given browser, SalesForce will have set a cookie so in future it knows to redirect the browser to the IdP with a SAML request. The IdP will in turn issue a SAML response for the browser to pass back to SalesForce.

You might find some SalesForce documentation that mentions the ssoStartPage attribute which can be set in the SAML assertion. I found that this wasn’t necessary, and if you look at the cookie you get after an IdP-initiated login you’ll see that ssoStartPage is set to the IdP login URL you specified in the SalesForce SSO configuration.

LogoutURL

You can specify a URL to redirect to when the user logs out by creating a custom claim rule which sends an additional logoutURL attribute.
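As a sketch, such a rule can be written directly in the claim rule language; the URL is a placeholder, and the Properties line sets the SAML attribute name format (verify the attribute name SalesForce expects against the single sign-on documentation):

```
=> issue(Type = "logoutURL",
         Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"]
             = "urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified",
         Value = "https://intranet.example.com/salesforce-logout.html");
```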

This should redirect you and sign you into SalesForce. If you get a SalesForce login error use the SAML assertion validator tool on the SalesForce single sign-on configuration page. It will display the results of the last failed SAML login.

If you get an error from ADFS then check the ADFS logs in Server Manager\Diagnostics\Applications and Services Logs\ADFS 2.0\Admin. There is also a very good MSDN blog post on ADFS 2.0 diagnostics.

Once you have IdP-initiated login working try SP-initiated. Copy a link from deep inside SalesForce then log out. Reload your browser and paste in the URL. You should be seamlessly redirected to your IdP, authenticated and then redirected back to the link you requested.

Portal SSO and JIT Provisioning

There’s plenty of good info in the Force.com literature for portal SSO and just-in-time provisioning but here’s some ADFS 2.0 specific stuff. I’ve only been playing with this for a couple of weeks but I’ve had requests for the info so here’s what I’ve got so far. Let me know if I’ve missed something or I’ve got something wrong.

Portal SSO

To do a Portal SSO login you need to send the portal ID and the Org ID as claims using custom claim rules. There is a significant caveat though, if you’re using ADFS to do SSO for both full-license and portal users: if you send the Portal ID and the Org ID for a full-license user, SalesForce will assume you are trying to log into a portal. This will result in a SAML error because a full-license user can’t also be a portal user. To overcome this you can use a condition in the custom rule so the portal ID and Org ID are only sent if the user is a member of a given Active Directory group. The rules below use the AD group SID, which you can find using the PsTools utility psgetsid.exe. The advantage of using a SID is that if the group is renamed the SID stays the same.
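A hedged sketch of such group-conditioned rules follows – the group SID, portal ID and organization ID values are placeholders you’d replace with your own:

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
   Value == "S-1-5-21-1111111111-2222222222-3333333333-1234"]
 => issue(Type = "portal_id", Value = "060D0000000xxxx");

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
   Value == "S-1-5-21-1111111111-2222222222-3333333333-1234"]
 => issue(Type = "organization_id", Value = "00D00000000xxxx");
```

The groupsid condition is what restricts the portal attributes to members of the chosen AD group.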

I haven’t found a way to send a claim only if a user is not a member of a group, and the ADFS 2.0 claims rules language reference is non-existent! If someone works this out please let me know.

JIT Provisioning

Just-in-time provisioning allows you to create users on the fly with a SAML assertion as they attempt to log in. All you need to do is enable JIT in your SSO settings and then send the required attributes. JIT custom claims rules are just like the portal ones above. The SalesForce Single Sign-On Implementation Guide has all the details on what attributes need to be sent. Here are a few things to keep in mind:

You can provision and update users into specific profiles or roles based on Active Directory groups just like with the portal logins above, but you’ll need to manage the AD groups carefully – you can’t have a user in multiple groups which represent different SalesForce profiles or roles

You can provision and update users but you can’t un-provision them. Again you’ll need to use Active Directory security groups to determine who can be JIT provisioned, and you’ll have to manage your licenses and account de-activations outside JIT

I found that I couldn’t JIT provision users if Chatter wasn’t enabled – still waiting for the word on this from SalesForce

Further Analysis

In case you’re wondering how the browser collects and passes these SAML requests and responses around, we’ll take a closer look at the entire process.

We’ll go over the SP-Initiated login because it has the most steps and really demonstrates SAML and federation at its best. I’ll use screen shots of Fiddler2 to show you exactly what’s happening at each step.

Note: Fiddler messes with Windows integrated authentication to IIS, so you’ll need to turn off extended protection on the /adfs/ls/ virtual directory if you want to try this. Otherwise your browser won’t authenticate with ADFS and you’ll see event 4625 with error 0xc000035b in the Windows security log on the ADFS server.

Step 1

The user clicks a direct link to a SalesForce page. The browser connects and SalesForce reads the ssoStartPage attribute from the user’s cookie. SalesForce uses JavaScript to redirect the browser to the SalesForce SAML request generator. The SAML request generator creates a SAML request for the IdP by sending an invisible HTML form with hidden fields back to the browser. It then uses JavaScript to automatically submit the form to the IdP SAML endpoint.

Step 2

The browser submits the HTML form which contains the SAML request to the ADFS SAML endpoint. Since we are using Windows integrated authentication, ADFS redirects the browser to the /auth/integrated/ directory, at which point a 401 (user must authenticate) is sent. Finally, the user is authenticated using Kerberos and ADFS serves up a SAML response. Again, the SAML message is returned to the browser in an HTML form which is then submitted to the SalesForce SAML endpoint using JavaScript.

Step 3

The browser submits the HTML form which contains the SAML response to the SalesForce SAML endpoint which verifies the SAML assertion, logs the user in and redirects the browser to the original requested URL.

Common Issues & Troubleshooting

Here are some of the issues you might come across. Thanks to everyone who has commented and shared their experience – I’ll keep updating this section.

Federation ID is case sensitive
One thing to watch out for is that the Federation ID is case-sensitive. So if this is your organizational email address, be sure to enter it exactly as ADFS sends it, or SalesForce won’t be able to find the matching user.

I’ve looked into writing a custom claim rule to normalize the case of the LDAP attribute before sending it, but it looks like it’s not possible – the claims language doesn’t seem to have any string manipulation except a basic regex replace.

Assertion Expired
An assertion’s timestamp is more than five minutes old.
Note: Salesforce does make an allowance of three minutes for clock skew. This means, in practice, that an
assertion can be as much as eight minutes past the timestamp time, or three minutes before it. This amount
of time may be less if the assertion’s validity period is less than five minutes.

So make sure your clock is synced to a good internet time source.
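On a domain-joined ADFS box the time normally comes from the domain hierarchy, but as a sketch, forcing a Windows server to sync against an external source looks like this (the peer list is a placeholder):

```
w32tm /config /manualpeerlist:pool.ntp.org /syncfromflags:manual /update
w32tm /resync
```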

Preventing Users Using Their Old Username/Password

It doesn’t seem to be possible to prevent users from logging in using the standard method which is a bit of a pain. There is an idea you can promote here to get this feature implemented:

Conclusion

Well that’s it! Everything you need to know about SalesForce WebSSO with ADFS 2.0.

Federation is really cool, so make sure you encourage its use in your organisation instead of older methods involving clunky tightly coupled links and horrible things like allowing your cloud vendor to do LDAP authentication against your domain controllers over VPNs etc!

Thanks for reading and please ask questions, make comments and corrections. I’ll continue to update this post as we go.

Updates

2011/07/30

Added JIT and Portal SSO info

Updated SP-initiated login with “My Domain” info – Thanks to Pat over at SalesForce for helping out with this

This week I found myself on a SalesForce/SAML/Federation journey which turned out to be very enlightening. Until a few days ago I really had no idea how SAML or Federation worked and it took me a few hours to get my head around it, so I’m going to try explain SAML in a way that’s easy to understand.

SAML 2.0 (Security Assertion Markup Language).

2 Companies:
Company A (Service Provider – SP) has a web application
Company B (Identity Provider – IdP) has a database of people who need to access Company A’s application

We have a few options here:

Company A could create a new database of people with usernames and passwords within the web application

We could synchronise the database of people including their usernames and passwords from Company B to Company A

We could make a link from the web application to Company B’s database of people and do lookups in real-time

We could tell the web application at Company A to trust users who come from Company B

Options 1 through 3 are pretty crappy. Option 4 is called federation and it’s cool.

Here’s what happens (part analogy, part reality):

Both companies have a pair of keys. A public key and a private key. Once something is locked with the private key only the corresponding public key can open it. Company A has a copy of Company B’s public key and vice versa.

A user in Company B tries to access the web application at Company A. The web application looks for a cookie in the user’s browser to see if he is already authenticated; he is not, so the web application (SP) redirects the browser to Company B’s IdP, telling him – “Go and get a ticket!”

The browser goes to Company B’s IdP who authenticates the user against Company B’s database of users. The IdP at Company B locks the user’s employee ID with his private key, gives it to the browser and tells him – “Here’s your ticket now go back to the SP you came from!”

The browser goes back to the web application (SP) at Company A and presents his ticket. The SP uses Company B’s public key to unlock the ticket. The web application says to himself – “It works! This user MUST have come from Company B because otherwise this public key could NOT have unlocked this ticket. And look, the ticket contains an employee ID and I have a rule that says that this employee ID is allowed access!” And so the web application gives the browser a cookie which allows him access.

In SAML the ticket is called an Assertion. In this case we sent the Employee ID but any other user-unique attribute could be used, it just needs to be agreed between the 2 parties.

In reality the web application might not support SAML directly but may instead be protected by a federation product which takes care of the SAML SP stuff. The IdP side will also likely be handled by a federation product which is backed by some kind of LDAP directory or maybe SQL. The browser cookie stuff mentioned above is outside the scope of SAML but I included it for completeness – it’s typical of how these things work.

The neat thing about federation is that you don’t need any links between Company A and Company B. Once the trust is established everything else takes place “browser to SP” and “browser to IdP” through a series of redirects and HTTP POSTs.

Ok so that’s SAML. Actually there are a lot more parts to it than that but that’s the way it’s most commonly used today i.e. for WebSSO. Now that you have the concept, you can dig into the technical details.

The best way to learn this stuff is to give it a go. Check out Shibboleth which is an open source SAML SP and IdP implementation. I’ve got the Shibboleth SP side talking to ADFS 2.0 as the IdP but I haven’t played with the Shibboleth IdP yet.

Next time I’ll show you how to put SAML to use with Active Directory Federation Services 2.0 and cloud provider SalesForce. In the mean time feel free to ask questions or make corrections.

We recently started having issues with our VMWare / HP Lefthand iSCSI SAN environment. The symptoms were as follows:

VMs would sometimes freeze up for up to 10 seconds – no ping, nothing! Really nice on a busy SQL server running finance apps! Yeah! The problem affected VMs on both the Lefthand iSCSI and the fibre channel EVA

Taking snapshots of VMs on the Lefthand storage would almost always fail and in most cases make a mess of disk chaining which would require manual clean up

Browsing datastores is extremely slow

General flakiness across the VI environment (Yes, that is a technical term)

I started out by looking in the vmkernel logs of the ESX hosts and found errors like this occurring fairly regularly.

These errors were in relation to LUNs on the iSCSI SAN. A quick google of “failed on physical path H:0x0 D:0x2 P:0x0 Valid sense data: 0x9 0x4 0x2 Lefthand” quickly turned up this VMWare KB article which states that this is a LUN locking error caused by having VMFS LUNS presented to a Windows host which has the HP Lefthand DSM (Device Specific Module) for MPIO installed. This immediately rang a bell with me because we had recently installed a new backup server including full iSCSI MPIO support using the HP DSM.

Presenting the LUNs to the backup server allows VMs to be backed-up directly from the LUN as opposed to backing up via one of the ESX hosts. A good idea as long as you read the HP documentation all the way to the end and don’t install the DSM for MPIO!

Great! I thought, I’ve found the problem. It appears that the LUN is being locked by the DSM, causing the host to “timeout” and affecting the entire storage subsystem (iSCSI and Fibre Channel). I went ahead and un-presented the iSCSI VMFS LUNs from the Windows host fully expecting the issues to clear up. Unfortunately this didn’t happen. My next step was to vMotion all the VMs off one host and reboot it. Still no luck, the errors returned to the vmkernel logs within a few minutes of the reboot.

At this point I logged a case with HP who provide our VMWare (and of course Lefthand) support. After they analysed the logs, they felt that the only way to resolve the issue was to do a full shutdown of all the hosts and all the Lefthand storage! Classic support call – “Have you tried turning it off and back on?” But seriously, the guy at HP was very knowledgeable and helpful. We proved the approach as follows:

Create a new LUN on Lefthand and present it to all ESX hosts

Put a VM on the new LUN and prove that there are no issues associated with the LUN by repeatedly taking snapshots and monitoring the vmkernel log

Present the LUN to the Windows backup host with the MPIO DSM. – Now the errors start occurring with this new LUN.

Un-present the LUN from ALL hosts (ESX and Windows)

Reboot one of the ESX hosts and re-present the new LUN to it. – The errors are no longer occurring with this LUN

It appears that access to a LUN from all hosts must be stopped to clear the locking so we did a fair amount of planning and undertook a full shutdown as follows:

Uninstall HP Lefthand DSM for MPIO from Windows hosts (We still want to try to present the VMFS LUNs back to the backup server at some stage)

Shutdown all VMs

Shutdown all the ESX hosts

Shutdown Lefthand (Shut down the management group, not the nodes individually)

Power up the Lefthand and make sure all the nodes are up and volumes are all online

Power up the ESX hosts and VMs

After doing this all LUN locking errors are gone from the logs. Everything seems very solid, snapshots are working and the flakiness is gone!

Any comments from anyone who has an understanding of the inner workings of iSCSI, lefthand, VMWare SCSI reservations/locking etc who can shed some light on what’s actually happening here would be much appreciated! Or just if you’ve had a similar experience I’d be keen to hear.

Installed Validity Fingerprint Sensor Driver v.4.0.15.0. Result: Fingerprint options are visible in ProtectTools and fingerprint enrolment is successful but there is no option at Windows logon to unlock with fingerprint

Un-installed ProtectTools v.5.1.1.744

Installed ProtectTools v.5.0.4.669 (the version I have on my other Elitebook, which works). Result: Windows logon now displays the swipe-to-logon option but when you swipe it throws an error: “the system cannot find the specified file“.

Installed ProtectTools v.5.1.1.744 again over the top of v.5.0.4.669, which upgrades v.5.0.4.669 to v.5.1.1.744. Result: Everything is working as it should!

I know it’s not meant as a definitive technical guide but I had a good laugh when I came across this flow chart in HP’s LeftHand SAN / VMWare vSphere 4 guide.

Or in engineering speak: “Tighten it up ’til it breaks then back it off half a turn!”

Sorry if you dropped by with a legitimate question on LUN management! Actually the question in the chart about snapshots and remote copy is very valid and the very first thing you must consider when designing your LUN layout.

It’s been a long time since my last post! I’ve been so busy working on the house (but nothing really blog-worthy). Anyway today a colleague and I went through and set up the Citrix Web Interface (5.x) with single-sign-on using Microsoft ISA 2006.

The Web Interface and Secure Gateway run on the same server but are configured completely independently of each other, they could just as well be on separate servers if the load warranted it. They both listen on port 443 on separate IP addresses with separate certificates.

On the face of it, it seems quite straight forward – configure the Web Interface for pass-through authentication, create an ISA web publishing rule using our common SSO web listener with forms based authentication and configure an authentication delegation method. This works just fine as far as getting the user logged in with their list of applications.

Next step – configure the CSG to listen on a separate IP address with a separate certificate and configure a NAT rule so the ICA client can connect directly to the CSG. Again fairly straight forward.

Here’s the catch. Using pass-through on the web interface doesn’t work with the CSG. Pass-through mode expects the client to be domain-joined, inside the corporate network and able to authenticate directly with the XenApp Server (as opposed to being pre-authenticated by the XML/STA services). The result with the above configuration is that when the user launches an application they are presented with a Windows login dialog which defeats the purpose of single-sign-on.

The solution – ASP.Net “jump” page on the web interface.

Configure the Web Interface in “Explicit” mode rather than pass-through. This is the standard method where the user is presented with the Citrix Web Interface login form.

Create an ASP.NET jump page which extracts the username and password from the HTTP request, creates a form with hidden fields, then uses JavaScript to POST the form to the Web Interface login page.

This all happens instantly without the user noticing. Don’t configure the Web Interface as the default IIS page; instead place the jump page in the root of the IIS web site and set the document priority to serve it up first. Here’s the code: (Download link at the end of the post)

AuthPass.aspx.cs – Note: The domain field needs to be set to your own domain or removed completely depending on how your users login. The form action needs to point to your Web Interface login.aspx page.

Web publishing rule with SSO WebListener using forms based Authentication

“Basic” authentication delegation (SSL end to end!)

Published logoff URL is set to /Citrix/XenApp/site/logout.aspx

Simple NAT rule for CSG

Web Interface

A new XenApp site is created in the Web Interface Management tool with “At Web Interface” configured for the “Where user authentication takes place” setting

Authentication Method set to Explicit

AuthPass.aspx[.cs] files are placed in the root of the IIS website to handle auto-login

XenApp web site is not configured as the default IIS site. AuthPass.aspx is set as the default page on the IIS web site

Secure access mode is set to “Gateway Direct” but this will depend on your environment

CSG

The CSG is entirely configured using the CSG Management Console

A specific certificate for the CSG is selected

The CSG is set to listen on a specific IP address rather than the default of all IPv4 addresses (an additional address must be added to the server’s TCP config).

“Direct” mode is configured for the Web Interface location

There you have it. Citrix WI/CSG SSO for ISA. I know it’s a bit of a hack but I spent some time trying to find a way to do this natively with Citrix Web Interface configuration and posted in the Citrix support forums without any success. If there is a more official way to do it I’d love to hear about it.

On a side note…

I found a long delay during login which was fixed by disabling NetBIOS over TCP/IP on the web interface server

I would be out in the sun if there was any! But since it’s raining I’m doing a bit of WordPress development for a project I’m working on.

The wp_signon() function logs a user in, but for some reason after doing so the global user variables such as $current_user and $user_ID are not populated until the page is refreshed, and calling get_currentuserinfo() doesn’t populate them either. The is_user_logged_in() function also returns false. I think the problem has something to do with the cookie authentication process.

There is however a solution to the problem and it doesn’t involve refreshing the page. When you call wp_signon() it returns a user object if the sign-on was successful, so we do have all the user info we need, it’s just not available in the standard global way. To fix this we just need to call wp_set_current_user(), specifying the ID of the user object which wp_signon() returned.

To demonstrate, here’s an example of creating a user, logging them in and submitting a post all in one page refresh.

<?php
_my_newuser('joe', 'itsasecret', 'joe@joe.com', 'Joe');
_my_user_login('joe', 'itsasecret');
_my_commit_post($newpost);
_my_current_user();

// Register a new user
function _my_newuser($username, $password, $email, $nickname) {
    global $wpdb;
    $user_login = $wpdb->escape($username);
    $user_pass  = $password;
    $user_email = $email; // note: use the $email parameter here, not $username
    $userdata   = compact('user_login', 'user_email', 'user_pass', 'nickname');
    return wp_insert_user($userdata);
}

// Log a user in and set them as the current user
function _my_user_login($username, $password) {
    $creds = array();
    $creds['user_login']    = $username;
    $creds['user_password'] = $password;
    $creds['remember']      = true;
    $user = wp_signon($creds, false);
    wp_set_current_user($user->ID); // here is where we update the global user variables
    return $user;
}

// Commit a new post to the DB
function _my_commit_post($postdata) {
    global $user_ID;
    $new_post = array(
        'post_title'    => $postdata->title,
        'post_content'  => $postdata->description,
        'post_status'   => 'publish',
        'post_date'     => date('Y-m-d H:i:s'),
        'post_author'   => $user_ID,
        'post_type'     => 'post',
        'post_category' => array(0),
        'tags_input'    => $postdata->tags
    );
    $post_id = wp_insert_post($new_post, true);
}

// Print info about the user who is now logged in
function _my_current_user() {
    global $current_user, $user_ID;
    if (is_user_logged_in()) {
        echo '<br />User logged in OK<br />';
        echo 'User ID is: ' . $user_ID . '<br />';
        echo 'User login is: ' . $current_user->user_login . '<br />';
    } else {
        echo 'No user is logged in<br />';
    }
}
?>

Here’s a quick one that might save you some time. Even after installing the Exchange auto accept agent and registering your resource mailboxes against it you still don’t get free/busy information when you go to book the resource.

If these are new resource mailboxes which haven’t yet had a booking all you need to do is create a booking and the free/busy information will start working.

Here’s a modular vbscript framework I wrote for deploying software and performing other multi-step tasks across reboots.

I won’t list the whole script in the post because it will get messy with all the wrapping. Instead I’ll just describe the main features and you can download the zip file below. The screenshot will also give you a reasonable idea about how it works. It’s a bit rough and doesn’t have robust error handling – it’s really just to give you some ideas about putting together modular vbscripts.

The main script is a .wsf (Windows script file). I use WSF because you can’t include other script files in a .vbs file. The framework gives us the following benefits:

Code reuse – We can write core functions that are available across all modules
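The include mechanism itself is just WSF markup; a minimal sketch looks like this (the module path and function name are placeholders):

```
<job id="deploy">
  <!-- pull shared functions and module files into one script context -->
  <script language="VBScript" src="modules\InstallOffice\InstallOffice.vbs"/>
  <script language="VBScript">
    ' main logic goes here and can call InstallOffice()
  </script>
</job>
```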

Script logic

Get the step which we’re up to from the registry. Registry key doesn’t exist yet (Step 0)

The Select statement executes the code for step 0

Set the script to run the next time Windows starts

Set auto-logon so Windows logs on automatically when it boots up

Run a module e.g. InstallOffice()

Call “NextStepReboot()”. The current step is recorded in the registry and the system reboots

The system has rebooted and logged on automatically and the script starts again

Get the step which we’re up to from the registry (step 1)

The Select statement executes the code for step 1

Run another module.

If we don’t need a reboot then we’ll just call NextStep() to record the step we’re up to

The code loops back around to the select statement and we now go to step 2

Run another module function

Clear auto-logon

Clear script start-up

Exit
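Put together, the step logic described above can be sketched like this (VBScript-flavoured pseudocode; the function names follow the descriptions above, and NextStepReboot() never returns because the machine restarts):

```
Do
    intStep = GetStep()        ' read HKLM\SOFTWARE\RGFramework\Step (0 if missing)
    Select Case intStep
        Case 0
            SetStartupAndAutoLogon()
            InstallOffice()
            NextStepReboot()   ' record the step and reboot; script re-runs at logon
        Case 1
            InstallSomethingElse()
            NextStep()         ' record the step, no reboot; loop continues to step 2
        Case 2
            RunFinalModule()
            ClearAutoLogon()
            ClearStartup()
            Exit Do
    End Select
Loop
```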

Note:

To start the script from the beginning again, or from a specific step, the “HKLM\SOFTWARE\RGFramework\Step” registry string must be deleted or manipulated. This string represents the last step that was run, so if it is set to 3 then the next time the script runs it will execute step 4.
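For example, to reset the framework to the beginning from an elevated command prompt (assuming the registry path above):

```
reg delete HKLM\SOFTWARE\RGFramework /v Step /f
```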

Modules

A module is just a file which contains a single function. The function has the same name as the module file. Here is an example of a module which installs an application. (\modules\installMyApp\installMyApp.vbs)

While assigning a new add-on domain to Bluehost I came across this problem. When you sign up with Bluehost (and I suspect any other cPanel hoster) your account is associated with a primary domain (the first one you signed up with). If you then add additional domains to your account cPanel decides to add subdomains to your primary domain. Like this:

This is really bad for SEO because you end up with 2 different domains pointing to the same content. If you try to remove it using the cPanel Subdomains tool it throws this error:

There was a problem removing the subdomain. Sorry, the subdomain “someothersite” cannot be removed because it is linked to the addon domain “someothersite.com”. You must first remove the addon domain.

What? Why? Ok whatever. To get rid of this unwanted subdomain you need to use the “Advanced DNS Zone Editor”.

After you delete the subdomain record you’ll still see the subdomain listed in the subdomains tool. You won’t be able to get rid of it from there, however it will be gone from the DNS system so it won’t be accessible or crawlable by Google.

I do want to make it clear that this isn’t a problem specific to Bluehost – it’s a cPanel issue. I actually really like Bluehost. If you’re looking for hosting I really recommend them. This blog is hosted with them and it just works! Unlike other hosters which I’ve found to be painfully slow.

In the CMC add the new nodes by going to “Find Systems” and entering the IP addresses you assigned to the nodes

Under “Available devices” go to the TCP/IP settings of each node and create a bond so the two NICs become one and choose a load balancing type.

Go to http://webware.hp.com/ and generate your license keys. Each unit comes with an entitlement certificate. You’ll need to provide the Feature Key (MAC Address) which can be found in the CMC under “Feature Registration” for each node. When you get the key replace the one that’s in there by default.

Right click the units and add them to an existing management group (or create a new one)

Now that you have the units in the management group, add them to an existing cluster (or create a new one)

Nodes must be of equal or greater capacity to existing nodes in a cluster. If they are of greater capacity then only the capacity of the smallest node in the cluster will be usable – Maybe time to create a new cluster?

This function adds a domain group to a local group but uses alternate credentials rather than those of the user account running the script.
This is handy when you’re doing automated deployments and you’re using auto admin login for post-install configuration.

Here’s the syntax (note the forward slash to specify the name of the group you want to add and the backslash for the credential). The function returns 0 if all is well.

Well it’s been a very busy week and I haven’t had much time for projects but I did throw together a script to duplicate my production WordPress instance so I have a place where I can test updates and changes before rolling them out to my blog. You will need shell access for this but most hosters seem to provide it these days.

First up I used my hosting control panel to create a new sub-domain and pointed it to a new directory. I also created a new blank MySQL database and a user account which had read access to my production WordPress database and full access to the new database.

I copied wp-config.php from my production blog into the scripts folder.

user@myhost.com[/]#vi /home/scripts/wp-config.php.qa

This file contains the WordPress database settings which I edited to point to the new database.

I didn’t want public access to my QA site and I certainly didn’t want Google to index it, so again I used my hosting control panel to password protect the new directory. This just creates an .htaccess file in the directory. I also copied this .htaccess to the scripts folder.

Now on to the script. This script “refreshes” the QA site. When the script is run everything in the QA site is wiped out and re-created as a duplicate of the production site. I used vi to create this script (refresh_QASite.sh) in my scripts directory. You can use whatever text editor your hoster provides. Set the permission on the script to 700 with chmod. This makes it executable and prevents others from reading it which is important because it contains passwords.

I’ll go over it line by line – it looks complicated but it’s really not. I’m not sure what your browser will do with wrapping and special characters, so if you want to copy or paste bits you’re better to get it from the plain text version, which is linked at the bottom of the post.

This line clears out all the tables in the database. This is handy because in a shared hosting environment you almost certainly won’t have permission to create and delete entire databases except through the web control panel. Make sure you type the correct database name as this will make short work of your production database! Thanks to Eddie for this great tip!
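The command itself hasn’t survived in this copy of the post, but the usual shape of this trick is to pipe mysqldump’s DROP TABLE statements straight back into mysql – the credentials and database name below are placeholders:

```shell
# Placeholder credentials/database name. The full pipeline (not run here)
# empties the QA database by feeding mysqldump's DROP statements to mysql:
#   mysqldump -u qauser -p'secret' --add-drop-table --no-data qadb \
#     | grep '^DROP' | mysql -u qauser -p'secret' qadb
# The grep stage in isolation, against a sample of mysqldump output:
printf 'CREATE TABLE `wp_posts` (id int);\nDROP TABLE IF EXISTS `wp_posts`;\n' \
  | grep '^DROP'
```

Only the DROP lines reach mysql, so every table is dropped but the (empty) database itself is left in place – exactly what you need when the control panel owns database creation.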

rm -rf /home/publicHTML/qasite

This wipes out the current QA Site directory. Again be careful to get the right path!

Because we wiped out the QA site directory and copied over it with the production files, our QA site still has the production wp-config.php, which means it is pointing to the production database, so we copy (and overwrite) the wp-config.php with the one we prepared earlier in our scripts folder. We also copy over the .htaccess file which secures the QA site.
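Taken together, the wipe/copy/overwrite steps can be sketched as below. This toy version runs against temporary directories instead of the real paths so it’s safe to execute; all paths and file contents are placeholders:

```shell
# Toy version of the refresh sequence using temp dirs in place of the
# real /home/publicHTML paths. Everything here is a placeholder.
prod=$(mktemp -d); qa=$(mktemp -d); scripts=$(mktemp -d)

# Fake production site plus the prepared QA config and .htaccess
echo 'prod-db-settings' > "$prod/wp-config.php"
echo 'qa-db-settings'   > "$scripts/wp-config.php.qa"
echo 'AuthType Basic'   > "$scripts/.htaccess"

rm -rf "$qa"                                        # wipe the QA site directory
cp -a "$prod" "$qa"                                 # duplicate production into its place
cp "$scripts/wp-config.php.qa" "$qa/wp-config.php"  # point at the QA database
cp "$scripts/.htaccess" "$qa/"                      # password-protect the QA site

cat "$qa/wp-config.php"                             # → qa-db-settings
```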

Last of all I execute a “fixup.sql” file which changes the blog name so it’s obvious when I’m in the WordPress admin panel that I’m working on QA not production. This file could contain any number of customizations that you want to do on your QA site.

*UPDATE – Warning for WordPress.com Stats/Jetpack Users*

It turns out that it’s a bad idea to have WordPress.com Stats running at 2 separate URLs. It can update your URL at wordpress.com with your QA URL. Not good.

I was too lazy to script disabling the Jetpack plugin so I just added another section to my fixup.sql which disables all plugins.
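My actual fixup.sql isn’t reproduced here; a minimal sketch of what such a file can look like follows. The wp_options option names are standard WordPress, and 'a:0:{}' is an empty serialized PHP array, which is how WordPress represents “no active plugins”. The blog title and database credentials are placeholders:

```shell
# Hypothetical fixup.sql: rename the blog so QA is obvious, and disable
# all plugins by blanking the serialized active_plugins list.
cat > fixup.sql <<'SQL'
UPDATE wp_options SET option_value = 'QA - My Blog' WHERE option_name = 'blogname';
UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';
SQL
# Applied with (placeholder credentials): mysql -u qauser -p'secret' qadb < fixup.sql
grep -c '^UPDATE' fixup.sql
```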

That’s it! A fresh copy of my blog whenever I need it, all with a single command.

This is based on WordPress but obviously it would apply to almost any site you would think of.

The key points are

Appropriate find/replace across the entire database using sed

Keeping a copy of the config file which points the application to the correct database

Setting permissions on your script to 700 so it’s not world readable

Making the QA site non-public so that Google won’t index it. Very important for SEO

Making sure you’re very careful about database and path names when you are deleting files or dumping tables
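The first point above, the sed find/replace, typically runs over a SQL dump on its way between the two databases; the hostnames below are placeholders:

```shell
# Placeholder hostnames: rewrite every reference to the production URL
# in a SQL dump so the QA copy points at the QA sub-domain, e.g.:
#   mysqldump ... proddb | sed 's|www\.myblog\.com|qa.myblog.com|g' | mysql ... qadb
# The sed stage on a sample line:
echo "INSERT INTO wp_options VALUES ('siteurl','http://www.myblog.com');" \
  | sed 's|www\.myblog\.com|qa.myblog.com|g'
```

One caveat: if the two hostnames differ in length, serialized PHP values in wp_options can break, because they embed string lengths; keeping the QA hostname the same length (or using a WordPress-aware search/replace tool) avoids this.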

Note for Mystique theme users

For some reason Mystique resets its theme settings after being copied to a different location. For the life of me I couldn’t figure out why. I don’t know how it knows after everything has been “find/replaced”! Anyway, I worked around it by calling the site with wget, then re-inserting the settings into the wp_options table in my fixup.sql file.

A couple of weeks ago I posted about the Inkjet/Toner Hybrid PCB printing method. Now it’s time to drill the through holes, a job which I never find much fun – I guess that’s why I’ve put it off for a couple of weeks, that combined with the fact that I didn’t really have the right tools. My big old clunky drill press really isn’t up to the task, it has far too much horizontal play and is nowhere near fast enough for 0.8mm drill bits.

I would have loved to pick up a Dremel 300, Dremel Drill press and Dremel mini-chuck. Dremel make fantastic tools but all up it would’ve cost me over $200NZD which I really didn’t feel like spending just to drill some tiny holes. I already had a pretty decent Makita power drill so I decided to see what I could knock up from junk I had in the garage. Here’s what I came up with.

Materials Used

Various scraps of wood and MDF I had lying around the garage

Rack mount server rail (Cut in half) from an old Dell server – this provides the slide mechanism

A bunch of wood screws I had in my collection

Mini chuck (This was the only thing I had to purchase – TradeMe $17NZD inc P&P )

Short tension coil spring from a box of bits’n’pieces

Tools Used

Table saw

Hand saw

Hack saw

Battery drill

Screw drivers

Google SketchUp

That’s it! ~75 tiny holes later and no bent or broken drill bits! It’s nice and accurate but very loud!

After a fair amount of trial and error I’ve got a reasonable method for printing a PCB with a silkscreen using a modified inkjet printer. This isn’t a new idea but I haven’t really seen the inkjet/toner hybrid method detailed so I thought I’d share my experience and tips. The great thing about the hybrid method is that it stands up to Cupric Chloride etchant which is helpful when you don’t have access to Ferric Chloride.

The videos are a bit long but I wanted to share some of the detail, I hope it’s helpful.

This method comes with 2 warnings (the latter you should take seriously)

Acid is dangerous

Epson inkjet printers are dangerous (for your sanity)

Part 1

Part 2

Here are some shots of the printer modifications viewed from the rear of the printer. It’s a bit of a pain to get a hacksaw blade in and cut the carriage but fortunately you only have to cut it in 2 places.

Left side cut

Left side spacer washer

Right side cut and spacer bracket.

And here’s a shot of the board just after etching..

…and of the silkscreen

Hopefully in the next few weeks you’ll be able to see the project for which this board is intended. No prizes for guessing what it’s going to be!

Using PowerControl with one of the above and a simple relay or solid-state relay arrangement you can control any appliance in the house.

PowerControl operates in a server/client architecture. It consists of the following components:

PowerControl Server Service (Server)

This Windows service is responsible for controlling hardware, whether it be attached via parallel port, print server or K8055. The PowerControl Config tool is used to add and remove devices which are controlled by this service. The PowerControl service can be managed from any PC on the network using the PowerControl Config tool (all tasks except configuring the local service instance; this must be done by running the config tool locally on the PC which runs the service).

PowerTray (Client)

PowerTray provides a system tray menu which is used to interact with a PowerControl server to turn devices on and off. It can interact with any number of PowerControl servers and can be run on any number of PCs on the local network. This means multiple PCs have access to the same devices. The PowerControl service maintains the state of these devices and updates each instance of PowerTray. For example, you might turn on a lamp from PC1 then turn it off later from PC2.

PowerCmd(Coming soon!)

PowerCmd is a command line tool which allows you to interact with a PowerControl service. This allows you to control devices from other programs and scripts.

I like to use VMware Player to compartmentalise different tasks on my laptop. For example I have a Visual Studio development virtual machine. It’s got all the SDKs and tools I need for development. This has a couple of advantages:

It keeps things tidy. I don’t have to have all sorts of SDKs and tools on my local OS; at the same time my development environment is also kept clean

The development environment is very portable. If I want to rebuild my laptop I can just backup the entire VM and I don’t have to worry about setting it up all over again

The only issue I’ve had is that I’ve never been able to get trackpad scrolling to work within the virtual machine. This makes VS development a bit painful!

One of the new features of VMware Player 2.x is “Unity”; it allows you to use apps within the guest VM as if they were running on your local OS. Once you enter Unity mode you will see the virtual machine window minimise and all the apps that you had running inside the VM appear in your local taskbar. It’s a bit like the way Citrix seamless window app publishing works. In Unity, trackpad scrolling works!

Now you get the best of all worlds – compartmentalisation of environments with a native OS feel and trackpad scrolling!

After about 2½ years of constant use my trusty Hauppauge HVR-2200 dual DVB-T tuner card started to fail. Just for fun I bought a dirt cheap USB receiver off eBay.

Total outlay was $15.09 NZD shipped. That’s $10.33 for everything you see above except the MCX to ‘F’ adaptor which was $4.77. It took almost 3 weeks to arrive from China. Given the price, I wasn’t really expecting much, but it was so cheap I thought it was worth getting it just to play around with.

Apparently there is a variety of different chipsets used in these devices, all of which come wrapped in the same outer casing. One difference between the various types is the LED: on some of them it’s red, on others it’s clear. From what I’ve gathered these are the possible chipsets:

Intel CE6230 (Intel CE9500 reference design)

e3C EC168

Afatech AF9015

I got the one with the Afatech AF9015. I didn’t want to plug it straight into my nice, clean, recently rebuilt HTPC server in case the drivers got all ‘tangled up’. Instead, I plugged it into my laptop running Windows 7 Pro x64. Windows immediately detected the device as a Leadtek WinFast DTV Dongle Gold. I was happy to see this, because I had no intention of using the bundled driver mini-CD – who knows how old and buggy that driver is!

The device installed cleanly without any issues, so I decided to plug it into my HTPC server which is running Windows 7 Ultimate x64. Of course I expected the device to install automatically, as it had on my laptop, but it didn’t, despite using the search online option. Fortunately I now knew that the Leadtek driver was compatible, so I went to the Leadtek site and downloaded the driver. As always, I extracted the setup package using 7-zip, which gave me the plain driver folder as opposed to running the setup, which will install who knows what!

Windows found the driver and installed it correctly!

After restarting the MediaPortal TV Server service, it detected the device and I set it to highest priority so I could put it through its paces. The first time it tunes a channel after a reboot it takes approximately 30 seconds, but subsequent channel changes are quite fast ~3 seconds.

I’m pleased (and surprised) to report that this cheap little tuner has been working flawlessly for an entire week. How long it will last remains to be seen! For $10 I can definitely recommend this device – the only issue is that you might get a totally different chipset; that is, a totally different device for which none of the above would apply. If you purchase from the seller I linked to at the beginning of the post there’s a good chance you’ll get the same device.

The included antenna and remote are complete rubbish and will quickly find their way to the bin!

Against my better judgement, and for no other reason than the fact that I wasn’t running the latest version, I decided to upgrade the firmware on my Samsung series 6 LCD (LA40B650) from version 1007 to version 2002. Now you’d think that a firmware downloaded from the New Zealand Samsung site would be suitable for a New Zealand TV, but after upgrading, the country was set to Australia and was locked there. The problem with that is that in Aussie they use 7 MHz channel bandwidth for DVB-T whereas we use 8 MHz.

This is how I got it fixed. (*Use at your own risk*)

*EDIT*

If you have this problem, make sure you’re on the tuner input when you try to change the country in the settings. It seems like my whole problem might have been because I was on the HDMI1 input while trying to change it!

*EDIT*

I went in to the service menu.

INFO, MENU, MUTE, POWER

The TV comes on with the service menu displayed. I did a factory reset. When the TV came on, it displayed the setup wizard, at which point you can select the country – unfortunately New Zealand isn’t an option. I selected “others“. I then went back into the service menu and changed the region from Asia_ATV to Asia_DTV. The TV is now locked to Australia again! But now I can change the country code by entering a code. I just entered 000. It then let me choose Australia, New Zealand or Singapore. Done!

On a side note: I’ve tried for several hours to get RS-232 control (i.e. power on/off, change input etc.) working, but to no avail. I know the serial cable is working because when I put the RS-232 port into debug mode (via service menu) I get plenty of debug messages in the terminal window. If anyone else has had success please let me know via comment.

I came across this today. I’m not sure what’s causing it or how to fix it and probably won’t get a chance to dig any deeper but thought I’d comment anyway.

A Citrix web client sitting behind a Belkin F5D8635 wireless/ADSL router doesn’t get their local client drives mapped in their session. This is seen with several Citrix client versions, server versions MPS 4.0 and 4.5.

This has been observed on two separate Belkin routers of the same model.

I’ve also noticed that trying to establish a Microsoft PPTP VPN connection from a client behind this router also fails.

* Update *

Belkin now has a pre-release firmware which they say fixes the PPTP problem. I haven’t tested it yet so can’t confirm whether it fixes the Citrix drive mapping issue.

This one had dogged me for ages. There are a number of possible solutions out there but none helped my situation. Today I finally cracked it!

Here’s the situation:

ISA 2006 standard edition in a perimeter network

Published HTTPS apps (MOSS 2007 + other misc ASP.NET apps)

Weblistener with forms based authentication configured to use LDAP with secure connection and without GC

Internal CA used to provision certs to domain controllers

Password management features enabled (Hence the requirement for secure connection and no GC)

When users log in they get a delay of up to a minute before the authentication screen disappears and their application begins to load. I did some traffic dumps and found that this delay was occurring during the TLS handshake between the ISA server and the LDAP server (domain controller).

Immediately after the LDAP server issues the “Server Hello” there would be a delay, or what appeared to be some kind of a time-out, of about 15 seconds. This occurred several times throughout the authentication process which resulted in the long delay. I confirmed this by disabling the secure connection to the domain controller. This got rid of the delay but of course this wasn’t an option because without SSL you can’t have password management features enabled.

After much digging I eventually ran Process Monitor on the ISA server and found an RSA machine key which the firewall service (wspsrv.exe) was trying to access but didn’t have permission to read.

Every time it tried to read the key it introduced a few seconds delay as can be seen in the time column above.

The solution: give the NETWORK SERVICE account read access on that file (not the whole folder). Of course the file name will be different for each installation, so you’ll need to use Process Monitor to find out which file can’t be read by wspsrv.exe.
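For reference, granting that permission can also be done from the command line. This is a sketch only – the key file name is unique per installation, so the placeholder below must be replaced with the file Process Monitor identified:

```batch
rem Placeholder file name - substitute the key file found with Process Monitor.
rem /E edits the existing ACL; /G grants NETWORK SERVICE read access.
cacls "C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys\<keyfile>" /E /G "NETWORK SERVICE":R
```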

So this raises a few questions for me, some easier to answer than others:

How, when and why are these machine keys used? *update* These are private keys for certificates installed on the server. They are used to decrypt and sign data in conjunction with the corresponding public key. They are called “machine keys” because they belong to the local “machine” account, as opposed to a user account, whose keys would be under C:\documents and settings\username\…

How and by what are these machine keys created? *update* When you create or import a certificate, e.g. a public certificate for SSL web publishing.

Is there a specific key associated with an application? *update* The private key is associated with a certificate. Use FindPrivateKey.exe to help you match keys to certificates

Why doesn’t ISA server set the correct permission on the machine key file during installation? *update* Not really an ISA fault – ISA runs as NETWORK SERVICE so if that doesn’t have access neither does ISA. I’m still not sure how the permissions get messed up.

Why doesn’t ISA throw an error to the event log when it couldn’t read the file?

Why does everything still work even though it can’t read the file! (albeit with significant delay)?

After breaking my head over this for about 3 days and finally coming up with a solution, I thought I’d share.

Here’s the situation:

2 Forests with a one way external trust between them.

Domain B in Forest B trusts Domain A in forest A.

An ASP.NET Application running in Domain A

Configured to impersonate.

Application Pool running under a service account from domain A which is trusted for delegation

Application URL (DNS name) is registered as an SPN against the above service account used for the application pool.

A user (Bob) from Domain A has permission to modify properties and reset passwords of users in Domain B

When using the Active Directory Users and Computers MMC, Bob is able to reset passwords and modify properties of users in the trusting domain (Domain B), i.e. permissions are OK!

When Bob uses the ASP.NET application (System.DirectoryServices.DirectoryEntry), Bob is able to modify properties of users in the trusting domain, i.e. Bob’s credentials are successfully delegated to the domain controller in the trusting domain, i.e. delegation is working!

When the ASP.NET application specifies Bob’s username/password in the DirectoryEntry object, e.g. DirectoryEntry de = new DirectoryEntry("LDAP://domainB.com/CN=Steve,ou=test,dc=domainb,dc=com", "bob", "password1"), then invokes the SetPassword method, the password is successfully set, i.e. SSL is working correctly, the necessary firewall ports are open, etc.

When Bob uses the ASP.NET application and invokes the SetPassword method, now using delegation, a COMException 0x80072020 is thrown.

I’ve spent some hours trying to work this out: network dumps with Wireshark (no good for LDAPS), Kerberos logging on DCs and the webserver, etc. I have also opened up the firewall to allow DirectoryEntry.Invoke("SetPassword") to try its other methods. I just can’t get this to work in a domain trust environment with delegation. I have however found that using System.DirectoryServices.Protocols to reset the password in a trusting domain with a delegated credential does work! For this I’m very relieved; I was almost at the point where I was going to go back to our dev guys and tell them to switch back to using a superuser account instead of the delegation method which I had been pushing on them for weeks!

Frustrating though that I still don’t know why this method works where the standard System.DirectoryServices method doesn’t!

This example was taken from chapter 10.16 of The .NET Developer’s Guide to Directory Services Programming by Joe Kaplan and Ryan Dunn. I strongly recommend you get this book. It’s not the newest book out, but it’s still packed with really relevant stuff and lots of “gold nuggets” from 2 guys who really know their stuff.
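Some background that helps when debugging either API: over LDAP, Active Directory password writes target the unicodePwd attribute, and the value must be the new password wrapped in double quotes and encoded as UTF-16LE, sent over an encrypted connection (which is why LDAPS matters here). The encoding step can be demonstrated with standard tools – the password below is a placeholder:

```shell
# AD's unicodePwd value: the quoted new password, UTF-16LE encoded.
# (Placeholder password; a real write must travel over LDAPS.)
printf '"NewPassw0rd!"' | iconv -t UTF-16LE | base64
# Round-trip to check the encoding:
printf '"NewPassw0rd!"' | iconv -t UTF-16LE | base64 | base64 -d \
  | iconv -f UTF-16LE -t UTF-8
```

System.DirectoryServices.Protocols sends exactly this kind of direct attribute modification, whereas Invoke("SetPassword") tries several mechanisms in turn, which may be part of why the two behave differently across a trust.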

I came across this one today when one of our web apps in the perimeter network stopped working for external users after a switch failure on our internal network.

We run split DNS, so if you ask an internal DNS server for the IP address of webapp.ourdomain.com it will tell you the private address of the perimeter webserver, but if you ask a DNS server on the internet you will get the public address, which is NAT’d to the ISA server which publishes the app. Now if you ask a DNS server in the perimeter network, it will forward the DNS request to an internal DNS server. If the internal DNS server is unavailable, the perimeter server will use recursion to resolve the address and ultimately end up resolving and caching the public address of the webapp. By now you can probably guess what happens.

Internet user connects to webapp.ourdomain.com which is resolved to 203.271.47.15

The connection is NAT’d by our external hardware firewall and received by the ISA web listener / publishing rule.

The ISA server resolves the name in the “To” section of the web publishing rule using a perimeter DNS server. The address is from the internal domain (ourdomain.com), so the perimeter DNS server tries to forward the request to an internal DNS server; this fails, so the perimeter DNS server uses recursion to resolve the name and returns the public internet address instead of the private address of the web server in the perimeter network. We now have a loop, which results in the above error being logged.

There are a couple of ways to deal with this.

Disable recursion on the domain forwarding in the DNS server settings:

Explicitly specify the IP address in the “To” section of the publishing rule.

Another one of those “Only in this exact and unlikely situation” type posts but oh well!

I’ve realised that I’m just not going to have time to complete my MOSS back-to-back series in the way I started out in Part 1 so instead I’m going to combine everything in a single post based around my original documentation and go from there. Please feel free to ask questions, I’ll answer them to the best of my ability and continue to add detail to this post.

I found the Microsoft SharePoint extranet deployment documentation pretty lacking; from what I could find they don’t go much further than: “here are a few ideas, have fun!” Well I did have fun! And as always with these things the devil is in the details.

Objectives

Primary objective – Making SharePoint available over the internet

A single URL for external users and internal corporate staff regardless of where they access SharePoint from

External member accounts contained in a separate domain to internal corporate staff accounts

Access secured with publicly trusted SSL certificate

Full backend access and functionality for corporate staff

Secondary Objectives

Implement single sign-on, forms based authentication and reverse proxy for all web-based applications hosted in the perimeter network

Guiding Principles

Internal users logged on to internal workstations are seamlessly authenticated to the MOSS site

Internal users accessing the MOSS site from the internet are presented with a forms-based logon screen; they authenticate using the same username and password as they do for internal workstation logon

The same URL is used whether the MOSS site is accessed from internal network or the internet

Perimeter (non-staff) users access the MOSS site from the internet using an account from the perimeter domain

Both internal and perimeter users can reset their password from the forms-based logon page

Once users have logged on they can navigate to other applications without having to enter their username and password again.

The perimeter domain trusts the internal domain

The internal domain does not trust the perimeter domain

MOSS WFE applications in the perimeter can use Kerberos to delegate credentials of internal users to other applications on the internal domain, so internal users have access to all the same resources regardless of whether they access from the internet or from an internal workstation

Solution Overview

The solution is built on a back-to-back firewall topology; it uses forms based pre-authentication and server publishing technologies available in Microsoft’s ISA Server 2006. While the hardware firewall is the first point of entry from the internet, the MS ISA firewall in the perimeter network should be considered the “front firewall” as it deals with server publishing and user authentication.

From a MOSS 2007 point of view the Microsoft Split back-to-back topology scenario has been followed; MOSS web front-end servers are located in the perimeter network while the application and database servers reside in the internal network.

A one-way Active Directory trust exists between the perimeter domain and the internal domain. This is a non-transitive trust where the perimeter domain trusts the internal domain but not the inverse. This serves two main purposes: it allows Windows authentication and delegation to occur between services in the perimeter domain and the internal domain, and at the same time protects sensitive data such as payroll databases on the internal network.

The existing internal network
Forest with 2 domains:

corpforest.com (empty forest root domain)

corp.com (internal production domain, contains accounts for all corporate staff)

The new perimeter network
Forest with a single domain:

perimeter.corp.local (perimeter production domain, contains accounts for all members and external collaborators)

Authentication

Identity and Single Sign On

Everyone who accesses applications in the perimeter falls into one of the two following categories; these categories determine which domain the user account is created in.

Internal Corporate Staff:
Users that require access to perimeter applications such as SharePoint but also require access to internal resources/applications such as Citrix, payroll / finance databases, and Exchange Email. These users reside in the current corp.com domain

Members:
Users that require access to perimeter applications such as SharePoint but do not require access to internal resources/applications. These users will reside in the new perimeter domain (perimeter.corp.local). External vendors/collaborators also fall into this category

The ISA server in the perimeter domain is configured to support single-sign-on of users accessing applications on the corp.com DNS name space. This means that a user who logs in to one application will be able to follow a link to another without having to re-authenticate. E.g. A user logs on once to sharepoint.corp.com and can then hop to outlook.corp.com then to citrix.corp.com all without having to re-enter their username/password.

Forms Based User Authentication

Users who access web applications from the internet must first authenticate at the front MS ISA firewall. When a user attempts to access a web application (e.g. SharePoint – https://sharepoint.corp.com) in the perimeter network he will first be presented with a forms based (HTML username/password) authentication page at the ISA server. At this point the user can enter a credential from either the perimeter domain or from the internal domain. Once the user’s identity is validated the ISA server will proxy the application to his browser.

Authentication Protocols

The diagram below outlines the steps taken to authenticate and/or delegate user credentials. Basic authentication is used from the ISA server to the published destination IIS server. This is done because cross forest Kerberos delegation is not supported by ISA server. Even though the IIS server is a member of the same domain as the ISA server, the IIS application pool is running under a service account from the internal domain, which is done to facilitate Windows authentication from the application to the backend database server on the internal network.

The basic credentials are secured using SSL between the ISA server and the destination server.

Once the basic credentials reach the published IIS server it will use Kerberos to authenticate the user against the appropriate domain. Using Kerberos at this stage means that the user’s credentials can be delegated to a backend server, e.g. SQL Server Reporting Services. The result is that internal users get exactly the same experience outside the network as they do inside.

Configuration

Network Configuration

As shown in the diagram, a route relationship is defined between the perimeter and internal networks instead of a NAT relationship; this is required to facilitate the domain trust relationship and Kerberos authentication between the two domains.

Routing

All perimeter servers specify the front firewall (ISA02) as their default route

To maximize routing efficiency a persistent static route from the perimeter network to the internal network via the back firewall (ISA01) is added to all perimeter servers. The following command is run during initial server build.
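The command itself hasn’t survived in this copy; on Windows a persistent static route is added with route and the -p flag. A sketch with placeholder subnet and gateway addresses:

```batch
rem Placeholder addresses: route to the internal subnet via the back
rem firewall's perimeter-side interface. -p persists the route across reboots.
route -p add 192.168.10.0 mask 255.255.255.0 10.1.1.1
```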

Public access from the internet is NAT’d via the hardware firewall. The diagram below shows the complete network path taken by a user when accessing a web app from the internet. Note the web listener listens on multiple IP addresses; multiple web publishing rules use this web listener.

Back Firewall

Firewall Policy

This following table shows the firewall policy on the back firewall (ISA02) that relates to communication between the internal network and the new perimeter network e.g. MOSS frontend/backend/database access. Kerberos, LDAP etc.

This represents the bare minimum of rules as they relate to this document; you’ll certainly have your own additional rules.

Click for an MS Word Version

Front Firewall

Web Publishing Rule Configuration

A single ISA SharePoint publishing rule is used to publish the load balanced MOSS web front end servers. Configuration can be considered default as per the MS ISA SharePoint publishing rule unless detailed below.

Destination server (To)

The FQDN of SharePoint (sharepoint.corp.com) is specified; this resolves to 172.16.1.50, which is the virtual IP address of the load balanced cluster of MOSS WFE servers. We choose to forward the original host header since the internal and external names are the same. Requests must appear to come from the original client, otherwise load balancing based on IP address won’t work.

Listener

The listener selected (Perimeter Web Listener) is used by all FBA based applications in the perimeter network; this enables SSO functionality.

Public Name
The public DNS name used to match the publishing rule.

Bridging
Connections to the internal servers are only made via SSL, and on the default port (443).

Authentication Delegation
Credentials are delegated to the target server using basic authentication. Detailed information on why basic delegation is used can be found in the Authentication Protocols section under Solution Overview.

Web Listener / Pre-authentication

All IP addresses that the web listener will listen on are selected here. There is a separate address defined for each application that makes use of this web listener. Traffic is NAT’d by the hardware firewall from public IP addresses to these DMZ addresses. e.g. the public IP address resolved by sharepoint.corp.com is 258.17.84.142 and is NAT’d to 10.1.1.142

Certificates and SSO

An individual certificate is assigned to each application’s DMZ IP address. This allows multiple SSL secured applications to use the same web listener which consequently means that single sign on can be used across all these applications.

Single sign on is enabled for applications on the corp.com DNS suffix.

Connections

http is redirected to https. The good thing here is that http links that get emailed around when people are working at the office also work when they access from the internet.

Forms

The path to the customised html logon form is specified here as well as the options to allow users to change their passwords and password expiration warning.

Authentication

When users first request an application that is using this web listener they must first be authenticated by the MS ISA server before they are allowed to proceed to the requested application. HTML forms based authentication is used for this initial step; this is where the user enters their username and password.

As shown in the screenshot LDAP is used instead of native Windows authentication; this is due to an issue with password changes in a cross-domain scenario. During testing it was discovered that when using Windows authentication it is not possible for a user to change their password at the HTML logon page if their account is in a different domain to the ISA server. This is especially problematic for seldom-used accounts where the password expires, or for new accounts where the administrator needs to force the user to change their password at next logon. Microsoft ISA 2006 SP1 introduced a bug which also affects the change password functionality when using LDAP authentication; a non-public patch has been released to fix this bug. For details see the known issues section of this document.

Two LDAP server sets are used, one for the perimeter domain controllers and one for internal domain controllers.

Logon expressions are mapped to the LDAP server sets. How users prefix their username determines which LDAP server set is used to authenticate them. The following config uses the “*” wildcard to allow the concept of a default domain: if the login starts with “corp\” then the internal LDAP set is used, otherwise it defaults to the perimeter set. This means that anyone who has an account in the perimeter domain doesn’t need to specify a domain.
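As an aside, the matching logic is simple enough to model. Here's a toy Python sketch (names are mine, purely illustrative) of how the expression list is evaluated, first match wins:

```python
# Toy model of ISA logon-expression matching (illustrative only).
# Expressions are tried in order; "*" is the catch-all default.
LOGON_EXPRESSIONS = [
    ("corp\\", "Internal LDAP server set"),
    ("*",      "Perimeter LDAP server set"),   # wildcard default
]

def ldap_set_for(login: str) -> str:
    """Pick the LDAP server set based on how the username is prefixed."""
    for prefix, server_set in LOGON_EXPRESSIONS:
        if prefix == "*" or login.lower().startswith(prefix.lower()):
            return server_set
    raise ValueError("no matching logon expression")

assert ldap_set_for("corp\\alice") == "Internal LDAP server set"
assert ldap_set_for("bob") == "Perimeter LDAP server set"
```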

Two LDAP servers (domain controllers) are defined for each LDAP server set. For password change functionality to work, a secure connection must be used (LDAPS) and the “Use Global Catalog” option must be turned off. A lookup account in the target domain must also be made available to the ISA server. This is a non-privileged account with a very strong, non-expiring, set-and-forget password which is set during configuration.

A secure SSL LDAP connection requires that port 636 be open to the internal domain controllers on the back firewall; this is noted under the “back firewall” configuration earlier in this document.

A server certificate must also be installed on the domain controllers and must be trusted by the ISA server. This can be achieved by setting up a CA (Certificate Authority) but that’s out of scope for this post.

I will however point out a good tip for testing that the ISA server correctly trusts the DC and is able to make an LDAPS connection. Use the ldp.exe tool from the Windows Server 2003 Support Tools and make a connection on port 636 with SSL enabled. If everything is working correctly it will connect without error, otherwise it will throw some kind of TLS error.
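If you'd rather script the check, the same handshake can be attempted with a few lines of Python (a sketch I'm adding for illustration, not something from the toolkit); an untrusted DC certificate fails the handshake just like ldp.exe throwing a TLS error:

```python
import socket
import ssl

def ldaps_context() -> ssl.SSLContext:
    # Default context: verifies the server's certificate chain and
    # hostname, which is exactly what must succeed for LDAPS to work.
    return ssl.create_default_context()

def check_ldaps(host: str, port: int = 636, timeout: float = 5.0) -> str:
    """Connect to a DC on the LDAPS port and complete a TLS handshake.
    Raises ssl.SSLCertVerificationError if the cert isn't trusted."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ldaps_context().wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. the negotiated TLS version string
```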

Active Directory Trust Configuration

A one-way domain trust is configured between the corp.com domain and the perimeter.corp.local domain. This is done to allow internal accounts to be used in the perimeter domain, which in turn enables Windows authentication to be used when accessing backend resources such as SQL databases. This trust means that deploying MOSS web front end servers is as straightforward as adding the web front end role; all IIS configuration is completed by MOSS and remains valid in both the internal and perimeter domains.

Now you might be wondering why we don’t use a forest trust. Well the truth is I had issues with using a forest trust and if anyone can shed any light on this I’d be very interested. I’d love to set this back up in the lab but I can’t see myself getting the time.

Here’s the situation. On the internal side we have an empty forest root domain of corpforest.com; inside that forest we have the internal production domain called corp.com. On the perimeter we have a single forest/domain called perimeter.corp.local. We create a one-way forest trust (which is transitive) between corpforest.com and perimeter.corp.local. We then take a member server in the perimeter.corp.local domain, install MOSS and join it as a web front end to the MOSS farm based in the internal network. This will automatically create the IIS MOSS web apps and application pools. These application pools are set up identically to the ones on the internal MOSS servers so of course are configured to run under internal service accounts.

Everything seems to be working correctly and the application pools run fine, but when trying to browse to the site IIS writes an error: “The caller is not the owner of the desired credentials”. Despite many hours of digging through logs, traffic sniffing and bashing my head against the desk I was unable to get IIS to work with a service account from another domain when using the forest trust, except if I used an account from the root domain of the trusted forest, which is not ideal as all internal MOSS IIS servers would need to be re-configured. I should mention that this is not an issue specific to MOSS; it’s an IIS issue in general. I stood up another IIS/ASP.NET/Visual Studio test box in the perimeter and it had the same problem.

If both your internal and perimeter domains are single forest/domain I expect you might not have any issues and I’d be very interested to hear your results.

SharePoint

This section only describes MOSS configuration that relates to the perimeter network deployment.

People Picker

To enable the “PeoplePicker” in a one-way trust scenario to search both domains (perimeter and internal) the following stsadm commands must be run.

(Where “Password” is the password for the unprivileged service account “PeoplePickerService” used to perform lookups on the perimeter domain)
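(The command listing itself didn’t survive in this post. For reference, the usual pattern from Microsoft’s one-way-trust people picker guidance looks something like the following; the URL and encryption key are placeholders, and the account/password are the ones described above:)

stsadm -o setapppassword -password <EncryptionKey>
stsadm -o setproperty -pn peoplepicker-searchadforests -pv "domain:corp.com;domain:perimeter.corp.local,perimeter\PeoplePickerService,Password" -url https://sharepoint.corp.com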

AAM (Alternate Access Mappings)

Zones

The following access mappings / zones are configured

Internet users only access sharepoint over SSL in the default zone via https://sharepoint.corp.com

Internal users access sharepoint over plain http in the intranet zone via http://sharepoint.corp.com. This is the default home page for all internal users as set by group policy. Plain HTTP is used for internal users to ensure optimum performance especially across the WAN where the Certeon accelerators are deployed. The URL http://sharepoint is also valid for internal users.

Some SQL Server Reporting Services (SharePoint Integrated Mode) features only work when accessed in the default zone hence internal users wanting to access these features will need to access sharepoint in the default zone via https://sharepoint.corp.com. See the known issues section for more information.

Known Issues

MS ISA Forms Based Password Changes

Valid Account Discovery Vulnerability

A patch in ISA 2006 SP1, as a means to fix a security vulnerability, broke password change functionality when using LDAP authentication (http://support.microsoft.com/kb/957859/). This has been fixed by non-public patch KB959357 (http://support.microsoft.com/kb/959357). Unfortunately this patch re-introduces the security vulnerability. The vulnerability means that when an incorrect password is entered for a valid account and the account is in a password-expired state, a change password form is displayed; while the correct old password must be specified before a new one can be set, this could allow an attacker to discover by brute force that an account name is valid. This can be considered a low risk vulnerability which is outweighed by the need to allow users to change their own password. Hopefully Microsoft will fix this issue in the next major service pack. Below is a summary of the vulnerability behaviour.

Password length / Complexity Policy

When a password is changed using the ISA FBA change password tool the password complexity is checked against the domain that the ISA server is a member of. This means that both the internal and perimeter domains must have the same password complexity / length requirements to ensure consistent behaviour for end users. -I need to confirm this one!

SSRS in SharePoint Default Zone Only

When using SQL Server Reporting Services in SharePoint integrated mode some methods of viewing reports are only supported in the default zone. For example, if you try to open a report from a document library while accessing sharepoint at http://sharepoint.corp.com (intranet zone) instead of https://sharepoint.corp.com (default zone) you will be presented with the following error.

The specified path refers to a SharePoint zone that is not supported.
The default zone path must be used.

The report viewer webpart works correctly regardless of what zone it is accessed in.

I’ve been a bit slack with my blog lately, partly because in October we bought our first house so that’s been taking up a lot of my time. It’s a good solid 1950’s house but VERY original so it needs a LOT of work.

Me plastering the back room getting it ready for painting.

From network engineer to home handyman / plasterer / carpenter! Don’t worry though I’ve got my priorities straight! Structured cabling and network cupboard is almost complete. I’m quite pleased with how it’s turned out so decided to put up some photos.

Cables come up from under the floor into the wall cavity

Fortunately there was a little wee open cupboard off the hall. It’s a good central point to run all the cables back to. I’ve installed a total of 16 network ports: 6 in the lounge, 2 in the dining room and 2 in each of the four bedrooms. The cable is CAT6 and is all run under the floor. I’ve created 3 channels by running 30mm thick strips of pre-dressed pine from top to bottom of the cupboard.

Left channel with bottom capping section installed

The left channel carries the CAT6 up from the floor to the patch panel. It is also used to carry alarm wires down from the ceiling. It has notches which accommodate 3.5mm plywood capping. The right channel is also capped and will be used for carrying power cables. The centre channel is left open and used for running cables between the shelves.

I finally got around to fixing the auto play on start-up problem. The reason it took so long was that I was trying to perfect the auto restart option:

Sometimes when an AC3 file is played, or something else takes control of the sound card, SPDIFKeepAlive gets stopped. The auto restart option restarts playback every 3 seconds. This means that shortly after AC3 playback stops, SPDIFKeepAlive resumes automatically.

Unfortunately I’ve still not got it working properly, at least not for me, depending on your sound card / driver you might get better results. But be warned, on my system after a random amount of time (~1 hour) I get a terrible noise produced by SpdifKeepAlive and I have to exit, hence I don’t use this feature! I’ll try to fix it sometime. I think I’ll need to use threading and play 2 files that continuously overlap.

There are a number of reasons you might see this message but in my case it was because the server I was connecting to was behind a firewall and in a different domain to the one my account was in.

When you logon via RDP, “Terminal Services” will contact the domain which your account is in to query terminal services information about your account e.g. profile path. It does this using RPC to a domain controller.

In my case the server concerned was in the perimeter network and there was no way I was going to open RPC on the firewall to allow it to talk to an internal DC. And since the purpose of RDP to this server was purely administration, I really didn’t care if it couldn’t get my profile info from AD.

Fortunately there is a workaround as described in this Microsoft article, actually the article refers to a different problem, but the workaround is the same.

There are a lot of articles out there on setting up Kerberos Service Principal Names but today I’m going to make it simple. Bear with me as I start off with the basics; by the end of the post it will all be very clear.

Throughout this post I’ll make reference to a scenario of a client computer connecting to an SQL server called sql1.domain.com however the same applies for any service, for example a web server where the client connects via HTTP.

The SQL server service is running under a domain service account called “domain\SQLSVC“. No SPNs have been set yet.

The Basics

Active directory user and computer accounts are objects in the active directory database. These objects have attributes. Attributes like Name and Description.

Computer and User accounts are actually very similar in the way they operate on a Windows domain and they both share an attribute called ServicePrincipalName. An account object can have multiple values defined for its ServicePrincipalName attribute.

The setspn.exe tool manipulates this attribute. That’s all it does.

The Failure

The client wants to access the SQL server so he asks his domain controller: “Please may I have a ticket for accessing MSSQLSvc/sql1.domain.com”

Now the domain controller asks the active directory database: “Give me the name of the account object whose ServicePrincipalName is MSSQLSvc/sql1.domain.com“

Going by exact matches alone, the active directory database would reply: “Sorry, there are no account objects with that ServicePrincipalName”. But there’s a catch.

All computer accounts have, by default, ServicePrincipalName attributes set to: HOST/[computername] and HOST/[computername].[domain]

So the active directory database replies to the domain controller: “The account object that has that ServicePrincipalName is sql1.domain.com’s computer account“

The domain controller now creates a ticket that only the computer account of sql1.domain.com can read. He gives the ticket to the client.

The client goes to the SQL service on sql1.domain.com and says “here is my ticket, may I come in?”

The SQL service will attempt to read the ticket. The problem is, the SQL service is not running under the computer account; it is running under a domain service account. It cannot read the ticket; the ticket is only intended for the computer account of sql1.domain.com. Authentication fails (and falls back to NTLM).

The Fix

Now let’s run the setspn.exe tool to manipulate the ServicePrincipalName attribute of the SQL service account.

setspn -a MSSQLSvc/sql1.domain.com domain\SQLSVC

We will also add sql1 (without the domain name) in case we want to access the server without the domain name appended.

setspn -a MSSQLSvc/sql1 domain\SQLSVC

Now run through the scenario again and this time notice that the domain controller will return a ticket that the SQL server service account can read.

Obviously this is heavily paraphrased but hopefully it helps you understand the reason for setting the SPN attribute on the account that runs a given service. Of course if the service runs under the local NetworkService or LocalSystem account then everything will just work because these local accounts represent the computer account in active directory.
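The lookup story above can be boiled down to a toy Python model (purely illustrative, nothing like a real KDC implementation; names are mine):

```python
# Toy model of the DC resolving an SPN to an account (heavily simplified).
spn_index = {
    # Every computer account gets these ServicePrincipalNames by default:
    "HOST/sql1": "SQL1$ (computer account)",
    "HOST/sql1.domain.com": "SQL1$ (computer account)",
}

def account_for(spn: str):
    """Return the account a ticket for this SPN would be issued to."""
    if spn in spn_index:
        return spn_index[spn]
    # No explicit match: per the story above, the request ends up being
    # answered by the machine's default HOST/ entry, so the ticket is
    # readable only by the computer account, not the service account.
    host = spn.split("/", 1)[1]
    return spn_index.get("HOST/" + host)

# Before the fix: the ticket targets the computer account, which the
# SQL service (running as domain\SQLSVC) cannot read.
assert account_for("MSSQLSvc/sql1.domain.com") == "SQL1$ (computer account)"

# After: setspn -a MSSQLSvc/sql1.domain.com domain\SQLSVC
spn_index["MSSQLSvc/sql1.domain.com"] = "SQLSVC (service account)"
assert account_for("MSSQLSvc/sql1.domain.com") == "SQLSVC (service account)"
```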

As described a couple of posts ago I have set up an Ipaq 2210 in my car for the purpose of GPS navigation; it’s been working very well for a few months now but something started to bug me; every time I got in the car I had to manually power on the Ipaq. I know it seems trivial and lazy but I just think such a device should be considered part of the car’s instrumentation; it should just sit there doing its thing. You wouldn’t want to have to manually turn on your speedo, rev counter or heat gauge would you?

After much digging I found that the Ipaq (the 2210 at least) could be woken up by applying 5 volts to the DCD pin (pin 6) on the base connector; supposedly through a 4k7 resistor, although that didn’t work for me so I just fed the 5v straight in (so far so good!). Since my power supply only applies power to the Ipaq at ignition, I was able to just take a wire from the power-in on the Ipaq connector across to pin 6 (DCD). Now at ignition the Ipaq gets its 5 volts for charging and the DCD gets 5 volts to wake it up. There is just one problem; well, two actually! The first is simple to fix: when the Ipaq is woken by DCD it tries to sync, so the GPS application loses focus. There is an option somewhere in Control Panel to prevent this behaviour. The second problem is not quite so straightforward. When you turn the key in the ignition you get power for a second (the DCD wake-up is triggered), then power is cut for a few seconds while the engine is cranking. Apparently the Ipaq won’t wake up in these conditions! So for this to work you need to turn the car on, wait for the Ipaq to wake up, then start the car. Hardly seamless!

Since you’re reading this you probably already know that Internet Explorer has a number of security zones. URLs are treated differently depending on the zone they fall into. These security zones apply not just to URLs in Internet Explorer but to Windows in general, e.g. accessing files from a network location. My specific problem was a GPO start-up script that ran backinfo to display the server info on the desktop when an admin user logs on. Backinfo.exe, an unsigned application stored on the netlogon share, would throw the Open File – Security Warning every time it was launched. More about that soon.

In the enterprise it’s desirable to configure all these zone and security settings using group policy but there are a few gotchas that can make the configuration and testing process a bit confusing.

A standard zone template can be applied to a user’s settings. After you apply this template you can do a gpupdate /force /target:user; you won’t get a warning about logging off/on. Now in Internet Explorer you’ll notice a couple of things. (1) The security level and visual slider for the zone on the security page will not have changed and will not reflect the template you’ve selected in the GPO. (2) If you click on “Custom level” you’ll see that the individual settings that the selected template represents are in fact set and are now unchangeable, i.e. the policy has applied.

Ok so at this point we could be forgiven for assuming that the policy has been fully applied to the system; we can see the changes in IE and we know that gpupdate didn’t ask us to log off/on.

Now on to the “Open File – Security Warning”; this is affected by the setting pictured above, “Launching applications and unsafe files”. Since this is a trusted zone we trust all the locations in it, so we are happy to launch unsigned applications without a security warning. For some strange reason this setting is one of the few that can’t be set individually with group policy; the only way to set it (via GPO) is to apply a template as described above. Both “Low” and “Medium Low” will allow applications to launch from a network location without a security warning.

The thing that’s really confusing is that even though doing the gpupdate updates the policy in IE, it is not fully applied to the rest of the system until you log off/on.

In Conclusion

Security templates are not visually reflected in the security page of Internet Explorer even though they are applied.

Security zone settings are applied to Internet Explorer by doing a gpupdate but a log off/on is required to apply these settings to the rest of the OS

The “Launching applications and unsafe files” setting determines whether the “Open File – Security Warning” dialog is displayed when launching applications from a given location

The “Launching applications and unsafe files” setting cannot be set with an individual GPO setting. (You could create a custom adm file though)

When setting zone security via GPO I recommend making the Internet Explorer security page invisible to users to avoid confusion as they can still quite happily adjust the security level slider, it just won’t have any effect!

I had a serial GPS mouse lying around (thanks Alex) and my boss was kind enough to give me a retired HP Ipaq 2210 Pocket PC from work. The two combined and I had a pretty reasonable touch screen car navigation system. The only problem was a power supply.

I didn’t want to use a cigarette lighter adaptor because I would have wires going everywhere; I wanted it hard-wired so that the GPS mouse sits on the dash with the cable disappearing down behind, and a single thin cable coming out from the centre console for the Ipaq. I also wanted the GPS mouse running full time so there was no delay while it searched for satellites, but I only wanted the Ipaq to be powered when the ignition was on.

I made 3 attempts before I was successful. First was two LM7805 5-volt regulators. These got way too hot even with a heatsink; I would have needed a huge heatsink. Second attempt was the contents of two cigarette-lighter-to-USB adaptors supposedly able to deliver 1 amp; yeah right! These things just about burst into flames when I turned on the Ipaq! The third and successful attempt uses an LT1074 switching regulator and is detailed below.

The LT1074 was provided as a sample from Linear Technology, which is great since they cost about $NZ40 to order from RS!

The schematic is just the reference one from the LT1074 datasheet.

I couldn’t find exact matches for all the components in the reference schematic.

Here’s a list of parts I used:

C1: Electrolytic 470uF (25v)
C2: Green cap 0.01uF (this was a guess! All I really knew is that it wasn’t an electrolytic because the schematic shows no polarity symbols!)
C3: Low ESR electrolytic 220uF (25v) (the application note AN35 said to use low ESR and place it very close to the LT1074)
MBR745: This is a Schottky-barrier rectifier diode rated at 7.5 amps; I used an ERC81-004 rated at 3 amps, robbed from an old dot matrix printer PSU.
R1: I couldn’t find 2.8K @1% so I used 2x 5.6K @1% in parallel; both 1/2 watt
R2: 2.2K @1%; 1/2 watt
R3: 2.7K @5%; 1/2 watt
L1: This is of unknown value; robbed from an old dot matrix printer PSU. The application note AN35 describes a rather humorous “alternate” method of selecting an inductor: (Click to read)
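As a sanity check on the substituted resistor values, the LT1074's output is set by the feedback divider against its nominal 2.21V feedback reference (per the datasheet; do check your own copy). A quick calculation confirms the parallel pair still lands on 5V:

```python
def parallel(r_a: float, r_b: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r_a * r_b / (r_a + r_b)

def lt1074_vout(r1: float, r2: float, vref: float = 2.21) -> float:
    """Output voltage set by the feedback divider (vref is the
    LT1074's nominal 2.21V feedback reference)."""
    return vref * (r1 + r2) / r2

r1 = parallel(5600, 5600)   # two 5.6K 1% resistors standing in for 2.8K
r2 = 2200
print(r1)                               # 2800.0
print(round(lt1074_vout(r1, r2), 2))    # 5.02
```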

I haven’t done any PCB etching since high school so I thought I’d give it a go. I used Eagle CAD PCB design software which allows boards 100mmx80mm to be designed using their freeware version. Eagle is a bit clunky and counter-intuitive but once you get the hang of it, it’s really very good.

As promised when I posted about the print server power control hack, I’ve finally gotten around to writing a little Windows app to control devices from the system tray. The utility is called PowerTray; it can control local devices or devices connected to a networked computer, as long as they are also managed by PowerTray.

PowerTray can also integrate with MyPowerControl in the MediaPortal HTPC system. If you’re using this with MediaPortal then install either PowerTray or the MyPowerControl plug-in on a single computer, not both!

2. Run XP setup, create a new partition leaving enough room for the second copy of windows; install Windows as per usual.

3. Install all drivers, updates and any software that you want to be on both copies of windows.

4. In Windows Disk Management create a new NTFS partition in the remaining space on the disk, leaving enough space for the GRUB bootloader; I left 100MB.

5. Run Symantec Ghost (or some other disk cloning tool) and clone partition 1 to partition 2. You will now have 2 identical copies of Windows on the same drive.

6. After cloning don’t reboot into Windows; instead boot from CD to your favourite Linux distro. I used Ubuntu 6.1 which boots live off CD as part of its install process; I’m sure you could use Knoppix or whatever.

7. Create a Linux partition in the remaining space using fdisk or cfdisk; flag this partition as the bootable partition. Assuming the disk is /dev/hda and XP1 and XP2 are /dev/hda1 and /dev/hda2, then this partition will be /dev/hda3

cfdisk /dev/hda


8. Format the new Linux partition with

mkfs.ext3 /dev/hda3


9. Make a new directory and mount the partition to it. For this example we’ll mount it to /mnt/tmp

mkdir /mnt/tmp
mount /dev/hda3 /mnt/tmp


10. Install GRUB to the partition. This will install GRUB to the root of /dev/hda3 and to the master boot record of /dev/hda

grub-install --root-directory=/mnt/tmp /dev/hda


11. Create a grub menu list file in /mnt/tmp/boot/grub/
Use vi or nano to create a file in this location called menu.lst. This will contain a list of operating systems you wish to boot. The file should look like this:
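(The original listing was lost from this post; below is a minimal menu.lst along the lines described, with the two XP copies on (hd0,0) and (hd0,1) as per the partition layout above. Adjust to taste.)

default 0
timeout 10

title Windows XP (Copy 1)
hide (hd0,1)
unhide (hd0,0)
rootnoverify (hd0,0)
makeactive
chainloader +1

title Windows XP (Copy 2)
hide (hd0,0)
unhide (hd0,1)
rootnoverify (hd0,1)
makeactive
chainloader +1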

12. Now unmount /dev/hda3, remove the Linux CD and reboot. You should now get a grub boot menu where you can choose which copy of XP you want to load. The hide and unhide commands in each OS entry in grub mean that each copy of Windows won’t be able to see the other.

13. (Optional) To add a nice background to the grub menu, boot back into your live Linux distro and use Firefox to download a grub splash screen. I got one from here; they also have a guide to create your own. Again mount /dev/hda3 and copy the splash image to /mnt/tmp/boot/grub. Edit menu.lst to include the following line:

splashimage=(hd0,3)/boot/grub/myfile.xpm.gz


That’s it! Two copies of Windows completely hidden from each other, with a nice menu using the renowned GNU GRUB bootloader!

*UPDATE*

DON’T USE HIBERNATE WITH DUAL BOOT.
At first it seems neat to be able to choose which OS you want to resume but then the disk corruption starts!

Power Control Box

It connects to a parallel port, allowing me to turn the power points on and off using software. The parallel port allows for up to 8 outputs by using data 0 through 7 (pins 2 through 9).
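Addressing the outputs is plain bit manipulation; here's a small illustrative Python sketch (the function name is mine) of packing eight outlet states into the byte written to the data register:

```python
def outlet_byte(states) -> int:
    """Pack 8 on/off outlet states into the parallel-port data byte.
    states[0] drives D0 (pin 2) ... states[7] drives D7 (pin 9)."""
    if len(states) != 8:
        raise ValueError("expected exactly 8 outlet states")
    value = 0
    for bit, on in enumerate(states):
        if on:
            value |= 1 << bit
    return value

# Outlets 1 and 3 on (D0 and D2 high), everything else off:
assert outlet_byte([True, False, True] + [False] * 5) == 0b00000101
```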

I’ve had this box attached to my HTPC for the last few years; I use it to control power to my TV, subwoofer, table lamp etc.

As mentioned in my previous posts I’ve just finished building a new HTPC, and guess what, it has no parallel port! I thought it would be a simple case of using a USB to parallel adaptor but unfortunately these adaptors aren’t seen by Windows as standard parallel ports; instead they appear in Device Manager as a “USB Printing Support” device and hence can’t be addressed directly to turn the data pins on and off.

Print Server

After much googling I came across a project by Doktor Andy which uses a network print server to drive external devices. This was perfect since I had an HP JetDirect print server. I wasn’t able to get Doktor Andy’s circuit working with the JetDirect but Boyan Biandov, whose name was on Andy’s site, was very helpful and told me how to get the JetDirect working. A single 74LS04 chip is all that is required to invert the strobe output and feed it back into the busy input. I’m not really a whiz with electronics but as I understand it this fools the print server into thinking that there is a printer attached and everything is “ok”.
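I haven't seen Doktor Andy's code, but the general idea of latching a byte onto the data pins over the network can be sketched in a few lines of Python (port 9100 is the standard raw JetDirect print port; whether a bare byte is enough for your particular print server is an assumption to verify):

```python
import socket

def set_outputs(host: str, value: int, port: int = 9100) -> None:
    """Send one raw byte to the print server's raw-print (JetDirect) port.
    The print server clocks the byte onto the parallel data pins D0-D7,
    so each bit in 'value' switches one external device on or off."""
    if not 0 <= value <= 255:
        raise ValueError("value must fit in one byte")
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(bytes([value]))
```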

* EDIT *

You DON’T need to use the additional chip at all. Fred kindly commented and pointed this out:

Another mod in my quest for a quiet computer. I’ve mounted the hard drive using elastic cord and a cheap auto electrical crimp set. This should reduce vibration / noise transfer to the case.

Use pliers to completely open crimp connector.

Measure out a piece of elastic and tidy up the ends with a lighter. Thread a plastic cover on to the elastic. Place the elastic in the widened crimp connector so it sticks out just a tiny bit, this will cause it to mushroom-out when crimped. Fold up the edges of the crimp connector with the pliers then crimp.

Slide the plastic cover on.

Repeat as needed

Elastic should be stretched just enough to keep the drive held in place.

This could be implemented in a number of different ways depending on your case

New Zealand Freeview has just launched its high definition DVB-T TV service and my existing HTPC was nowhere near up to spec for decoding the high def streams. It was also too noisy for a computer that lives in the lounge, so it was time for a rebuild. I was pretty excited; this is my first brand new PC in about 10 years. The last one was a Pentium 120 when I was still at school! Of course I’ve had plenty of second-hand and hand-me-down gear between then and now.

The two main requirements for the new build were enough power to decode high definition video, and quiet enough not to drive me crazy. Quiet means efficient cooling, i.e. good air flow.

I wanted to run the fan at very low RPM while maintaining good air flow across the CPU and video card; the idea is to pull air past the passively cooled video card, through the CPU heat sink and vent it straight out the back of the case.

I could have hacked a duct together with cardboard and tape but that would have been just too easy; besides, I wanted to try my hand at some fibre-glassing. After much research, trial and error, here are the basic steps I went through.

Materials (Fibre glass bare essentials can be had for about NZ$50)

Polyester resin

Methyl ethyl ketone peroxide (MEKP – The catalyst used to harden the resin)

Polyvinyl alcohol release agent (Used so you can separate your part from the mold)

Release wax

Acetone (For cleaning up)

Cheap brushes

Mixing containers

Latex gloves. (Keep the nasty chemicals from burning your skin, Box of 100 – you have to change them often)

Stirring sticks

Respirator mask

Casting plaster to make the mold (Not used in the end. See trial and error!)

Wood, plywood, tape, misc tools, sandpaper, etc etc

Thanks goes to NZ Fibreglass. They were very helpful; they sell in small and large quantities and took me through exactly what I needed to get started, so if you’re in Auckland and need fibreglass gear it’s the only place to go. Check them out at: http://www.nzfibreglass.co.nz/

1. Make a mold from wood (and masking tape!).

2. Coat the mold with resin and some fibreglass reinforcing where strength and shape are needed, around the corners and over the masking tape.

3. Sand the resin coated mold very smooth

4. Wax the mold with release wax; about 6 coats, till it’s very shiny.
The guy at the fibreglass shop was very kind and gave me the last of a tin of wax they had in their workshop; saving $30

5. Brush on polyvinyl alcohol release agent. This stuff is great; it forms a sort of plastic-bag-like skin so you can release the part from the mold. It should really be sprayed on evenly with a proper spray gun but this will have to do.

6. Now ready for the first layer of fibreglass. Mix up the polyester resin with the hardener. Soak the resin into the glass with a dabbing action; too much brushing and the fibres will start to be dragged around with the brush. The glass should be saturated and become transparent.

The first layer is done!

7. Now the moment of truth: separating the part from the mold.

Note the PVA film has formed a barrier between the resin and the mold.
At this point I’m wondering if the wax was really necessary.

The part released reasonably cleanly

8. Add more reinforcing and a top coat of very thin glass tissue. (My homemade roller helps get out air bubbles)

9. Clean-up (sand), add holes for top of heat sink

10. Add bottom sections

11. Lots of sanding to get it nice and smooth and ready for painting

12. Into the “spray booth”….

…Prime and paint

13. Done!

Full System Specs

Motherboard: Intel DP35DP Media series

CPU: Core2Duo E8400 3.0GHz 45nm

RAM: 4GB Crucial

Video: Passively cooled Nvidia 8600GT (Gigabyte SilentPipe II)

Hard drive: Seagate 320GB SATA

Power supply: Enermax liberty 400(watt)

Case: Lian li PC61 (Big thanks to Chris for this very nice all aluminium case)

I suggest you look at Veg’s SoundKeeper tool first and see if it does what you need. It looks like a much cleaner and more efficient tool than mine (which is now nearly 10 years old! :o). Nice work Veg.

After building a new Home Theatre PC I’ve discovered that the onboard IDT audio has a problem with the SPDIF output, or at least my Sony receiver has a problem with it! Every time a sound is played it causes the SPDIF input on the receiver to initialise, which takes about 500 milliseconds; after the sound has finished the SPDIF goes back to sleep. As a result the first 500ms is lost off every sound that is played; not really a problem if you’re watching a movie, but applications that have little blips as you navigate around tend to have these sounds missed altogether; such is the case in MediaPortal, the HTPC application I use.

My old motherboard with Nforce sound didn’t have this problem; the SPDIF remained “active” all the time.

After much searching I did find a few other people with the same problem but no solution, so I’ve written a small .NET application called SPDIFKeepAlive. It does just that: it sits in the system tray and continuously plays a silent wave file to keep the SPDIF port active.
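The core trick is tiny. Here's a hedged sketch (not the actual SPDIFKeepAlive source, which is .NET) of generating the silent wave data with Python's standard library; a tray app would then loop this through the sound card to keep the output active:

```python
import io
import wave

def silent_wav(seconds: float = 1.0, rate: int = 44100,
               channels: int = 2, sampwidth: int = 2) -> bytes:
    """Build an in-memory WAV file of digital silence (all-zero samples)."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sampwidth)
        w.setframerate(rate)
        # One all-zero sample per channel per frame = pure silence.
        w.writeframes(b"\x00" * (int(seconds * rate) * channels * sampwidth))
    return buf.getvalue()
```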

When clicking too fast you accidentally denied “Full Control” to “Authenticated Users” for a group policy you were working on. Since deny takes precedence over allow, the result is that you are now denied the ability to edit the GPO at all. This includes editing permissions to remove the blundered access control entry! In the Group Policy Management Console it looks like this:

Components of a Group Policy Object

A GPO is made up of two parts: a set of files in Sysvol and an Active Directory object. When correcting GPO permissions you must modify both the ACL on the AD object, using DSACLS (included in the W2K3 support tools), and the Sysvol NTFS permissions.

The following dsacls command will remove the offending deny ACE from the ACL, in this case “Authenticated Users” from the AD object. The object is named by the GUID that can be seen on the inaccessible object in the GPMC.
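(The command itself was lost in reformatting; based on the description it would look something like the following, where the domain DN is a placeholder for your own. dsacls /R removes all ACEs for the named user, which is fine here since “Authenticated Users” is re-added at the end.)

dsacls "CN={3EE757FE-B5A4-4D23-937D-A3AF5G7F0CEA},CN=Policies,CN=System,DC=corp,DC=com" /R "NT AUTHORITY\Authenticated Users"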

If successful this command will return a full list of the permissions for the object

Next up you need to remove the deny ACE from the policy’s NTFS folder ACL. Again the GUID of the policy is used to name the folder:

\\<domain>\Sysvol\<domain>\Policies\{3EE757FE-B5A4-4D23-937D-A3AF5G7F0CEA}

At this point your GPO will be accessible within the GPMC and the permissions will be consistent across AD and Sysvol. All that’s left to do is to add “Authenticated Users” back to the GPO. Do this by editing the GPO with the group policy editor; doing so will apply permission changes to both the AD object and the Sysvol policy folder.

Just thought this might help someone, not that it’s ever happened to me! ;-p