Event Log: "Failed to setup initiator portal. Error status is given in the dump data."

This is being recorded every 3/100ths of a second. We are using the MS iSCSI Initiator on Windows Server 2003, on a Dell 2970 with 4 GB RAM (PAE). I am sure this was configured by Dell initially, but I have no idea what changes or modifications have been made since the company installed this machine.

(I'm a new User so the lovely and vibrant screen images had to be removed. They were quite pretty and I am sure you would have been very moved and appreciative of them.)

It appears that everything is installed correctly and the 5 TB bound volume is accessible, but I have never worked with iSCSI before, so I plead total ignorance. In searching, I have found this to be a fairly sparsely and blandly documented subject.

I'd like two things...

First, to get rid of the error message being logged. MS says it can be ignored if everything is working, but it chews up resources logging it, and I don't feel comfortable with any errors on my servers. I want to correct whatever is causing this problem.

Secondly, being totally green at this, I would like to confirm that the setup is optimized and that we are taking advantage of all available features. Although there are 3 NICs in this machine, it appears that the initiator is only configured for the Broadcom BCM5708C NetXtreme II on our 10.90.1.# subnet; the other 2 NICs are 1 Gb on the 192.168.0.# subnet. Would additional targets improve performance?

If someone who is experienced in configuring the Microsoft iSCSI Initiator can help, I would really appreciate it, since, as I mentioned, everything I have come across has been of no value at all.

2 Answers

I believe you need to take a look at http://support.microsoft.com/kb/972107 - from your fault text I would surmise that something like an intermittent connection timeout is causing the iSCSI target to log out, be unable to log back in, but then come back online really quickly afterwards. Check the iSCSI portal device and see if the logs there can help.
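As a rough starting point for checking this from the Windows side, the `iscsicli` tool that ships with the MS iSCSI Initiator can show whether sessions are repeatedly dropping and re-establishing (a sketch only; exact output varies by initiator version):

```shell
REM List current iSCSI sessions and their connections. If the session IDs
REM change every time you run this, the initiator is logging out and back
REM in repeatedly, which matches the theory above.
iscsicli SessionList

REM Show the target portals the initiator is configured to use.
iscsicli ListTargetPortals

REM Report the initiator node name, useful when cross-checking the
REM SAN's own logs for the failing login attempts.
iscsicli ListInitiators
```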

The network config looks like you have a dedicated NIC for iSCSI traffic - this is always good. Even better if it's 10GbE, though I suspect it is 1GbE? If you don't need the other two NICs on the same LAN, you could bond another interface and use LACP to get better throughput.

More targets will not help, as you would then just have more target stacks being sent down the same wire.

When designing a SAN or storage, it's important to work out what you are storing, the expected access profile, the required access profile, and the required resilience/redundancy.

I read the KB, but there is nothing in there that really helps; in fact, that is where it states "... Therefore, you can safely ignore this Error event." I don't think an error recorded every 3/100ths of a second is something I want to ignore. I would appreciate your "once over" very much. What config (devices, NICs, Navisphere) would you like to see, and where can I email it? ... Thx!
– AZee Oct 9 '10 at 20:59

khushil.dep@gmail.com - can you also let me know where this target is coming from? What kind of SAN are you using?
– Khushil Oct 9 '10 at 21:15

This is not true - you can take advantage of multiple NICs for load balancing to your SAN, as long as your SAN is capable of handling multipath. For example, NetApp offers that feature with all their filers as a standard option, though it used to require an additional license.

If you have multipath set up over each NIC, make sure the SAN has the same number of paths, or it is pointless. So if you want to use all 3 NICs, make sure the SAN has 3 NICs and an IP available for each one. It's even better if you can put each NIC on a separate VLAN, but it can be done over the same VLAN without issues - just note that configuration has to be done by IP, not DNS name. Then, inside the initiator (with the multipath version installed), configure the multipaths and multiple sessions, and in the MPIO setup use round-robin. I use 3 NICs at the same time, and the traffic is spread perfectly equally over all 3 paths.

I have tested LACP, and this is by far better load-balanced than LACP. LACP can only use one of the 3 paths at a time in Windows. You could bond the NICs in Windows, but typically you will only get 2 Gb one way and 1.5 Gb the other way when bonding 2 NICs together - at least on HP kit. Multipath is by far the better solution and provides far better performance than LACP.
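A rough command-line sketch of the portal side of this setup, using hypothetical SAN portal addresses (10.90.1.10 and 10.90.1.11) and a hypothetical target IQN. On Server 2003 the per-path sessions and the round-robin policy are then set up in the iSCSI Initiator GUI; `mpclaim` exists only on later Windows versions:

```shell
REM Hypothetical portal IPs - register each SAN interface by IP, not DNS
REM name, as noted above. 3260 is the standard iSCSI port.
iscsicli AddTargetPortal 10.90.1.10 3260
iscsicli AddTargetPortal 10.90.1.11 3260

REM Quick login to the (hypothetical) target. Repeat per path, using the
REM GUI or the full LoginTarget syntax to bind each session to a specific
REM source NIC so that every path carries traffic.
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678

REM On Windows Server 2008 R2 and later, the default MPIO load-balance
REM policy can be set to Round Robin (policy 2) from the command line:
REM   mpclaim -L -M 2
```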