Checked all the configuration on the devices. Found that the RRPP working mode is GB on the S7700, but HW on the S5700. Because the modes differ, the node roles and configuration on the switches differ.
Asked the customer to change the RRPP working mode on the S7700 to HW, the same as on the S5700, and tested that it works fine.

Root Cause

1. Wrong Configuration
2. RRPP calculation Issue

Suggestions

1. Changing the RRPP working mode requires disabling RRPP first and deleting the old configuration.
2. The S5700 supports only the HW working mode for RRPP, and this cannot be changed. The S7700 supports both modes.

There are situations where we need to find out the correspondence between the ifindex and the interface name. For instance, if we receive a log on the switch that refers to an ifindex number but does not mention the exact interface name, we need to know this correlation in order to identify where the problem resides.

To identify the interface index of the interfaces, I can suggest the following solutions:

- Run the display rm interface command to check the ifindex of the interface directly in the CLI. In this command's output the ifindex is shown in hexadecimal, so you need to convert it to decimal:
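As a quick illustration, the hexadecimal-to-decimal conversion can be done with any scripting language; here is a minimal Python sketch (the value 0x46 is a made-up example, not from a real device):

```python
# Convert an ifindex shown in hexadecimal by "display rm interface"
# into its decimal form. The value "0x46" below is a hypothetical
# example, not taken from a real device.
hex_ifindex = "0x46"
decimal_ifindex = int(hex_ifindex, 16)  # parse the base-16 string
print(f"ifindex {hex_ifindex} = {decimal_ifindex} (decimal)")  # 70
```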

When the issue happened, an S5700 stack member could not be accessed intermittently. Status information about this stack member could not be obtained through commands, and the fault could not be automatically rectified. After the stack member was powered off and restarted, the fault disappeared.
The command output showed that information about a stack member could not be obtained. The following uses the display environment command as an example.

S5700

The preceding command output shows that temperature information about all stack members except the device in slot 4 can be obtained normally. That is, obtaining temperature information about the stack member with slot ID 4 failed.

Alarm Information

None

Handling Process

1. Check the process of obtaining stack member status information in a stack.
In an S5700SI stack, the master switch obtains status information through Remote Procedure Call (RPC), and stack members exchange data by sending Inter-Process Communication (IPC) messages. Because temperature information about a stack member could not be obtained, a fault occurred during RPC invoking. RPC uses IPC messages to exchange information, so the IPC message exchange process may be abnormal.

2. Analyze the IPC processing flow.
When the stack member was restarted, the software was re-initialized and the fault was rectified; therefore, an error occurred during software processing. Additionally, powering off and restarting the stack member rectified the fault, indicating that the fault occurred on the stack member.
3. View message queue statistics on the master switch.

S5700

The preceding command output shows that messages were accumulated in the VLAN, L2MA, and CXQO queues. The L2MA message queue (MAC synchronization task message queue) was full, indicating that the IPC tasks of the stack members were suspended and could not process IPC messages. As a result, messages accumulated on the master switch.
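The accumulation effect described above (a bounded queue filling up once its consumer stops draining it) can be illustrated with a small Python sketch; the queue size and message names are invented for illustration only:

```python
import queue

# A bounded queue standing in for the L2MA IPC message queue
# (the size 4 and the message names are invented for illustration).
ipc_queue = queue.Queue(maxsize=4)

# The master switch keeps posting MAC-synchronization messages, but the
# stack member's IPC task is suspended and never drains the queue.
for seq in range(10):
    try:
        ipc_queue.put_nowait(f"MAC-sync message {seq}")
    except queue.Full:
        print(f"queue full: message {seq} accumulates on the master switch")
```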

4. Analyze the reason for IPC task suspension.
Because the fault occurred on a stack member, we checked the black box of the stack member.

S5700

The preceding command output shows that an infinite loop existed. Detailed information about the infinite loop is as follows:

The task experiencing the infinite loop is DELM, which is used to delete MAC addresses. When the infinite loop occurs, the mv_l2_del_addr_by_port function holds the semaphore that protects MAC entries. When other tasks, for example the IPC task, need to operate on MAC entries, they are suspended because the semaphore is unavailable. However, the infinite loop cannot be broken, so the IPC task remains suspended, resulting in the fault.
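The suspension mechanism can be sketched in Python, purely as an illustration (names such as mac_table_lock and the timing values are invented; this is not the device code):

```python
import threading
import time

mac_table_lock = threading.Lock()  # stands in for the MAC-entry semaphore

def delm_task():
    # Like the looping mv_l2_del_addr_by_port call described above:
    # grabs the semaphore and spins forever without releasing it.
    with mac_table_lock:
        while True:
            time.sleep(0.1)

def ipc_task():
    # Any other task that needs the MAC entries blocks here.
    got_it = mac_table_lock.acquire(timeout=0.5)
    print("IPC task proceeds" if got_it
          else "IPC task suspended: semaphore unavailable")

threading.Thread(target=delm_task, daemon=True).start()
time.sleep(0.2)  # let the DELM task grab the semaphore first
ipc_task()       # prints that the IPC task is suspended
```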
5. Analyze the reason for an infinite loop.
After a code walk-through, we found that messages notifying the deletion of MAC addresses accumulated in the message queue when a large number of MAC entry deletions were triggered in a short period. Due to a software bug, the DELM task kept reading the message queue status while the messages were accumulated; consequently, the DELM task entered an infinite loop.
The infinite loop was triggered by the deletion of MAC entries. After analyzing logs, we found that the S5700s often received STP TC messages on Eth-Trunk 5. After an S5700 receives TC messages, it deletes the MAC entries of the related interfaces.
6. Conclusion
When a device was triggered to delete a large number of MAC entries, a software bug prevented other tasks from acquiring the semaphore of MAC entries. The IPC task was then suspended while waiting for the semaphore, and the master switch could not access the other stack members.
7. After the workaround was implemented (running the stp edged-port enable command on the related ports to reduce TC messages), the issue disappeared.
8. The patch for this software bug will be released at the end of July 2014 to resolve this issue completely.

Root Cause

1. High CPU usage
2. Stack cable problem
3. Software bug

Suggestions

When STP runs on switches, configure stp edged-port enable on the interfaces that connect to PCs and servers to prevent MAC addresses from being flushed frequently.

The system prompt is MA5680T by default instead of MA5600T, which means the device type is wrong. In fact, the MA5680T and MA5600T are essentially the same product, but we clarified for the customer that the MA5680T is used only for GPON service, so the difference is a matter of commercial policy in marketing. We ordered an MA5600T but received an MA5680T. Something went wrong in the production supply process, which caused us to receive the wrong device type.

Alarm Information

Null

Handling Process

A: How to change the device name on the OLT:
1. Log in to the OLT.
2. Enter the diagnose and su modes, and enter the SCUL code (a password-generation tool is needed; please refer to the document in the attachment):
MA5680T#diagnose
MA5680T(diagnose)%%su
Challenge:0KZQKQBT
Please input password: (use the tool mentioned above)
3. Change the device type of the OLT:
MA5680T(su)%%device MA5600T
MA5680T(su)%%quit

Root Cause

Null

Suggestions

For the details of the operation (how to change the OLT type), please check the document in the attachment.

Q:
In a customer meeting, the customer asked us how many RTs (remote frames, RSU_HABD) can be added to a COT (PVMB_HABD).

Alarm Information

Failure: Resource of HW error

Handling Process

A:
We had to confirm the limit on the number of remote frames (RSU with HABD frame) per PVM board (main HABD with PVMB).
We checked the documentation but found no related information, so we tried to add frames manually in the system.
We could add 32 remote frames successfully, but when adding the next frame (the 33rd), an error message was prompted: HW-resource error.
From this operation we learned that the PVMB supports only 32 remote frames.

Root Cause

We had to confirm the maximum number of remote frames (RSU with HABD frame). We tried to find the value on the support site and in the documentation, but it was not present there.
We then checked it in the system, because we had to confirm the value as soon as possible.

Traffic belonging to multiple multicast sources is forwarded to all hosts, even though the "display igmp-snooping port-info" command does not show any entry.

Alarm Information

We did not have to configure "multicast-vlan enable" and "multicast-vlan user-vlan <vlan-id>" to get multicast traffic to an interface where the customer VLAN and the multicast VLAN are added simultaneously, and we cannot see any groups being joined using "display igmp-snooping port-info" (it almost seems as if IGMP snooping is not working).

By default, the switch broadcasts all unknown multicast packets; this could explain why you receive multicast traffic but see no entry in the "display igmp-snooping port-info" output.
Unknown multicast flows are multicast data flows that match no entry in the multicast forwarding table. By default, the switch broadcasts unknown multicast flows in the corresponding VLAN. You can use the multicast drop-unknown command to configure the switch to discard unknown multicast flows, which reduces instantaneous bandwidth usage compared with the broadcast mode.
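The difference between the two behaviors can be sketched as a simple forwarding decision in Python (the group addresses, port names, and table contents are invented for illustration):

```python
# Forwarding table built by IGMP snooping: group -> set of member ports.
# All names and addresses below are hypothetical examples.
snooping_table = {"225.1.1.1": {"GE0/0/1", "GE0/0/2"}}
all_ports_in_vlan = {"GE0/0/1", "GE0/0/2", "GE0/0/3", "GE0/0/4"}

def forward(group, drop_unknown=False):
    """Return the set of ports a frame for `group` is sent out of."""
    if group in snooping_table:
        return snooping_table[group]      # known flow: member ports only
    if drop_unknown:
        return set()                      # "multicast drop-unknown" behavior
    return all_ports_in_vlan              # default: flood in the VLAN

print(forward("225.1.1.1"))                      # member ports only
print(forward("239.9.9.9"))                      # flooded to all ports
print(forward("239.9.9.9", drop_unknown=True))   # dropped: empty set
```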
