
Thanks for the reply.
This is a plain DAS box without any controller, and we have already updated all drivers to the versions recommended for this hardware/OS. Even if it were a filesystem problem, the system should drop into maintenance mode, which is not happening. What other possibilities could be triggering this panic?

That is correct. The fault manager will not log anything if it cannot detect the fault, for example when the server simply panics.
What hardware is this?

I would try to fiddle a little with mdb: http://kristof.willen.be/node/1100
I have a feeling NFS is involved.
Does this server share anything over NFS? Is it on ZFS? What disks does it have?
More importantly, when was the first occurrence of this panic? Did it work OK before? Any change you can tell us about?
Run the vmcore and unix files through mdb and please post your findings.
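A minimal sketch of such an mdb session, assuming the dump was saved by savecore as unix.0/vmcore.0 under /var/crash/<hostname> (adjust the file names to whatever your dump directory actually contains):

  # cd /var/crash/`hostname`
  # mdb -k unix.0 vmcore.0
  > ::status      # panic string and dump summary
  > ::msgbuf      # kernel messages leading up to the panic
  > ::stack       # stack trace of the panicking thread
  > $q

The panic string from ::status and the stack from ::stack are usually enough to tell whether a driver, a filesystem, or NFS code was on the stack at the time of the panic.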

Hi -
You said you have updated the system using the recommended patch cluster. That tells me you have a support contract with Oracle, at least a software support contract. In that case, you should contact Oracle for faster resolution. Send them the explorer output and the crash dumps and you will be in a much better position to get a resolution to your case.
HTH

This looks like a driver conflict. Find the module bound to this specific hardware (pci14e4,1647) and edit the /etc/system file to prevent that module from being loaded at boot.
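A quick sketch of how to identify that module; on most Solaris releases this Broadcom ID maps to the bge driver, but verify on your own system before excluding anything:

  # grep pci14e4,1647 /etc/driver_aliases   # which driver claims this PCI ID
  # prtconf -D | grep -i bge                # is that driver currently bound to a device node
  # modinfo | grep -i bge                   # is the module loaded right now

The bge name above is only an assumption; use whatever driver name the first command actually returns.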

Hi Friend,
Is it possible for you to boot into single-user mode, remove the /etc/path_to_inst file, and then do a reconfiguration boot (boot -r)? If you have an old /etc/driver_aliases file, restore it as well.
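A rough sketch of that sequence on SPARC, assuming you have console access at the OK prompt; keep copies of both files before touching them, since the exact recovery steps can differ by release:

  ok boot -s                                          # bring the box up in single-user mode
  # cp /etc/path_to_inst /etc/path_to_inst.bak        # keep a safety copy
  # rm /etc/path_to_inst
  # cp /etc/driver_aliases.old /etc/driver_aliases    # only if you have a known-good old copy
  # reboot -- -r                                      # reconfiguration reboot rebuilds path_to_inst

The driver_aliases.old name is just a placeholder for whatever backup copy you actually have.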

* moddir:
*
* Set the search path for modules. This has a format similar to the
* csh path variable. If the module isn't found in the first directory
* it tries the second and so on. The default is /kernel /usr/kernel
*
* Example:
* moddir: /kernel /usr/kernel /other/modules

* root device and root filesystem configuration:
*
* The following may be used to override the defaults provided by
* the boot program:
*
* rootfs: Set the filesystem type of the root.
*
* rootdev: Set the root device. This should be a fully
* expanded physical pathname. The default is the
* physical pathname of the device where the boot
* program resides. The physical pathname is
* highly platform and configuration dependent.
*
* Example:
* rootfs:ufs
* rootdev:/sbus@1,f8000000/esp@0,800000/sd@3,0:a
*
* (Swap device configuration should be specified in /etc/vfstab.)

* exclude:
*
* Modules appearing in the moddir path which are NOT to be loaded,
* even if referenced. Note that `exclude' accepts either a module name,
* or a filename which includes the directory.
*
* Examples:
* exclude: win
* exclude: sys/shmsys

* forceload:
*
* Cause these modules to be loaded at boot time, (just before mounting
* the root filesystem) rather than at first reference. Note that
* forceload expects a filename which includes the directory. Also
* note that loading a module does not necessarily imply that it will
* be installed.
*
* Example:
* forceload: drv/foo

* set:
*
* Set an integer variable in the kernel or a module to a new value.
* This facility should be used with caution. See system(4).
*
* Examples:
*
* To set variables in 'unix':
*
* set nautopush=32
* set maxusers=40
*
* To set a variable named 'debug' in the module named 'test_module'
*
* set test_module:debug = 0x13

The error is related to the pci14e4,1647 card (network card). The corresponding driver can be found with the modinfo command or by viewing /etc/path_to_inst. Once you know which module is attached to pci14e4,1647, you can exclude that module by adding a line like the following to /etc/system:
exclude: modulename
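For example, if the module turns out to be bge (a common match for this Broadcom ID, but confirm it on your system first), the /etc/system entry would look like this; keep a backup of the file, since a typo in /etc/system can make the box unbootable:

  * added to /etc/system -- keep the conflicting driver from loading at boot
  exclude: drv/bge

Per the exclude documentation quoted above, either the bare module name or the directory-qualified form is accepted.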

2. Reboot the system.
3. Run fmadm faulty to get the faulty/degraded device PCI IDs:
   # fmadm faulty
4. Use the fmadm command # fmadm repair <faulty_device_ids> to repair the two PCI IDs that may be displayed.
5. Reboot the system.

Servers Configured with NC325m PCI Express Quad Port Gigabit Server Adapter
When using the NC325m PCI Express Quad Port Gigabit Server Adapter in the configuration detailed above, perform the following after installing the BRCMbcme Version 12.2 (or earlier) driver:
1. Remove any entries that end with bge from the /etc/path_to_inst file.
2. Reboot the system.
3. Run fmadm faulty to get the faulty/degraded device PCI IDs:
   # fmadm faulty
4. Use the fmadm command # fmadm repair <faulty_device_ids> to repair the four PCI IDs that may be displayed.
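A hedged example of what that fmadm sequence looks like in practice; the identifiers are purely illustrative, so repair whatever UUIDs or FMRIs your own fmadm faulty output actually lists:

  # fmadm faulty                  # list faulted/degraded resources and their UUIDs/FMRIs
  # fmdump -v                     # optional: more detail on each logged fault event
  # fmadm repair <uuid-or-fmri>   # mark each listed resource as repaired
  # fmadm faulty                  # confirm the faulty list is now empty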

prtconf -D will tell you exactly which driver (module) is currently being used for your pci14e4,1647 device. You can then prevent it from being loaded at the next boot by editing the /etc/system file as described in my earlier post.
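For instance, the relevant lines of prtconf -D output look roughly like this (illustrative only; the node names, instance numbers, and driver will differ on your box):

  # prtconf -D | grep -i network
      network, instance #0 (driver name: bge)

Whatever driver name appears against the suspect device node is the module to exclude in /etc/system.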
