Divya Archive

Remote Administration Of Domain Through WLST

To administer the whole domain and all the servers in it, we can use WLST commands. This is the best way to monitor and manage the complete domain using the Node Manager.

To start the admin server using WLST, ensure that the admin server is associated with a machine. Once that is done, start the WLST tool.

Start the Node Manager using the startNodeManager() command.

Note: The Node Manager starts and runs independently of the WLST tool. Even if you exit WLST, the Node Manager will still be running.

Now you need to connect to the Node Manager using the nmConnect() command. This is required because the servers are started through the Node Manager.

Example: nmConnect('weblogic', 'weblogic', 'localhost', '5556', 'example_domain', 'D:/BEA103/user_projects/domains/example_domain', 'plain')
– The above command takes the username, password, hostname, port, domain name, domain directory, and Node Manager connection type ('ssl' or 'plain') as arguments.

1) To start the admin server through WLST using the Node Manager, type the command below:
– nmStart('AdminServer')

2) To check the status of the admin/managed servers, type the command:
– nmServerStatus('AdminServer')
– nmServerStatus('MS1')

3) To stop/kill the server, enter the command:
– nmKill('AdminServer')
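Putting the pieces together, a minimal end-to-end session might look like the sketch below. The credentials, port, and domain directory are the placeholder values used above, and the NodeManagerHome path is an assumed location under the same BEA home:

startNodeManager(verbose='true', NodeManagerHome='D:/BEA103/wlserver_10.3/common/nodemanager', ListenPort='5556')
nmConnect('weblogic', 'weblogic', 'localhost', '5556', 'example_domain', 'D:/BEA103/user_projects/domains/example_domain', 'plain')
nmStart('AdminServer')
nmServerStatus('AdminServer')
nmKill('AdminServer')
nmDisconnect()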

Steps to start the managed servers remotely:

– On the remote machine, start the WLST tool and connect to the running admin server using the connect() command.

Once connected, run nmEnroll() from the remote machine so that the machine is enrolled with the domain.

Now, to start the managed servers from the remote machine, connect to the Node Manager using nmConnect() as above and run nmStart() on the managed servers.

The server status can then be checked from the remote machine with nmServerStatus().
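As a sketch, the full remote sequence might look like the following; the admin server URL, hostnames, and directories are placeholders:

connect('weblogic', 'weblogic', 't3://adminhost:7001')
nmEnroll('D:/BEA103/user_projects/domains/example_domain', 'D:/BEA103/wlserver_10.3/common/nodemanager')
nmConnect('weblogic', 'weblogic', 'remotehost', '5556', 'example_domain', 'D:/BEA103/user_projects/domains/example_domain', 'plain')
nmStart('MS1')
nmServerStatus('MS1')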

If you have any queries, do get back to us.

Best Regards,
Wonders Team!

Different Out Of Memory Issues

* Exception in thread "CompilerThread1" java.lang.OutOfMemoryError: requested 793020 bytes for Chunk::new. Out of swap space?

Out of memory while reading in symbol table of /apps/bea/weblogic92/jdk1.5.0.18/opt/java1.5/jre/lib/PA_RISC2.0/server/libjvm.sl
( 0)  0xc8461230     [/apps/bea/weblogic92/jdk1.5.0.18/opt/java1.5/jre/lib/PA_RISC2.0/server/libjvm.sl]
( 1)  0xc80a5fec     [/apps/bea/weblogic92/jdk1.5.0.18/opt/java1.5/jre/lib/PA_RISC2.0/server/libjvm.sl]
( 2)  0xc7f00420     [/apps/bea/weblogic92/jdk1.5.0.18/opt/java1.5/jre/lib/PA_RISC2.0/server/libjvm.sl]
( 3)  0xc7f00ca0     [/apps/bea/weblogic92/jdk1.5.0.18/opt/java1.5/jre/lib/PA_RISC2.0/server/libjvm.sl]
( 4)  0xc8368d08     [/apps/bea/weblogic92/jdk1.5.0.18/opt/java1.5/jre/lib/PA_RISC2.0/server/libjvm.sl]
( 5)  0xc005b2e4   __pthread_body + 0x44  [/usr/lib/libpthread.1]
( 6)  0xc0065574   __pthread_start + 0x14  [/usr/lib/libpthread.1]
Java out of memory messages are marked with pid: 13828 in /var/adm/syslog/syslog.log.

Possible causes:
– not enough swap space left, or
– the kernel parameter MAXDSIZ is set too small.

Although it appears that an OutOfMemoryError is thrown, this apparent exception is reported by the HotSpot VM code when an allocation from the native heap fails and the native heap may be close to exhaustion. The message indicates the size (in bytes) of the request that failed and what the memory was required for. In some cases the reason is shown, but in most cases it is the name of the source module reporting the allocation failure.

If an OutOfMemoryError with this message is thrown, it may be necessary to use operating system utilities to diagnose the issue further. Examples of issues that may not be related to the application are an operating system configured with insufficient swap space, or another process on the system consuming all memory resources. If neither of these is the cause, it is possible that the application failed due to a native leak; for example, application or library code that continuously allocates memory without releasing it to the operating system.
For more information: http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf

On Solaris, the recommendation is to configure swap space at about 30% of physical RAM.

A suggested workaround is to add the -XX:+UseDefaultStackSize -Xss256K parameters.

Add: -XX:CodeCacheMinimumFreeSpace=2M -XX:ReservedCodeCacheSize=64M -XX:PermSize=128m -XX:MaxPermSize=384m (sized to match your other JVM settings)

The Sun team may need to be involved for this issue; such issues are usually resolved by Sun support.
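To check the swap-space and kernel-parameter causes listed above, the usual OS utilities apply. A couple of hedged examples (the PA_RISC paths in the trace suggest HP-UX; exact commands vary by platform and release):

swapinfo -tam        # HP-UX: report swap usage
swap -s              # Solaris: report swap usage
kctune maxdsiz       # HP-UX 11i v2 and later: inspect the maxdsiz kernel parameter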


* java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:574)
at weblogic.work.RequestManager.createThreadAndExecute(RequestManager.java:271)
at weblogic.work.RequestManager.executeIt(RequestManager.java:245)
at weblogic.work.ServerWorkManagerImpl.schedule(ServerWorkManagerImpl.java:142)

1) Set the kernel parameter maxdsiz to a higher value.
2) Reduce the current heap size, which frees native memory for thread stacks.
3) Check the kernel limits: ulimit -a
4) If the NPROC soft limit is lower than the hard limit, increase it as needed: ulimit -u <new value>.
Check the operating system documentation to make the change permanent in the OS configuration files.
5) Reduce both the JVM stack size and the OS stack size: set -Xss in the Java options and set ulimit -s at the OS level (see the example below). A smaller per-thread stack leaves room for more native threads in the process address space.
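A hedged example of point 5; the values are illustrative, and JAVA_OPTIONS is the variable that WebLogic start scripts such as setDomainEnv.sh commonly use:

ulimit -s 1024                            # OS stack size: 1 MB per thread
JAVA_OPTIONS="${JAVA_OPTIONS} -Xss256k"   # JVM stack size: 256 KB per thread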


* java.lang.OutOfMemoryError: PermGen space
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Unknown Source)
at java.lang.Class.getConstructor0(Unknown Source)
at java.lang.Class.getConstructor(Unknown Source)

Solution: Increase the maximum PermGen space, for example -XX:MaxPermSize=256m

There can be a leak in the PermGen objects. If tuning parameters do not resolve the issue, use memory leak detector tools to find out which instances in the PermGen space are not getting cleared.
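As a starting point with the Sun JDK tools, a hedged example (the PID is a placeholder):

jmap -permstat <pid>        # summarize class loaders and their PermGen footprint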


* java.lang.OutOfMemoryError: allocLargeObjectOrArray – Object size: 372032, Num elements: 372012
* java.lang.OutOfMemoryError: nativeGetNewTLA

Solution: -XXtlasize:128k -XXlargeobjectlimit:128k

If this does not solve the issue, check the application code for large objects that are created but never released. Take a JRA recording (Oracle JRockit) or use JConsole and memory leak detector tools (jmap, jhat) to analyze the heap.
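For the jmap/jhat route on the Sun JDK, a hedged example (the PID and file name are placeholders):

jmap -dump:format=b,file=heap.hprof <pid>    # write a binary heap dump
jhat heap.hprof                              # browse the dump at http://localhost:7000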


* java.lang.OutOfMemoryError: Java Heap Space

Solution: The first thing to check is the GC logs, to see whether garbage collection is happening properly. If the heap keeps growing gradually even after a full GC, tune the GC algorithms and check whether the behavior stays the same. Use memory leak detector tools (for both the Sun JDK and JRockit) to check which instances from the application are not getting destroyed.

If this is not the case and the application genuinely needs more memory, increase the heap size by using the parameters:
Example: -Xms2048m -Xmx2048m
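To generate the GC logs mentioned above on the Sun JVM, a typical hedged set of flags is:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log

(On JRockit, -Xverbose:memory prints similar information.)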


* java.lang.StackOverflowError

Solution: A stack overflow error is usually generated by a recursive call made by the application (infinite recursion), or by an attempt to allocate more memory on the stack than will fit. The latter is usually the result of creating local array variables that are far too large for the current stack.

For the first possibility, check the application code to see where the recursive call is being made (a toy illustration follows below).
For the second possibility, increase the JVM stack size with the -Xss parameter, for example: -Xss512K
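A minimal toy illustration of the first case (not from any real application):

public class Recurse {
    static void call() { call(); }       // no base case: recurses until the stack is exhausted
    public static void main(String[] args) {
        call();                          // throws java.lang.StackOverflowError
    }
}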


* java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Arrays.java:3209)

Solution: This error means the JVM is spending nearly all of its time in garbage collection while reclaiming very little heap. Diagnose it the same way as 'Java Heap Space' above: check the GC logs, and either fix the leak or increase the heap. The check itself can be disabled with the JVM option -XX:-UseGCOverheadLimit, but that only masks the symptom.


Comparison Between Cluster Multicast Messaging and Unicast Messaging Mode

When servers are in a cluster, the member servers communicate with each other by sending heartbeats to indicate that they are alive. Either unicast or multicast messaging is used for this communication; the mode is chosen in the admin console under Cluster -> Configuration -> Messaging -> Messaging Mode.

To use multicast messaging, hardware configuration and network support for multicast packets are required. Unicast does not have this requirement, which is why unicast is recommended in recent versions.

Multicast messaging is one-to-many communication: every server sends its notification/heartbeat packet to all the other servers. This puts a heavy load on the multicast buffer; when the buffer is full, new multicast messages cannot be written to it, and the application is not notified when messages are dropped. Server instances can therefore miss heartbeats, which may lead to the cluster dropping those instances from the cluster.

Unicast configuration is much easier because it does not require the cross-network configuration that multicast does. It also reduces the potential network errors that can occur from multicast address conflicts.

Mode of Communication:
Multicast – A multicast address and a multicast port are used for listening to messages.

Unicast – A network channel is used for communication between the servers. If no channel is specified, the default network channel is used.


Method of communication between servers:
Multicast – Each server communicates with every member server in the cluster, which means heartbeats are sent to every server.

Unicast – The member servers in the cluster are divided into groups, and a group leader is chosen for each group. The members of a group notify only their leader, and the group leaders notify each other about the availability of all the other servers.
For example: suppose there are 6 servers in the cluster, split into 2 groups with one group leader each. The other 2 servers in each group notify their leader that they are alive, and each group leader passes this information on to the leader of the other group.

The frequency of communication in unicast mode is similar to the frequency of sending messages on the multicast port.


For newer server versions, unicast is recommended because it is a simplified communication mode. For backward compatibility, however, you will need to use multicast if the cluster must communicate with clusters from versions prior to WLS 10.0.

Whenever the messaging mode of the cluster is changed, all the servers in the cluster need a restart, because the change is not dynamic.
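If you prefer to script the change instead of using the console, a hedged WLST sketch follows; the URL, credentials, and cluster name are placeholders, and ClusterMessagingMode is the attribute on the ClusterMBean:

connect('weblogic', 'weblogic', 't3://adminhost:7001')
edit()
startEdit()
cd('/Clusters/myCluster')
cmo.setClusterMessagingMode('unicast')    # or 'multicast'
save()
activate()
# restart the servers in the cluster for the change to take effect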

If you have any questions regarding the unicast and multicast messaging modes, please let us know.

Best Regards,
Wonders Team!