Monday, 10 September 2012

Introduction to DataSource



JDBC:- Java Database Connectivity is a Java programming interface for accessing data from a Java program.
The JDBC API is comprised of two packages:
- The java.sql package (the JDBC core API)
- The javax.sql package (the JDBC Standard Extension API)

JDBC provides two easy ways to connect a Java application to a DB (a file store or a real database); let's understand the very basics first:-

(a) DriverManager:- The DriverManager class is a very simple way of making a connection to a DB from your application. As its name suggests, it manages the list of drivers (provided by vendors) that can be used to make a connection, and this registration is done implicitly. We will not go very deep here, but below is the code extract generally used to connect when DriverManager is being used.

Class.forName("jdbc.odbc.JdbcOdbcDriver");

In the above statement we load the driver class, which implicitly registers the driver with the DriverManager class. Then you are only left with getting a connection to the DB. That's very simple: now that the driver is registered with DriverManager, use its getConnection() method to make a connection to the DB. Yes, you are right, the Driver class has a static initialization section that registers the driver with DriverManager, and DriverManager in turn exposes static methods such as getConnection(parameter1, parameter2, ...).

Connection con = DriverManager.getConnection(db_url,"user_name","password");
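Putting the two statements together, here is a minimal, self-contained sketch of the DriverManager flow described above (the URL, user name and password are placeholders; with JDBC 4.0+ drivers the Class.forName() call is optional because drivers register themselves):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DriverManagerDemo {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // Loading the driver class implicitly registers it with DriverManager
        // (optional with JDBC 4.0+ drivers, which register themselves).
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

        String dbUrl = "jdbc:odbc:myDsn"; // placeholder URL
        Connection con = DriverManager.getConnection(dbUrl, "user_name", "password");
        try {
            System.out.println("Connected: " + !con.isClosed());
        } finally {
            con.close(); // with DriverManager this destroys the physical connection
        }
    }
}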

(b) DataSource:- The JDBC API provides the DataSource interface as an alternative to DriverManager for establishing a connection to the DB. Think of a DataSource as an object that represents a real-world data source. When a DataSource object has been registered with a JNDI naming service, an application can retrieve it from the naming service and use it to make a connection to the data source it represents. Information about the data source and how to locate it, such as its name, the server on which it resides, its port number and so on, is stored in the form of properties on the DataSource object. This makes an application more portable because it does not need to hardcode a driver name, which often includes the name of a particular vendor. It also makes the code easier to maintain: for example, if the data source is moved to a different server, all that needs to be done is to update the relevant property in the data source configuration; none of the code using that data source needs to be touched. Once a data source has been registered with an application server's JNDI namespace, application programmers can use it to make a connection to the data source it represents.

DataSource brings with it some major improvements:-

(i) Portability by using JNDI:- DataSource properties (DB URL, hostname, port) are not required to be hardcoded in the code. The properties of a DataSource can be kept separately, and any changes to the data source or database drivers are made in the configuration file. With DriverManager, these properties are hard coded in the application and for any change we must recompile the code.
Creating a DataSource involves registering it with JNDI (Java Naming and Directory Interface), where a unique name is bound to it. Let's name it 'myds'. From the application side we only have to look up that JNDI name and obtain the DataSource object from it. It can be done as mentioned below:-

Let's go through a simple example now -

InitialContext ic = new InitialContext();
DataSource ds = (DataSource)ic.lookup("jdbc/myds");
Connection con = ds.getConnection();

InitialContext (javax.naming.InitialContext) is the class that implements the Context interface. It provides the starting point for resolution of names that are bound to objects. Now suppose we are using WebLogic/WebSphere as the middleware and our DS is registered with the naming service they provide; then the following code will help our InitialContext object find the starting point.

For WebSphere:-

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
InitialContext ic = new InitialContext(env);
DataSource ds = (DataSource)ic.lookup("jdbc/myds");
Connection con = ds.getConnection();

For Weblogic:-

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
InitialContext ic = new InitialContext(env);
DataSource ds = (DataSource)ic.lookup("jdbc/myds");
Connection con = ds.getConnection();


A Hashtable is well suited here, as it stores entries as key-value pairs and is thread-safe.

(ii) Connection Pooling :-
A connection pool is a pool of ready-made physical database connections. Its advantage is obvious: usually no developer wants his code to open a brand-new connection every time, and reusing pooled connections decreases the response time of the application.

Not only this, a connection pool comes with several detailed configuration settings with which one can tune the way the application responds: for example, the maximum number of connections available in the pool, reap time, purging of stale connections, recycling connections and identifying aged objects in the pool. All these configurations and settings differ from one application server to another.

The connection will usually be a pooled connection. In other words, once the application closes the connection, the connection is returned to a connection pool, rather than being destroyed.

And if you come across the term JDBC provider (usually in WebSphere): by configuring it we provide information about the set of classes used to implement the data source and the database driver, i.e. the environment settings for the DataSource object.

The programming model for accessing a data source is as follows:

1. An application retrieves a DataSource object from the JNDI naming space.

2. After the DataSource object is obtained, the application code calls getConnection() on the data source to get a Connection object. The connection is obtained from a pool of connections.

3. Once the connection is acquired, the application sends SQL queries or updates to the database.
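A hedged sketch of these three steps, assuming a data source bound under the JNDI name jdbc/myds and a hypothetical EMPLOYEE table:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledQueryDemo {
    public void printNames() throws Exception {
        // 1. Retrieve the DataSource object from the JNDI naming space.
        InitialContext ic = new InitialContext();
        DataSource ds = (DataSource) ic.lookup("jdbc/myds");

        // 2. Borrow a Connection from the pool behind the DataSource.
        Connection con = ds.getConnection();
        try {
            // 3. Send SQL to the database.
            PreparedStatement ps = con.prepareStatement("SELECT NAME FROM EMPLOYEE"); // hypothetical table
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("NAME"));
            }
            rs.close();
            ps.close();
        } finally {
            con.close(); // returns the connection to the pool; it is not destroyed
        }
    }
}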

(iii) Distributed Transactions:-

DataSource implementations also provide the interface your application needs to take part in distributed/global transactions in a reliable way.

XA and Non-XA Transaction:-

A non-XA transaction is a local transaction that involves a single resource, for example adding a name to a table in a single DB. An XA transaction is a global transaction, which involves a distributed commit of data across more than one database or other resources.

An XA transaction uses a two-phase commit, which ensures that once the transaction completes the data is in sync across all resources, whether the change is committed or rolled back. The two phases are prepare and commit.
In phase 1, the transaction manager instructs each resource manager to prepare for the commit, to ensure that each resource is able to commit.
Phase 2 depends upon phase 1: if the transaction manager finds no issue with any resource, it proceeds with the commit; otherwise it proceeds with a rollback.


XA transactions involve the coordination of the transaction manager, a service running in your application server (JBoss, WebSphere, WebLogic and many more). A non-XA transaction does not involve the transaction manager. If your application has both local and distributed transactions, using an XA DataSource is enough: a smart XA DataSource will take care of the situation and use the two-phase commit only when it is required.
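To make the idea concrete, below is a rough sketch of a bean-managed JTA transaction, meant to run inside a server-side component, spanning two hypothetical XA data sources (jdbc/ordersXA and jdbc/inventoryXA). The table names and SQL are made up; the application server's transaction manager performs the two-phase commit when ut.commit() is called:

import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class TwoPhaseCommitDemo {
    public void placeOrder() throws Exception {
        InitialContext ic = new InitialContext();
        UserTransaction ut = (UserTransaction) ic.lookup("java:comp/UserTransaction");
        DataSource ordersDs = (DataSource) ic.lookup("jdbc/ordersXA");       // hypothetical XA data source
        DataSource inventoryDs = (DataSource) ic.lookup("jdbc/inventoryXA"); // hypothetical XA data source

        ut.begin();
        Connection c1 = ordersDs.getConnection();
        Connection c2 = inventoryDs.getConnection();
        try {
            Statement s1 = c1.createStatement();
            Statement s2 = c2.createStatement();
            s1.executeUpdate("INSERT INTO ORDERS (ID, ITEM) VALUES (1, 'book')");   // hypothetical SQL
            s2.executeUpdate("UPDATE STOCK SET QTY = QTY - 1 WHERE ITEM = 'book'"); // hypothetical SQL
            ut.commit();   // transaction manager prepares both resources, then commits
        } catch (Exception e) {
            ut.rollback(); // any failure rolls back the change on both resources
            throw e;
        } finally {
            c1.close();
            c2.close();
        }
    }
}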


Saturday, 12 May 2012

Garbage collection

Garbage collection !!


This is the most critical as well as the most mysterious topic. With Nikhil, one of the experts on this topic, as co-writer of this blog, I am going to take a look at garbage collection, its strategies and its implications.
Before I start with the memory allocation discussion, let us look at the different memory types.
Any program/process has its own memory structure allocated, divided into the following segments:
  • Code Segment
  • Data Segment
  • Stack
  • Heap


    1. Code Segment :
The code segment contains the compiled code of the program. Whenever a program executes, the instructions are loaded into memory and executed. This can be verified by taking a look at the .exe file you are trying to execute or the .class file created from your Java program: it shows instructions in binary format. The code segment contains that data.

         2.  Data Segment :
This area stores the static/global variables.

The next two sections we are going to discuss, the stack and the heap, are used to store data. The stack is the place in computer memory where local variables with automatic storage are kept, while the heap is the section of computer memory where all the objects created or initialized at runtime are stored.

        3.  Stack :
The stack is the section of memory that is allocated for automatic variables within functions.
Data is stored on the stack using the Last In First Out (LIFO) method. This means that storage is allocated and de-allocated at only one end of the memory, called the top of the stack; thus only the most recently stored data item can be removed.

         4. Heap :
The heap is the area used to allocate memory at runtime [Dynamic Memory Allocation]. Unlike the stack, data items are allocated and de-allocated in arbitrary order, depending on the program's needs. The pattern of allocation and the size of the blocks are not known until run time.

Today, almost all programming languages use dynamic memory allocation. This allows objects to be allocated and deallocated even when their total size and lifetime are not known at compile time. A dynamically allocated object is stored on the heap, rather than on the stack or statically (in the data segment). The heap allows us to -
a.       Choose dynamically the size of new objects
b.      Define and use recursive data structures such as lists, trees and maps
c.       Return newly created objects to the parent procedure
d.      Return a function as the result of another function
Heap allocated objects are accessed through references. Typically, a reference is a pointer to the object.
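Broadly speaking (and ignoring JIT optimisations such as escape analysis), the same distinction in Java terms looks like this: local variables and references live on the thread's stack, while every object created with new lives on the heap and is reached through a reference.

public class StackVsHeap {
    public static void main(String[] args) {
        int count = 3;                     // primitive local variable: lives in main()'s stack frame
        int[] values = new int[count];     // the reference 'values' is on the stack,
                                           // the array object itself is allocated on the heap
        StringBuilder sb = new StringBuilder("heap"); // the StringBuilder object also lives on the heap
        values[0] = sb.length();
        System.out.println(values[0]);     // prints 4
    }
    // when main() returns its stack frame disappears; the heap objects become
    // unreachable and are eventually reclaimed by the garbage collector
}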

Why do I need Automatic Memory Management?

This is the question that comes to mind when we think about our conventional ways of handling memory allocation versus automatic memory management. I have written this code and I am completely aware of what is going on in it, so what is the need to rely on automatic memory management? Let's take a look at the possible issues with explicit memory management.
If I need to handle the memory management myself, I can use free or delete to release memory. Still, issues like dangling pointers can arise: sometimes memory is freed prematurely even though it is still being referenced, which makes the program unstable. If the program uses that pointer to continue execution, the results are unpredictable. If you are lucky, the program will crash immediately and the debugging can begin; in other cases the program carries on and gives you an incorrect result, which makes the situation worse because you are not exactly aware of what is happening behind the scenes.
The next issue is that the developer fails to free memory, causing a memory leak. For small applications this loss can be benign, but in large applications a memory leak has a significant effect. The leak can lead to further memory management issues when allocating memory and may eventually cause your application to go out of memory.
This issue becomes even more serious when we talk about multithreaded applications or multiprocessor machines. What if a particular object is shared by two different processes, or, in the worst case, by multiple threads? Now more advanced algorithms are needed to make the objects thread-safe. This brings me to the quote "liveness is a global property", whereas the way we deal with freeing memory is a local one. This underlines the need for automatic memory management.


Automatic Memory Management:

The issues stated above can be addressed by garbage collection. The garbage collector reclaims memory only if the object no longer holds any references, so the dangling-pointer issue goes away. The garbage collector traces the objects, identifies the unreachable ones (those which are not being referenced - garbage) and reclaims them. With automatic memory management there is a single collector responsible for reclaiming memory, which keeps memory management thread-safe.
Garbage collection identifies every object that is no longer referenced and frees it, thus reducing the possibility of a memory leak. The garbage collector does not guarantee that a memory leak will not happen, but it does bring down its likelihood: every data structure that is no longer referenced is released, and if it has children associated with it, they are also reclaimed. However, it cannot do anything if a data structure keeps growing while it is still referenced. Memory management cannot be the responsibility of the garbage collector alone; it is also the responsibility of the software engineer.
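A short illustration of that last point, with made-up class and method names: the collector only reclaims unreachable objects, so a structure that keeps growing while it is still referenced will eventually exhaust the heap no matter how good the collector is.

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // The static list holds a strong reference to every element forever,
    // so the garbage collector can never reclaim them.
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void remember(byte[] data) {
        CACHE.add(data); // entries are added but never removed
    }

    public static void main(String[] args) {
        while (true) {
            remember(new byte[1024 * 1024]); // keep allocating 1 MB blocks...
            // ...all of them stay reachable, so this eventually fails with
            // java.lang.OutOfMemoryError: Java heap space
        }
    }
}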

               
Garbage Collection Parameters:
      Going forward we will take a look at a few collectors; any strategy/algorithm for garbage collection has its own trade-offs. We will list and examine each performance parameter for a garbage collector.
Often we are not able to find the perfect algorithm for garbage collection. Every algorithm comes with its pros and cons, so we need to trade off some parameters depending on our requirements. It has been observed that every algorithm has its own strong points and may be around 15% faster than the others on the few parameters it was designed for.
   
     1. Safety :
The most important parameter for garbage collection: garbage collection must be safe, meaning the collector must never reclaim live objects. However, this imposes a performance cost on the implementation.

        2. Throughput :
Every user wants his program to run faster, and from the garbage collection point of view throughput is the proportion of total run time spent doing useful work rather than marking and sweeping. The user wants to spend as little time as possible on memory management, so the complete process of memory allocation and de-allocation should take less time. It is observed that memory allocation usually takes more time than collection, so there are approaches to reducing the allocation cost; for example, mark-sweep-compact compacts the memory after collecting objects in order to reduce fragmentation.

     3. Completeness and promptness :
Ideally, from the completeness perspective, garbage collection should collect all garbage in memory, though that is not always possible or desirable; from a performance perspective it is rarely advisable to collect the complete heap in one go. Thus the heap can be divided into generations, and each generation collected separately.
        Looking further, an object that becomes garbage after a garbage collection has started is only collected in the next cycle. Such an object is called floating garbage. Thus completeness cannot be the only property to look at.

     4. Pause Time :
           It is always desirable that the garbage collector interferes as little as possible with program execution. Many garbage collectors introduce a pause in program execution because they stop memory operations during collection, and this pause time should be as short as possible. One approach is the generational garbage collector, where the heap is divided into generations: frequent, quick collections take place on the nursery (young) generation and only a few collections on the old generation, which helps reduce the pause time. You still need to tune the sizes of the nursery and old generations to make sure you achieve the minimum pause time.
          Parallel garbage collectors stop the world to perform collection but reduce the pause time by employing multiple threads. Concurrent and incremental collectors aim to reduce pause times still further by occasionally performing a small quantum of collection work interleaved or in parallel with memory allocations.


We will take a look at other Garbage Collection algorithms/policies in forthcoming post.

Friday, 30 March 2012

Application Packaging Structure

In a standard J2EE application, modules are packaged as EAR, WAR or JAR files depending on their functionality. Today we will have a little discussion on each module and its structure.

A J2EE application consists of one or more J2EE modules and one J2EE application deployment descriptor. An application deployment descriptor contains a list of the application's modules and information on how to customize the application. A J2EE application consists of one or more Java ARchive (JAR) files along with zero or more Resource ARchive (RAR) files packaged into an Enterprise ARchive (EAR) file with an .ear extension.

A J2EE module consists of one or more J2EE components for the same container type and one component deployment descriptor of that type. A component deployment descriptor contains declarative data to customize the components in the module. A J2EE module without an application deployment descriptor can be deployed as a stand-alone J2EE module. Types of J2EE modules are as follows:

  • Web Application Archive (WAR): A web application is a collection of servlets, HTML pages, classes, and other resources that can be bundled and deployed to several J2EE application servers. A WAR file can consist of the following items: servlets, JSPs, JSP tag libraries, utility classes, static pages, client-side applets, beans, bean classes, and deployment descriptors (web.xml and optionally sun-web.xml).
  • EJB JAR File: The EJB JAR file is the standard format for assembling enterprise beans. This file contains the bean classes (home, remote, local, and implementation), all of the utility classes, and the deployment descriptors (ejb-jar.xml and sun-ejb-jar.xml). If the EJB component is an entity bean with container managed persistence, a .dbschema file and a CMP mapping descriptor, sun-cmp-mapping.xml, must be included as well.
  • Application Client Container JAR File: An ACC client is a Sun Java System Application Server specific type of J2EE client. An ACC client supports the standard J2EE Application Client specifications, and in addition, supports direct access to the Sun Java System Application Server. Its deployment descriptors are application-client.xml and sun-application-client.xml.
  • Resource RAR File: RAR files apply to J2EE Connector Architecture (JCA) connectors. A connector module is like a device driver. It is a portable way of allowing EJB components to access a foreign enterprise system. Each Sun Java System Application Server connector has a J2EE XML file, ra.xml.

Image 1

Let's take a look at them one by one --

JAR - Java ARchive

It is a file format based on the ZIP file format and, like ZIP, it is used for aggregating many files (possibly of different types) into one aggregate file with a (.jar) extension. Application development in Java uses JAR files for many useful purposes. One of the most common uses is to package all the .class files, image files and other files required by an applet into a JAR file, so that downloading a single file brings all the components to the client machine. Otherwise, we would require as many HTTP connections to download each of the components individually, which would of course be a tedious and time-consuming effort. Another popular usage is to bundle all the .class files and other required component files of a subsystem into a JAR file and include that JAR file in the CLASSPATH of another Java application which requires the services of that subsystem. Maintenance and deployment become much easier in this case.

This archive file format is platform-independent and has been fully written in Java. It is capable of handling audio and image files in addition to the .class files. It is an open standard and fully extensible. A JAR file consists of a ZIP archive and an optional manifest file named 'MANIFEST.MF', which contains package and extension related data. The manifest file lives in the optional directory named 'META-INF'. This directory is used to store package and extension configuration data, security related data, versioning related data, services related data, etc.

The EJB container hosts Enterprise JavaBeans based on the EJB API, which is designed to provide extended business functionality such as declarative transactions, declarative method-level security and multi-protocol support - more of an RPC style of distributed computing. The EJB container requires the EJB module to be packaged as a JAR file with an ejb-jar.xml file in its META-INF folder.


WAR - Web ARchive

As the name suggests, this file format is used to package all the components of a web application. It may contain JARs, JSPs, servlets, HTMLs, GIFs, etc. The purpose of this archive format is the same as that of the JAR: to make deployment, shipping and in turn the maintenance process easier. A WAR has an XML file named web.xml as its Deployment Descriptor, which is used by the web container to deploy the web application correctly.

EAR - Enterprise ARchive

An enterprise application may be composed of several web applications and other independent JARs. This archive file format is used in those cases to bundle all the components of an enterprise application into a single file. Again the purpose is the same: making deployment, shipping and hence maintenance easier. Since an enterprise application may have several web applications as its components, an EAR file may contain WARs and JARs. An EAR also contains an XML-based Deployment Descriptor, which is used by the application server to deploy the enterprise application correctly.

Below is the model of a typical EAR.

Image 2

Thursday, 29 March 2012

How to determine the memory consumption of a Java process

- If you are on Linux, you can grep for the java process:

====
[nikhil@nikhil bin]$ ps auxwww |grep java
nikhil    22412     1 18 10:43 pts/4    00:00:02 /home/nikhil/jdk1.6.0_31/bin/java -Djava.util.logging.config.file=/home/nikhil/Studies/apache-tomcat-7.0.26/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/home/nikhil/Studies/apache-tomcat-7.0.26/endorsed -classpath /home/nikhil/Studies/apache-tomcat-7.0.26/bin/bootstrap.jar:/home/nikhil/Studies/apache-tomcat-7.0.26/bin/tomcat-juli.jar -Dcatalina.base=/home/nikhil/Studies/apache-tomcat-7.0.26 -Dcatalina.home=/home/nikhil/Studies/apache-tomcat-7.0.26 -Djava.io.tmpdir=/home/nikhil/Studies/apache-tomcat-7.0.26/temp org.apache.catalina.startup.Bootstrap start
====


- Navigate to the JDK bin directory or add it to your PATH like below:

export PATH=/home/nikhil/jdk1.6.0_31/bin:$PATH

- Execute the jmap command. For a heap summary, run jmap -heap <process id>; you will get output like below:

====
[nikhil@nikhil bin]$ jmap -heap 22412
Attaching to process ID 22412, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 20.6-b01

using thread-local object allocation.
Parallel GC with 4 thread(s)

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 2034237440 (1940.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 85983232 (82.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 31850496 (30.375MB)
   used     = 28962800 (27.621078491210938MB)
   free     = 2887696 (2.7539215087890625MB)
   90.93359174061214% used
From Space:
   capacity = 5242880 (5.0MB)
   used     = 0 (0.0MB)
   free     = 5242880 (5.0MB)
   0.0% used
To Space:
   capacity = 5242880 (5.0MB)
   used     = 0 (0.0MB)
   free     = 5242880 (5.0MB)
   0.0% used
PS Old Generation
   capacity = 84738048 (80.8125MB)
   used     = 1802824 (1.7193069458007812MB)
   free     = 82935224 (79.09319305419922MB)
   2.1275259963505415% used
PS Perm Generation
   capacity = 21757952 (20.75MB)
   used     = 15853608 (15.119178771972656MB)
   free     = 5904344 (5.630821228027344MB)
   72.86351215408509% used
====

The above output shows the memory usage of the java process; let's go a little deeper into the details:


jmap is a JDK utility that attaches directly to the target JVM process (it does not connect over the remote JMX port) and reads its memory details.

1.
==
using thread-local object allocation.
Parallel GC with 4 thread(s)
==

suggests that there are 4 GC threads running with the parallel GC algorithm.

2.
==
Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 2034237440 (1940.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 85983232 (82.0MB)
==

shows the heap configuration and the sizes of the different regions. To get details of these, check my article on JVM memory configuration.

3.
==
Heap Usage:
PS Young Generation
Eden Space:
   capacity = 31850496 (30.375MB)
   used     = 28962800 (27.621078491210938MB)
   free     = 2887696 (2.7539215087890625MB)
   90.93359174061214% used
From Space:
   capacity = 5242880 (5.0MB)
   used     = 0 (0.0MB)
   free     = 5242880 (5.0MB)
   0.0% used
To Space:
   capacity = 5242880 (5.0MB)
   used     = 0 (0.0MB)
   free     = 5242880 (5.0MB)
   0.0% used
PS Old Generation
   capacity = 84738048 (80.8125MB)
   used     = 1802824 (1.7193069458007812MB)
   free     = 82935224 (79.09319305419922MB)
   2.1275259963505415% used
PS Perm Generation
   capacity = 21757952 (20.75MB)
   used     = 15853608 (15.119178771972656MB)
   free     = 5904344 (5.630821228027344MB)
   72.86351215408509% used
==

The above shows the memory usage of the different regions of the Java heap.

Note : the process heap (the total process size reported by the OS) is not the same as the Java heap.
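If you would rather read the Java heap figures from inside the application instead of attaching jmap, the standard java.lang.management API exposes similar numbers; a small sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsagePrinter {
    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();

        MemoryUsage heap = memoryBean.getHeapMemoryUsage();       // young + old generations
        MemoryUsage nonHeap = memoryBean.getNonHeapMemoryUsage(); // perm gen, code cache, etc.

        System.out.println("Heap used      : " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("Heap committed : " + heap.getCommitted() / (1024 * 1024) + " MB");
        System.out.println("Heap max       : " + heap.getMax() / (1024 * 1024) + " MB");
        System.out.println("Non-heap used  : " + nonHeap.getUsed() / (1024 * 1024) + " MB");
    }
}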


Now, if you want to dump information about the objects, classes and references residing on the heap, you can take a heap dump as follows:

jmap -J-d64 -dump:format=b,file=heap.bin <process id>

above is for 64-bit JVMs; if you have a 32-bit Java, try below:

jmap -dump:format=b,file=heap.bin <process id>

here format=b indicates that the dump is written in binary form. You can also produce an ASCII one, but most of the time it is of no use, because the majority of heap dump analyzers expect the binary (hprof) format.
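As a side note, on HotSpot JVMs the same binary (hprof) dump can also be triggered from inside the application through the com.sun.management.HotSpotDiagnosticMXBean; a sketch, with a placeholder output path:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // The second argument 'true' means dump only live (reachable) objects.
        diagBean.dumpHeap("/tmp/mydump.hprof", true); // placeholder output path
        System.out.println("Heap dump written to /tmp/mydump.hprof");
    }
}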


OK, now I want to know about my classloader instances. Here we go:

jmap -permstat <process id>

======
[nikhil@nikhil bin]$ jmap -permstat 22412
Attaching to process ID 22412, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 20.6-b01
7387 intern Strings occupying 666440 bytes.
finding class loader instances ..Finding object size using Printezis bits and skipping over...
done.
computing per loader stat ..done.
please wait.. computing liveness............................................liveness analysis may be inaccurate ...
class_loader classes bytes parent_loader alive? type

<bootstrap> 1424 8722704   null   live <internal>
0x00000000d9f63548 0 0 0x0000000086c001c0 live org/apache/catalina/loader/WebappClassLoader@0x000000008276adf8
0x00000000d9f1efa8 0 0 0x0000000086c001c0 live org/apache/catalina/loader/WebappClassLoader@0x000000008276adf8
0x0000000086c47000 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086d17fa0 0 0 0x0000000086c00230 dead java/util/ResourceBundle$RBClassLoader@0x00000000821f4cb0
0x00000000d9e16c48 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46e18 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46de0 1 3088 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9e16c80 1 3144 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46e50 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46da8 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9d8a470 0 0 0x0000000086c001c0 live org/apache/catalina/loader/WebappClassLoader@0x000000008276adf8
0x0000000086c00ad0 12 78648   null   live sun/misc/Launcher$ExtClassLoader@0x0000000081bd0df0
0x00000000d9e16bb0 1 3144 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9e16ab8 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46ec0 1 3088 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c47098 1 3088 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9e16be8 1 3088   null   dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c47140 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9e16948 17 100592 0x0000000086c001c0 live org/apache/catalina/loader/WebappClassLoader@0x000000008276adf8
0x00000000d9e16b28 1 3088   null   dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c47108 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9e16af0 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46e88 1 3128 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46d70 1 3088 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c470d0 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46f68 1 3128 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46fa0 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c47060 1 3128   null   dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c471b0 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9e16a80 1 3088 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c46ef8 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9f30118 0 0 0x0000000086c001c0 live org/apache/catalina/loader/WebappClassLoader@0x000000008276adf8
0x0000000086c47178 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x00000000d9e16a48 1 3088   null   dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c471e8 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c00230 97 931728 0x0000000086c00ad0 live sun/misc/Launcher$AppClassLoader@0x0000000081c3f970
0x0000000086c46f30 1 3120 0x0000000086c001c0 dead sun/reflect/DelegatingClassLoader@0x0000000081a675e8
0x0000000086c001c0 699 5813752 0x0000000086c00230 live org/apache/catalina/loader/StandardClassLoader@0x0000000081d63dd0

total = 39 2278 15737720     N/A     alive=9, dead=30     N/A
======


OK Baj, I just want to see the objects that are alive on my heap :

jmap -histo:live <process id>

I am not going to paste the output here, because it would be very large  :) :P


Note : jconsole is also a powerful tool with which you can determine JVM runtime details. See the jconsole page.




If you are on Windows: get the process id from the Task Manager (press Ctrl+Alt+Delete). By default it doesn't show the PID, but you can enable that column from the View settings; the rest of the steps are the same.