May 23, 2005

Approach to developing Resource Adapters using J2CA

One of my friends, Karthik, asked me to write something about the J2EE Connector Architecture (JCA). I have a year or so of experience with JCA; I was part of the team that developed the RDBMS Adapter for the WebLogic Platform.
The section below is part of a write-up on JCA that I gave to my boss. The first part is an overview of JCA, which I did not think was crucial enough to post here; you can find plenty of primers on J2CA. The seeker seeks it.
The second part of the write-up is basically the title of this blog post, and it is based on my limited experience and knowledge.
Here it goes...

Approach to developing Resource Adapters using J2CA

JCA is an interface definition. Sun provides no specific guidelines, approaches, or reference implementations for creating resource adapters.

The approaches broadly fall under two categories:

  • Non-Framework based approach
  • Framework based approach

Before deciding on one of these approaches, it is recommended to build a technology proof of concept (POC): a basic adapter that connects to each type of EIS, so that technical risks are mitigated as early as possible. This is important because we are integrating J2EE systems with disparate legacy/EIS systems, and the technology POC singles out and eliminates the main elements of risk. The goal can be to implement a single requirement, be it connection related or transaction related, using the technology we would like to prove.

The following outline of a development approach applies to building resource adapters:

  • Research EIS requirements

Identify the required EIS and the appropriate services that need to be exposed
Identify the expensive Connection object
Implement a POC
Identify security needs
Identify the type of transaction (XA transaction, local transaction, or no transaction)

  • Development environment configuration

Set up the file/directory structure
Set up the build process

  • Implement Service provider interface

Implement the interfaces that comprise the SPI, at least the following three interfaces:
ManagedConnectionFactory, which supports connection pooling by providing methods for matching and creating a ManagedConnection instance.
ManagedConnection, which represents a physical connection to the underlying EIS
ManagedConnectionMetaData, which provides information about the underlying EIS instance associated with a ManagedConnection instance

  • Implement Client Connection Interface (optional)

Implement the interfaces that comprise the CCI, including the Connection interface and the Interaction interface.
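To make the split between the SPI and the CCI concrete, here is a heavily simplified sketch in plain Java. The class names mirror the roles of ManagedConnectionFactory, ManagedConnection, and the CCI Connection, but they are illustrative stand-ins invented for this sketch, not the real javax.resource interfaces (which carry many more methods and are implemented against the application server's pooling contract):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// SPI side: the app server pools these "physical" EIS connections.
class ManagedFileConnection {
    boolean destroyed = false;
    void destroy() { destroyed = true; }                 // ManagedConnection.destroy analogue
    FileConnection getConnection() { return new FileConnection(this); }
}

class ManagedFileConnectionFactory {
    private final Deque<ManagedFileConnection> pool = new ArrayDeque<>();

    // matchManagedConnections analogue: reuse a pooled physical connection if any.
    ManagedFileConnection allocate() {
        return pool.isEmpty() ? new ManagedFileConnection() : pool.pop();
    }
    void release(ManagedFileConnection mc) { pool.push(mc); }
}

// CCI side: the lightweight handle that application code sees.
class FileConnection {
    private final ManagedFileConnection physical;
    FileConnection(ManagedFileConnection physical) { this.physical = physical; }
    boolean isValid() { return !physical.destroyed; }
    String interact(String input) {                      // Interaction.execute analogue
        return "EIS-echo:" + input;
    }
}
```

The point of the split is that the expensive physical connection (SPI) is created and pooled by the container, while the application only ever holds cheap handles (CCI).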

  • Package the Adapter

Package the adapter into a RAR file.
Deploy the adapter on the target application server platform.

  • Test Adapter

Test the adapter for functionality and performance on the desired platforms and EIS versions

  • Release the adapter

Non-Framework based approach

This approach is primarily useful if we are implementing only one type of resource adapter. Here everything needs to be implemented from scratch, i.e. all the steps outlined above are executed. If an adapter factory is envisaged, the framework-based approach is recommended.

Framework based approach

When there is a need to create resource adapters for more than one EIS, development effort can be greatly reduced by defining a generic adapter framework. This is because the common features applicable for all adapters can be built into the framework.
The framework can also implement some default behavior that can be extended as needed by the developer. For example, the JCA specification requires that the InteractionSpec implementation class provide getter and setter methods that follow the JavaBeans design pattern. To support this pattern, support for PropertyChangeListeners and VetoableChangeListeners is required in the implementation class. The framework can take care of this and other low-level details, allowing the adapter developer to focus on implementing the EIS-specific details of the adapter.
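As a sketch of the boilerplate such a framework could absorb, the JavaBeans listener support can be centralized in a small base class using the standard java.beans API (InteractionSpecSupport, MyInteractionSpec, and the functionName property are hypothetical names for this sketch; a real framework would add VetoableChangeSupport the same way):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Hypothetical framework base class: concrete InteractionSpec implementations
// inherit listener management and only declare their own properties.
class InteractionSpecSupport {
    protected final PropertyChangeSupport changes = new PropertyChangeSupport(this);
    public void addPropertyChangeListener(PropertyChangeListener l) {
        changes.addPropertyChangeListener(l);
    }
    public void removePropertyChangeListener(PropertyChangeListener l) {
        changes.removePropertyChangeListener(l);
    }
}

// What the adapter developer writes: just the EIS-specific property,
// firing the change event the JavaBeans pattern requires.
class MyInteractionSpec extends InteractionSpecSupport {
    private String functionName;
    public String getFunctionName() { return functionName; }
    public void setFunctionName(String name) {
        String old = this.functionName;
        this.functionName = name;
        changes.firePropertyChange("functionName", old, name); // notify listeners
    }
}
```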

The common features that the framework can implement for all resource adapters include, but are not limited to:

  • Basic support for internationalization and localization of exception and log messages for an adapter
  • Logging toolkit support
    -that allows you to log localized messages to multiple output destinations.
  • Getter and setter methods for standard connection properties (username, password, server, connectionURL, and port) as the connection properties are common to all the EIS
  • License checking facility (applicable for EIS vendors who market the adapters)
  • Default Connection event listeners for logging connection-related events
  • Simplifying the clean-up and destruction of Connections
    -destroying Connection instances when a connection-related error occurs
  • Generic exception handling
  • Abstract base implementations of the primary interfaces
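A minimal sketch of such a framework base class, assuming it standardizes the common connection properties and a logger (all class and property names here are hypothetical, not taken from any vendor toolkit):

```java
import java.util.Properties;
import java.util.logging.Logger;

// Hypothetical framework base class: the connection properties common to
// every EIS and a logger live here, so each concrete adapter adds only
// its EIS-specific parts.
abstract class AbstractAdapterFactory {
    protected final Logger log = Logger.getLogger(getClass().getName());
    private String userName, password, server, connectionURL;
    private int port;

    public void setUserName(String v)      { userName = v; }
    public void setPassword(String v)      { password = v; }
    public void setServer(String v)        { server = v; }
    public void setConnectionURL(String v) { connectionURL = v; }
    public void setPort(int v)             { port = v; }

    // Collected view of the standard properties, e.g. for building an
    // EIS-specific connect string in the subclass.
    public Properties connectionProperties() {
        Properties p = new Properties();
        if (userName != null)      p.setProperty("user", userName);
        if (password != null)      p.setProperty("password", password);
        if (server != null)        p.setProperty("server", server);
        if (connectionURL != null) p.setProperty("url", connectionURL);
        p.setProperty("port", Integer.toString(port));
        return p;
    }
}

// A concrete adapter only fills in what is specific to its EIS.
class LdapAdapterFactory extends AbstractAdapterFactory {
    public String connectString() {
        Properties p = connectionProperties();
        return "ldap://" + p.getProperty("server") + ":" + p.getProperty("port");
    }
}
```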

Instead of investing a lot of effort and time in planning and designing a custom framework from scratch, there are many frameworks available from popular application server vendors that can be used right away: BEA has the Adapter Development Toolkit, IBM has its own Connector framework, and Sun has the Sun ONE Connector Builder. These are best used with their respective application servers, since they are designed for them, and porting to other application servers can be difficult. But a lot can definitely be gained by studying these frameworks' documentation, which can aid in building your own custom framework.

In conclusion, the approach depends on factors such as business drivers, time to market, complexity, and the available developer skill set. Having a framework in place adds structure and consistency to the process and the product, reducing recurring development, enhancement, and maintenance costs.

May 16, 2005

Classic Mistakes of Testing

One of my friends, Anil Agrawal, is a Quality Leader at C-SAM Solutions. He sent me an excellent email on classic testing mistakes, and I thought I would share it with you all. Thanks, Anil.

Here it goes...

A first major mistake people make is thinking that the testing team is responsible for assuring quality. This role, often assigned to the first testing team in an organization, makes it the last defense, the barrier between the development team (accused of producing bad quality) and the customer (who must be protected from them).
It's characterized by a testing team (often called the "Quality Assurance Group") that has formal authority to prevent shipment of the product. That in itself is a disheartening task: the testing team can't improve quality, only enforce a minimal level.
Worse, that authority is usually more apparent than real. Discovering that, together with the perverse incentives of telling developers that quality is someone else's job, leads to testing teams and testers who are disillusioned, cynical, and view themselves as victims.

We've learned from Deming and others that products are better and cheaper to produce when everyone, at every stage in development, is responsible for the quality of their work ([Deming86], [Ishikawa85]).

In practice, whatever the formal role, most organizations believe that the purpose of testing is to find bugs. This is a less pernicious definition than the previous one, but it's missing a key word. When I talk to programmers and development managers about testers, one key sentence keeps coming up: "Testers aren't finding the important bugs." Sometimes that's just griping, sometimes it's because the programmers have a skewed sense of what's important, but I regret to say that all too often it's valid criticism. Too many bug reports from testers are minor or irrelevant, and too many important bugs are missed.

Now for the complete article on classic mistakes, click here.

May 10, 2005

Hibernate Vs EJB 2.1(Entity Beans)

I need to add a lot of meat to the skeleton I created in the previous post, i.e. the one on J2EE Architect worries. I have all the main sections created, and I need to elaborate on each one of them. It will take time, and I need time to do that :).
I was planning to write on System Design worries, but my boss urgently requested that I write something about EJB and Hibernate and their differences, based on my experience and some googling. This was urgently required for a proposal.
I finished this two-pager and thought I would share it with you all. It might be of some help.

Hibernate 3.0 & EJB 2.1 (CMP)

Hibernate is a powerful, ultra-high performance Object/Relational (OR) persistence and query service for Java.
  • integrates elegantly with all popular J2EE application servers and Web containers without any restrictions.
  • can also be used in stand alone Java applications.
  • supports and implements the EJB 3.0 (JSR 220) persistence standardization.

This comparison between Hibernate and EJB 2.1 is strictly with CMP entity beans, not with the EJB platform per se. The EJB platform (session beans/MDBs) offers many advantages, such as componentization, remote access to applications, support for a variety of clients (Java, CORBA, etc.), and an asynchronous messaging model.

The EJB persistence mechanism (CMP entity beans) has many issues/disadvantages:

  • they are heavyweight components
  • high runtime overhead
  • have a poor track record on the performance
  • cumbersome to develop, with many interface definitions, limiting developer productivity
  • Bean partitioning (Each bean a row in some table/Not every row of every table a Bean)
  • Vendor dependent CMP optimizations can help performance, at cost of portability
  • Inheritance not supported
  • Cannot be used for persistence in non-application server environments.
  • There is no dynamic query mechanism to lookup entity beans (finders are specified at compile time).
  • It is not easy to write unit tests for beans as it is not possible to use them outside of the application server.
  • No support for automatic primary key generation
  • Only relational databases are supported
The above list of disadvantages is the primary reason for the popularity of simple OR frameworks like Hibernate.

Hibernate Features/Advantages
Hibernate is an open source product (similar to Struts and Log4j), is not vulnerable to vendor lock-in, and is supported by JBoss Inc.
  • Hibernate works on POJO principles and is lightweight
  • Hibernate is much easier to use than handwritten SQL/JDBC (i.e. BMP beans), and much easier to use and much more powerful than Entity Beans 2.1
  • Hibernate always executes SQL statements using a JDBC PreparedStatement, which allows the database to cache the query plan.
  • Hibernate is able to implement certain optimizations (caching, outer join association fetching, JDBC batching, etc.) much more efficiently than typical handwritten JDBC.
  • You may use Hibernate from servlets or Struts actions, or from behind an EJB session bean facade. In a CMT environment, Hibernate integrates with the JTA Datasource and TransactionManager, as well as JNDI.
  • Hibernate has a very sophisticated second-level cache architecture and supports pluggable cache implementations.
  • Hibernate supports composite keys
  • Hibernate supports the persistence of plain instance variables through JavaBeans-style getter/setter properties
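As an illustration of the POJO approach, a minimal Hibernate 3.0 XML mapping for a hypothetical Order class might look like the following (the class, table, and column names are invented for this sketch):

```xml
<!-- Order.hbm.xml: hypothetical mapping. Note the native id generator,
     something CMP 2.1 entity beans have no standard equivalent for. -->
<hibernate-mapping>
  <class name="example.Order" table="ORDERS">
    <id name="id" column="ORDER_ID">
      <generator class="native"/> <!-- automatic primary key generation -->
    </id>
    <property name="customerName" column="CUSTOMER_NAME"/>
    <property name="total" column="TOTAL"/>
  </class>
</hibernate-mapping>
```

The application then persists plain objects through the Session API (for example, session.save(order)), with no component interfaces to implement.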

Useful links: ;)

May 5, 2005

J2EE Architect Worries-Contd..

In the previous post I chewed on Methodology. Let's move on to the other worries. :-)

2. System Design:

  1. Scoping Requirements
  2. System Interfaces
  3. Reviews of UML Artifacts
  4. Functional Documents
  5. Data Modeling

3. Architectural Artifacts:

  1. Functional/Logical System Model
  2. Physical System Model
  3. Component/Packaging Model
  4. Third Party Service Providers and Component libraries
  5. Application Framework
  6. Data Management Strategy
  7. System Transactional requirements
  8. Application Integration & External Systems
  9. Security Requirements
  10. Performance Requirements
  11. Internationalization requirements
  12. System Transition Strategy

4. Environments:

  1. Development Environment
  2. Integration testing environment
  3. User Acceptance testing environment
  4. Production Environment

5. Development (my favourite one):

  1. Application Framework
  2. Coding Standard
  3. Use Case Realizations with UML artifacts and functional documents
  4. Software Configuration Management
  5. Development environment
  6. Daily development activities
  7. Unit Testing Framework
  8. Trainings & Sessions

  • Coding Standards
  • Architecture
  • Application Framework and Programming Model
  • Third Party Component Usage and Specifications
  • Performance Requirements
  • Version Control System

More later... first I need to elaborate on each of the above.

May 4, 2005

What does a J2EE Architect need to worry about?

My aspiration is to become a J2EE Architect, and I'm steadily progressing in that direction. Based on my experience with a few large J2EE projects, reading many books on J2EE, googling, and reading articles, I try to list the responsibilities/worries of a J2EE Architect. This is a WIP document; I will be updating it based on comments and suggestions.

The best architects are good technologists and command respect in the technical community, but also are good strategists, organizational politicians (in the best sense of the word), consultants and leaders.
(I have taken this from here. This is a good site on Software Architecture Discipline)

Here is the list:
1. Methodology
2. System Design
3. Architectural Artifacts
4. Environments
5. Development
6. Build and Deployment
7. Testing
8. Tools
9. Miscellaneous Deliverables

Let's take them one by one.
1. Methodology
a. Rational Unified Process (RUP)
b. Agile Methodologies
c. Extreme Programming (XP)
d. Crystal
e. Feature Driven Development
f. Hybrid of RUP and XP

a. Rational Unified Process (RUP)
The Rational Unified Process collects many of the best practices of OO analysis and design to form a process framework with 38 different artifacts. RUP is not generally considered lightweight, although a lightweight configuration called dx (“xp” turned upside down) exists. Of course, not all 38 artifacts are required in either RUP or dx. In fact, the process framework is configurable to as few as two (use cases and code) artifacts. However, the general RUP-based process uses quite a few requirements, analysis, and design artifacts because its developers based this process on the activities of the OOA/D movement.

b. Agile Methodologies

c. Extreme Programming (XP)
Extreme Programming has been the pioneer in the modern movement toward lightweight processes. XP emphasizes a single major artifact, the code itself. This process uses 3” x 5” cards to capture requirements in user stories and design via CRC (class, responsibilities, and collaboration) cards, the minor artifacts of the process. XP is much more than user stories, CRC cards, and coding, however. Testing frameworks and innovative practices such as pair programming (working in groups of two people) make XP an interesting addition to the field of software development processes

d. Crystal
Crystal is a lightweight process that contains 20 artifacts. This might sound like a heavier process than XP but most of the artifacts are informal and can take the form of “chalk talks” (working problems out on a chalk board), conversations, and e-mails. Of these 20 artifacts, only the final system, the test cases, and the documentation are formal. Crystal divides its artifacts into levels of precision (20,000-foot view, 5,000-foot view, 10-foot view) to allow developers to focus on their objectives.

e. Feature Driven Development
Feature-Driven Development is an incremental approach that uses as few as four artifacts (feature list, class diagram, sequence charts, and code). The FDD process focuses development using two-week iterations to show quick tangible results. Among the contributions this process provides is a semantic-based class diagram template—called the domain neutral component, which differentiates types of classes by color—to aid class designers in developing a domain model.

f. Hybrid of RUP and XP

Secure Socket Layer

HTTPS is HTTP running over the Secure Sockets Layer (SSL).
SSL (now at version 3.0) is a standard protocol, proposed by Netscape, for implementing cryptography and enabling secure transmission on the Web.

The primary goal of the SSL protocol is to

  • provide privacy and reliability between two communicating parties.

The two security aims of SSL are

  • To authenticate the server and the client using public key signatures and digital certificates.
  • To provide an encrypted connection for the client and server to exchange messages securely

SSL runs on top of a reliable transport (TCP), below application protocols such as HTTP.
SSL uses

  • certificates,
  • private/public key exchange pairs and
  • Diffie-Hellman key agreements


  • Symmetric cryptography is used for data encryption
  • Asymmetric or public key cryptography is used to authenticate the identities of
    the communicating parties and encrypt the shared encryption key when an SSL session is established.
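This division of labor can be sketched with the JDK's own crypto APIs: an RSA key pair stands in for the key behind the server certificate, wrapping a symmetric session key that then encrypts the bulk data (AES is used here purely for the sketch; SSL 3.0 suites used ciphers such as RC4, and the class name is ours):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

// Hybrid scheme in miniature: asymmetric keys protect the shared
// symmetric key; the symmetric key encrypts the actual data.
public class HybridDemo {
    static String roundTrip(String msg) {
        try {
            KeyPair rsa = KeyPairGenerator.getInstance("RSA").generateKeyPair();
            SecretKey session = KeyGenerator.getInstance("AES").generateKey();

            // "Client -> server": session key wrapped with the server's public key.
            Cipher wrap = Cipher.getInstance("RSA");
            wrap.init(Cipher.WRAP_MODE, rsa.getPublic());
            byte[] wrappedKey = wrap.wrap(session);

            // "Server" recovers the session key with its private key.
            Cipher unwrap = Cipher.getInstance("RSA");
            unwrap.init(Cipher.UNWRAP_MODE, rsa.getPrivate());
            SecretKey recovered =
                (SecretKey) unwrap.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);

            // Bulk data now flows under the fast symmetric cipher.
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, session);
            byte[] ciphertext = aes.doFinal(msg.getBytes(StandardCharsets.UTF_8));
            aes.init(Cipher.DECRYPT_MODE, recovered);
            return new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello over an SSL-like channel"));
    }
}
```

The asymmetric step is expensive, which is exactly why SSL uses it only once per session to establish the shared key.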

SSL comprises three protocols:

  • record protocol
  • handshake protocol
  • alert protocol

The record protocol defines the way that messages passed between the client and servers are encapsulated. At any point in time it has a set of parameters associated with it, known as a cipher suite, which defines the cryptographic methods being used.

The handshake protocol runs on top of the SSL Record protocol. It defines a series of messages in which the client and server negotiate the type of connection that they can support, perform authentication, and generate a bulk encryption key. During a typical SSL session, the server and client exchange several Handshake protocol messages during the transaction. Depending on the chosen encryption type, a server using the SSL protocol uses public-key encryption technologies to authenticate itself to the client.

The alert protocol also runs over the SSL record protocol. It signals problems with the SSL session, ranging from simple warnings (e.g., unknown certificate, revoked certificate, expired certificate) to fatal error messages that immediately terminate the SSL connection. For example, you might receive the "You are about to leave a secure Internet connection" warning because an SSL client received a close_notify alert from an SSL server.

Operation of SSL

The client initiates an SSL session by requesting a URL with the https scheme directly.
HTTPS uses port 443 by default; other SSL-secured services run on their own well-known ports.
For encryption, SSL cipher suites combine algorithms such as

  • RC4-128 (bulk encryption),
  • Diffie-Hellman 1024 (key exchange),
  • MD5 (message digest) and
  • Null.

The encryption is carried out at layer 4 i.e. the socket layer.

The major elements in an SSL connection are:

1) The cipher suites that are enabled
2) The compression methods that can be used (the compression algorithms are used to compress the SSL data and should be lossless)
3) Digital certificates and private keys, used for authentication and verification
4) Trusted signers (the repository of trusted signer certificates, used to verify the other entities’ certificates)
5) Trusted sites (the repository of trusted site certificates)
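The first of these elements, the enabled cipher suites, can be inspected through the standard JSSE API (the class name here is ours; no network access is involved):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Lists the cipher suites that the default SSLContext enables for
// new connections on this JVM.
public class SslElements {
    static String[] enabledCipherSuites() {
        try {
            SSLContext ctx = SSLContext.getDefault();
            SSLParameters params = ctx.getDefaultSSLParameters();
            return params.getCipherSuites();   // suites enabled by default
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        for (String suite : enabledCipherSuites()) {
            System.out.println(suite);
        }
    }
}
```

SSLContext.getSupportedSSLParameters() returns the larger set of suites the provider supports but does not enable by default.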

SSL Handshake

The steps involved in an SSL transaction, before the communication of data begins, are described in the following list:
1) The client sends the server a Client Hello message. This contains a request for a connection along with the client capabilities, like the version of SSL, the cipher suites and the data compression methods it supports.
2) The server responds with a Server Hello message. This includes the cipher suite and the compression method it has chosen for the connection and the session ID for the connection. Normally, the server chooses the strongest common cipher suite. If the server is unable to find a cipher suite that both the client and server support, it sends a handshake failure message and closes the connection.
3) The server sends its certificate if it is to be authenticated, and the client verifies it. Optionally the client sends its certificate and the server verifies it.
4) The client sends the ClientKeyExchange message. This is random key material, and it is encrypted with the server’s public key. This material is used to create the symmetric key to be used for this session, and the fact that it is encrypted with the server’s public key is to allow a secure transmission across the network. The server must verify that the same key is not already in use with any other client. If this is the case, the server asks the client for another random key.
5) When client and server agree on a common symmetric key for encrypting the communication, the client sends a ChangeCipherSpec message indicating the confirmation that it is ready to communicate. This message is followed by a Finished message.
6) In response, the server sends its own ChangeCipherSpec message indicating the confirmation that it is ready to communicate. This message is followed by a Finished message.

7) Client and Server exchange the encrypted data.

The problems associated with SSL are:

  • It prevents caching.
  • Using SSL imposes greater overheads on the server and the client.
  • Some firewalls and/or web proxies may not allow SSL traffic.
  • There is a financial cost associated with obtaining a certificate for the server/subject device.