IBM InfoSphere DataStage



IBM InfoSphere DataStage is an ETL tool and part of the IBM Information Platforms Solutions suite and IBM InfoSphere. It uses a graphical notation to construct data integration solutions and is available in various versions such as the Server Edition and the Enterprise Edition.

Originally a data extraction and transformation program for Windows NT/2000 servers, DataStage pulls data from legacy databases, flat files and relational databases and loads the results into data marts and data warehouses. Formerly a product of Ascential Software Corporation, which IBM acquired in 2005, DataStage became a core component of the IBM WebSphere Data Integration suite.

DataStage originated at VMark,[1] a spin-off from Prime Computer that developed two notable products: the UniVerse database and the DataStage ETL tool.


The first VMark ETL prototype was built by Lee Scheffler in the first half of 1996[1].

Peter Weyman, VMark's VP of Strategy, identified the ETL market as an opportunity. He appointed Lee Scheffler as the architect and conceived the "Stage" product brand name to signify modularity and component orientation.[2]

This tag was used to name DataStage and was subsequently reused in the related products QualityStage, ProfileStage, MetaStage and AuditStage.

Lee Scheffler presented the DataStage product overview to the board of VMark in June 1996 and it was approved for development.

The product was in alpha testing by October 1996, in beta testing by November, and became generally available in January 1997.

VMark acquired UniData in October 1997 and renamed itself Ardent Software.[3] In 1999 Ardent Software was acquired by Informix,[4] the database software vendor.

In April 2001 IBM acquired Informix but took only the database business, leaving the data integration tools to be spun off as an independent software company called Ascential Software.[5]

In November 2001, Ascential Software Corp. of Westboro, Mass. acquired privately held Torrent Systems Inc. of Cambridge, Mass. for $46 million in cash.

Ascential announced a commitment to integrate the parallel processing capabilities of Torrent's Orchestrate engine directly into the DataStageXE platform.[6]

In March 2005 IBM acquired Ascential Software[7] and made DataStage part of the WebSphere family as WebSphere DataStage.

In 2006 the product was released as part of the IBM Information Server under the Information Management family but was still known as WebSphere DataStage.

In 2008 the suite was renamed to InfoSphere Information Server and the product was renamed to InfoSphere DataStage[8].

•Enterprise Edition: the name given to the version of DataStage with a parallel processing architecture and parallel ETL jobs.

•Server Edition: the name of the original version of DataStage representing Server Jobs. Early DataStage versions only contained Server Jobs. DataStage 5 added Sequence Jobs and DataStage 6 added Parallel Jobs via Enterprise Edition.

•MVS Edition: mainframe jobs, developed on a Windows or Unix/Linux platform and transferred to the mainframe as compiled mainframe jobs.

•DataStage for PeopleSoft: a Server Edition with prebuilt PeopleSoft EPM jobs under an OEM arrangement with PeopleSoft and Oracle Corporation.

•DataStage TX: for processing complex transactions and messages, formerly known as Mercator.

•DataStage SOA: the Real Time Integration pack, which can turn server or parallel jobs into SOA services.




Saturday, April 3, 2010

WebSphere DataStage XML

Part I. Publishing XML documents from table data

Publishing XML documents based on existing table data is a common scenario. Sometimes relational tables or sequential files must be transformed into hierarchical XML structures, such as XML documents or XML chunks. In this case, the XML output stage can be used to generate the XML output. It uses XPath expressions to map input table fields to particular positions in the output documents.
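To illustrate the idea of mapping flat fields into a hierarchy, here is a small Python sketch (not DataStage itself; the field names and target paths are hypothetical). In the XML output stage the mapping is declared with an XPath per column, such as /customers/customer/@num; the sketch builds the same shape by hand:

```python
# Sketch: turning a flat row into an XML hierarchy, analogous to what
# the XML output stage does when each input column carries an XPath.
import xml.etree.ElementTree as ET

# Hypothetical input row, as it might arrive from a relational source.
row = {"CUST_NUM": "C001", "CUST_NAME": "Acme Corp"}

# CUST_NUM maps to /customers/customer/@num,
# CUST_NAME maps to /customers/customer/name/text().
root = ET.Element("customers")
cust = ET.SubElement(root, "customer", num=row["CUST_NUM"])
ET.SubElement(cust, "name").text = row["CUST_NAME"]

print(ET.tostring(root, encoding="unicode"))
```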

Sample 1. Generate XML files based on two tables using XML output stage
Figure 1. Job diagram of XML publishing



Overview of sample 1

In sample 1, customer data and contact data are extracted from two corresponding DB2 tables, as shown in Figure 1. The transformer is used to replace complex SQL: it integrates the data and feeds the joined result to the XML output stage through DSLink6. The XML output stage then generates the XML results and saves them to the file system. Figure 1 briefly describes the whole application demo.

The general steps for deployment are:

1. Define and deploy the DB2 tables
2. Prepare the XML structure
3. Import the DB2 table and XML table definitions
4. Set up the DB2 stages with a transformer to provide joined data
5. Set up the XML output stage to generate the XML document
6. Compile and run

Let's examine these steps in detail:

Step 1. Define and deploy DB2 tables

Deploy these tables into the sample database on the local DB2 server using the DDL shown in Listing 1, and load some sample data, as shown in Figures 2.1 and 2.2:

Listing 1. DDL for customer and contact table

--customer table
CREATE TABLE S_CUST
(
    CUST_NUM  CHARACTER(10) PRIMARY KEY,
    CUST_NAME VARCHAR(100)  NOT NULL
);

--contact table
CREATE TABLE S_CONTACT
(
    CNT_NUM  INTEGER
             GENERATED ALWAYS AS IDENTITY ( START WITH 1, INCREMENT BY 1 )
             PRIMARY KEY,
    CUST_NUM CHARACTER(10) NOT NULL,
    F_NAME   VARCHAR(50)   NOT NULL,
    L_NAME   VARCHAR(50)   NOT NULL,
    EMAIL    VARCHAR(100)  NOT NULL
);
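The join that the transformer performs on these two tables can be sketched as follows. This is only an illustration, assuming Python with SQLite standing in for the DB2 stages and the transformer (the IDENTITY column is adapted to AUTOINCREMENT, and the sample rows are hypothetical, not the data from Figures 2.1 and 2.2):

```python
# Sketch of the joined data that flows over DSLink6: customer and
# contact rows matched on CUST_NUM, ready for the XML output stage.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Listing 1 schema, adapted to SQLite.
cur.execute("""CREATE TABLE S_CUST (
    CUST_NUM  CHARACTER(10) PRIMARY KEY,
    CUST_NAME VARCHAR(100) NOT NULL)""")
cur.execute("""CREATE TABLE S_CONTACT (
    CNT_NUM  INTEGER PRIMARY KEY AUTOINCREMENT,
    CUST_NUM CHARACTER(10) NOT NULL,
    F_NAME   VARCHAR(50) NOT NULL,
    L_NAME   VARCHAR(50) NOT NULL,
    EMAIL    VARCHAR(100) NOT NULL)""")

# Hypothetical sample rows.
cur.execute("INSERT INTO S_CUST VALUES ('C001', 'Acme Corp')")
cur.execute("INSERT INTO S_CONTACT (CUST_NUM, F_NAME, L_NAME, EMAIL) "
            "VALUES ('C001', 'Jane', 'Doe', 'jane.doe@example.com')")

# The join the transformer replaces complex SQL with: one output row
# per contact, carrying its customer's fields alongside.
rows = cur.execute("""
    SELECT c.CUST_NUM, c.CUST_NAME, t.F_NAME, t.L_NAME, t.EMAIL
    FROM S_CUST c JOIN S_CONTACT t ON c.CUST_NUM = t.CUST_NUM
""").fetchall()
print(rows)
```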
