Overview

In this documentation, we trace the multiple layers of the iWalk code to demonstrate how data from the Crawl database reaches the iSee and iKnow web applications. The other direction – for example, saving data entered by the user back to Crawl – is not discussed, though it follows the same layered process in reverse.

There are at least seven layers between the data in the Crawl database and the information that appears in iSee and iKnow. Seven may be the lucky number, but there are in fact many ways to count the layers. One could argue for ten or fifteen, because within each layer there are sub-layers important enough, and different enough from the others, to warrant separate treatment. Or one could argue for only two: the Data/Business layer and the Presentation layer. But all of this is too abstract. What follows is a discussion of the seven major interlocking points along the data cascade.

Sometimes a layer builds upon the previous one – combining fields, adding new data, and reformatting information; other times, a layer is redundant, with nothing really changing in the exchange, its only purpose being to wrap functionality or to decouple iWalk from the Crawl database. In all cases, however, the layers are set up to be followed in a precise, linear fashion, with the overall task of getting data from Crawl (Part 1) into the front end (Part 2).

The first five layers – Part 1 – require a good understanding of Crawl’s data and its functional concepts. The final two layers – Part 2 – require a good understanding of iWalk functionality and web development.

Each layer plays a small but significant role in the overall process. Be aware that each layer accomplishes its task using a different way of thinking, a different vocabulary, a different technology, a different set of data, and a different way of representing that data.

This complexity requires patience, concentration, creativity and adaptability, along with ease in a number of distinct skills – both functional and technical. And time: there is an estimated two-year learning curve. So, be patient with the process. At times, this document may let slip some criticism of the iWalk framework, or frustration with its code. These remarks are personal to the writer and are not intended to be merely critical: they are added constructively, to remind the reader that the learning process is difficult for everybody.

Document Plan

What follows is a general description of each layer, focusing on the overall thinking and technologies. Many details are left to the reader's own grappling with the code and, indeed, can only be understood by working directly with it. The Appendix contains a coding example that walks through each layer in detail, to make the general descriptions more concrete. So, as you read the general comments, feel free to consult the Appendix example to get a more concrete sense of each layer's role and technology.

The direction used in the majority of this document is bottom-up, meaning we look first at the data and then work our way slowly towards the screens of iSee and iKnow. However, as will be seen in the appendix's concrete example, experience teaches that from a coding point of view it is probably better to work backwards, from top to bottom, as this helps the developer organize and focus his or her search through the seemingly limitless functionality embedded in the lowest levels of the chain, those closest to the database.

The Enumerated Layers

The general chain of command is as follows: Crawl Data to Crawl DL and TL packages to WBP Packages and Procedures to EXT Data Layer to Java Web Services (JWS) to XSLT Front-End Generator to Static Web Pages.

The first layer is Crawl Data, which fetches data from the Crawl database using SQL.

The second layer is Crawl DL and TL packages, which wraps the first layer’s SQL.

The third layer is WBP Packages and Procedures, which is a thin layer over the Crawl Database, mostly using the DL and TL packages but sometimes accessing Crawl data directly with SQL.

The fourth layer is EXT Data Layer, which uses the previous WBP to organize the specific business requests from the front end.

The fifth layer is Java Web Services (JWS), which combines specific calls to EXT Data Layer to come up with the full set of data for a specific web page. This set of data is then converted into a readable XML that will be parsed by the next layer.

The sixth layer is XSLT Front-End Generator, which calls the previous JWS and receives the resulting XML data set, which it then parses into HTML and Java Script.

The seventh layer, as already indicated, can be seen as any number of embedded layers within the previous six, or it can legitimately be considered the Static Web Pages installed on the server, which call – or support – the code generated by the XSLT Front-End layer.

Crawl Data

In some ways, this is the simplest part. With a little effort and SQL know-how, the data that serves Crawl and therefore iWalk is readily available even to the beginner developer. With a well written query, the developer has direct access to all of the information in Crawl.

Things get more complicated once the developer is distanced from this underlying simplicity – as will be seen in the next sections dealing with the various Data and Web layers.

Crawl Database

But first, it is good to start with a short description of the Crawl database. While there are seemingly countless tables – a clearly intimidating prospect – nobody expects the developer to count or know all of Crawl’s tables. Crawl developers, within their respective domains, need only understand a small subset of tables. For the iWalk developer, this subset will be referred to as iWalk relevant tables.

Many relevant tables are rather simple and straightforward. Some are a bit more difficult but can be mastered in a short time. A few, however, are genuinely complex – individually or as sets – because of their large number of records, fields, or foreign relations. Even so, every large table has only a small set of fields that are central to it, such as IDs, foreign keys, and meaningful data like names or key financial figures. For any given query, therefore, every table takes on a familiar and simplified role within iWalk. For example, while the COMPANIES table might have hundreds of fields and hundreds of thousands of records, with many apparent duplicates and other such oddities, a well-written query for a specific purpose will sidestep these complexities, and the developer will quickly come to a good understanding of how to use the COMPANIES table. The same goes for the VEHICLES and LEAS% tables, which are quite central to iWalk.
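To make this concrete, here is a minimal, hypothetical query in the spirit of the above. The table names COMPANIES and VEHICLES come from Crawl, but the column names and the join used here are invented for illustration and will differ in the real schema.

    -- Hypothetical example only: column names and the join condition are
    -- assumptions; the real Crawl tables expose far more fields than this.
    SELECT c.company_id,
           c.company_name,
           v.vehicle_id,
           v.registration_no
      FROM companies c
      JOIN vehicles  v
        ON v.company_id = c.company_id       -- assumed foreign key
     WHERE c.company_id = :p_company_id;     -- bind variable for one company

The point is simply that, at this level, a focused query like this is all that stands between the developer and the data.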

Thus, iWalk, at the database level, is rather simple and straightforward; however, understanding this data, while crucial, is only a small part of the learning process.

The Crawl database is called by the next layer of the framework, the Crawl DL and TL layers.

The iWalk Database (EXT)

Before advancing to the next layer, it is important to note that there are three database schemas in the iWalk framework.

All of the data used in iWalk comes from the Crawl database schema. In this sense, iWalk is like a dumb terminal providing a view of the server-side data in Crawl.

However, all of the iWalk code is actually located in the database schema called EXT. This schema is discussed below, in the front-end layers.

Crawl’s TL and DL Packages

The first coded level of the iWalk framework is contained in the Transaction and Data Layers. These packages begin with 'TL_%' or 'DL_%' respectively and are located in the Crawl database. Note that these packages are also used by Crawl Forms and other processes unrelated to iWalk.

Here is where iWalk fetches all of Crawl’s data.

These packages contain a solid base of Crawl functionality, encapsulated in well-aged, reliable queries and procedures. They also add conditions, calculated data, and special formatting. As any good transaction and data layer should, these packages encapsulate the business meaning of the underlying database, thereby avoiding, among other things, duplicated code and the reinvention of already-written functionality.
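As a hypothetical sketch only – the package and function names below are invented, not actual DL_/TL_ packages – a call into this layer typically looks like a plain PL/SQL function call rather than raw SQL:

    DECLARE
      l_name  VARCHAR2(200);
      l_limit NUMBER;
    BEGIN
      -- Instead of querying COMPANIES directly, ask the Data Layer package
      -- for the encapsulated, business-aware answer (names are illustrative).
      l_name  := dl_company.get_company_name(p_company_id => 12345);
      l_limit := dl_company.get_credit_limit(p_company_id => 12345);
    END;

The benefit is that any business rules behind "the company name" or "the credit limit" live in one place; the cost, as described next, is that the developer must learn what that one place actually does.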

That is the theory. In practice, these layers also add a somewhat opaque structure over the original data, requiring the developer to understand and operate within a largely undocumented, idiosyncratic, business-heavy coding environment that was written primarily to serve Crawl Forms.

Ideally, this would be the only data layer to concern the iWalk developer. But it is not, as will be seen in the next few layers. And equally ideally, this layer would be a black box. Instead, it is a grey box: “grey” because the developer will need to constantly look at and understand what the code is doing and will sometimes need to ask the owners of this layer to make changes.

These packages are called by the next layer, the WBP. 

WBP Data Layer

This layer is, in theory, a thin layer over the previous Crawl TL and DL packages. However, that thinness fattens, adding quite a large layer of code between the top and bottom of the framework.

WBP was written (I believe) to decouple iWalk from Crawl. It contains packages with functions that call the Crawl DL and TL Layers.

Note that, while WBP uses the DL and TL layers, it can also – and quite often does – access the Crawl database directly using SQL. WBP therefore does not rely entirely on the DL or TL layers to get data from Crawl; there is no apparent technical or architectural barrier preventing WBP from accessing the database directly. This is important to know, because it complicates the reading and maintainability of the code.
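The sketch below – with invented package, function, and column names – illustrates the point: a single WBP-style function may wrap a DL call and, in the same breath, query the Crawl tables directly.

    -- Hypothetical sketch (all names invented). In WBP this would live inside
    -- a package; it is shown as a standalone function here for brevity.
    CREATE OR REPLACE FUNCTION get_company_summary (p_company_id IN NUMBER)
      RETURN VARCHAR2
    IS
      l_name  VARCHAR2(200);
      l_count NUMBER;
    BEGIN
      -- Re-use of the Crawl DL layer...
      l_name := dl_company.get_company_name(p_company_id => p_company_id);

      -- ...mixed with a direct query against the Crawl schema.
      SELECT COUNT(*)
        INTO l_count
        FROM vehicles v
       WHERE v.company_id = p_company_id;

      RETURN l_name || ' (' || l_count || ' vehicles)';
    END get_company_summary;
    /

When reading WBP code, then, the developer has to keep both access styles in mind at once.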

WBP was written only for iWalk. Furthermore, the WBP layer is the only layer that iWalk uses to access the Crawl database: All access to Crawl functionality must pass through the WBP layer.

Just like the Crawl TL and DL layers, the WBP layer is not within the coding responsibility of the iWalk developer. This means that it too is a grey box and if there is a change required at this layer, it must be done by another team.

One (perhaps obvious) generalization to make at this point is that the Crawl database contains all permanent data, and that the Crawl DL and TL layers expose only a subset of that data, plus calculated data built on top of the saved data. With that in mind, note that WBP is an even smaller subset of the DL and TL layers – it takes only what it needs from the Crawl DL and TL data – and it, too, adds calculated data. Therefore, when making a typical change – like adding a new field, or modifying its value – the main job in WBP is to figure out how to cascade the changed data between Crawl and the front-end layers, often enlarging the various subsets involved.

When we speak of the WBP layer, we are always referring to more than one package in the WBP schema being needed to solve an iWalk query. There is rarely a one-to-one relationship between what the developer needs and how to obtain it; in other words, the developer often needs to combine several WBP calls to formulate a complete query.

Now, if this layer really were a thin layer over the Crawl DL and TL layers, the developer would only need to understand the database and the DL and TL layers in Crawl. However, WBP is one of the many idiosyncrasies in iWalk that take some getting used to. In this case, the difficulty lies in disentangling the three layers of database access: SQL, the Crawl DL and TL layer, and the multiple combinations of packages and procedures in the WBP layer.

WBP is called by the EXT layer. 

EXT Data Layer

Put simply, the EXT data layer combines the WBP and DL/TL and SQL layers to build a full data result set that is then used to build the front end.

It is important to note that, in terms of framework, there are two distinct framework layers within the EXT schema: the Data Layer, which accesses WBP, and the Front-End Layer, discussed later, which generates the web pages. In this section we are discussing only the data layer.

How does the EXT data layer function? Like WBP, the EXT data layer contains a number of packages that, when properly combined, represent a full business query. Within these packages, EXT uses stored procedures, functions and types that call the WBP layer and serialize its data as data cursors, sending these cursors to the next layer, the Java Web Service. The Java code then turns those cursors into classes, which in turn are converted to XML. The resulting XML contains all of the data needed to build the web page (we discuss the JWS in more detail in the next section).

Within this process of combining WBP data and transforming it into XML, there is unfortunately a good deal of additional business-specific and idiosyncratic logic that the developer needs to understand.

Noteworthy among these is what is, technically, an additional layer: the IL_% packages, a thin layer over most WBP calls. So the process is to call a set of EXT functions, which call a set of IL functions, which in turn call a set of WBP functions.
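A minimal sketch of this chain – again with invented names, not the actual EXT or IL packages – would be an EXT procedure that opens a cursor built from IL calls (which themselves call WBP) and hands that cursor to the Java Web Service:

    -- Hypothetical sketch (all names invented). In EXT this would live inside
    -- a package; it is shown as a standalone procedure here for brevity.
    CREATE OR REPLACE PROCEDURE get_contract_page_data (
      p_contract_id IN  NUMBER,
      p_result      OUT SYS_REFCURSOR)
    IS
    BEGIN
      -- The functions called here stand in for IL_ wrappers, which call WBP,
      -- which calls the Crawl DL/TL packages or plain SQL.
      OPEN p_result FOR
        SELECT il_contract.get_label(p_contract_id)         AS contract_label,
               il_contract.get_status(p_contract_id)        AS contract_status,
               il_company.get_name(
                 il_contract.get_company_id(p_contract_id)) AS company_name
          FROM dual;
      -- The open cursor is handed to the Java Web Service, which turns the
      -- rows into Java classes and then into the XML used by the front end.
    END get_contract_page_data;
    /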

A question can be asked: if we have already decoupled Crawl from iWalk in the WBP layer, what is the reason for this additional layer? And we can take the question one level further: why do we have two additional layers over WBP?

One answer to that question comes from understanding the technology used at the front end: iWalk uses XSLT and XML to generate HTML. Therefore, EXT and JWS combine to create the XML. WBP does not do this (although it could).

This is a good time to mention the JavaScript functions LoadNewPage and XMLDI_load, which are discussed in other KT documentation. These JS functions combine the XML generated by this EXT data layer with the XSLT of the other EXT layer (the Front-End Generator layer), which parses the XML using XSLT to generate the HTML and JS of iSee and iKnow. Layer upon layer upon layer …

Another answer is the need to create a sort of business-object layer (the JWS classes and the resulting XML) over the more general-purpose, horizontal WBP layer. That said, it would be a mistake to consider this EXT data layer object-oriented, even if it makes use of the object-oriented class layer in the Java web services. EXT is as horizontal as WBP.

So to repeat, as it bears repeating: this EXT data layer combines the WBP, DL/TL and SQL layers to build a full data result set that is then used to build the front end. It consists of a number of packages in the EXT schema, with functions and types that return data cursors to the Java Web Service. The Java code then turns those cursors into classes, which in turn are converted to XML. The resulting XML contains all of the results the front-end developer needs to build the iSee and iKnow screens.

For more detail, continue reading the following section on the Java Web Services.

Java Web Services (JWS)

Essentially, the starting point of any iSee or iKnow web page is to first generate a set of relevant data from the Crawl database. That relevant data is in the XML format already mentioned. Creating this XML is the purpose of the JWS, with the help of the EXT Data Layer described above.

To give an example, the JWS contains an operation called setMLA(). This operation uses parameters, session variables, and specific calls to the EXT data layer to assemble the set of data needed by the currently active web page. Each piece of data is ultimately retrieved from the Crawl database, but accessed via four layers (EXT, WBP, DL/TL, SQL). setMLA() will make a number of different calls to the EXT layer: one to get all MLA contracts, another to get company-specific data when the company ID is known, and others to get active lease and vehicle information. Still other calls are more general – company, employee, or user details, or the security data that determines what the user can do and see.

In a word, whatever the web page needs, there will be an operation in the JWS that combines a number of EXT data-layer calls (which use WBP, and so on) to return the data as XML. These calls all use parameters and session variables to filter the data.

A word on session variables

Between pages, iWalk uses session variables to save data. This creates a context that the JWS will use. It is important that any new development respect the current usage of session variables; otherwise it will break later pages in a given user workflow. These session variables are saved directly to the EXT database (the SESSION_DATA table) on every page refresh. They replace the use of cookies.
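As a rough sketch of what this looks like at the database level – the column names here are assumptions, not the actual SESSION_DATA definition – reading a session variable amounts to a simple lookup keyed on the session:

    -- Hypothetical illustration: the real SESSION_DATA columns will differ.
    SELECT sd.variable_value
      FROM session_data sd
     WHERE sd.session_id    = :p_session_id
       AND sd.variable_name = 'CURRENT_COMPANY_ID';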

There are several hundred session variables, so there is quite a steep learning curve in getting used to them. And there are a number of combined uses of these session variables that make it challenging to figure out the right combination or to modify their behavior.

The XML (The Returned Data)

In the end, the EXT data layer and the JWS together have managed to build an XML document of data, which the JWS then sends to the EXT Front-End Generator. The good news is that this XML is easy to use – on a par with querying the Crawl database as one of the simplest parts of the whole iWalk complex. Unfortunately, this simplicity is entirely lost in the next layer, the XSLT layer.

XSLT, The Front-End Generator Layer

What is XSLT (Parsing the XML)?

*If the reader is already comfortable working with XSLT, skip to the next section.

Just a quick word on XSLT: XSLT is not a programming language. Or, if it is, it is not a very good one. It was designed to transform XML, not to write an application or build a web page.

If we compare its role in iWalk to that of, say, JSP or ASP, it clearly serves the same purpose as these server-side scripting languages. This is unfortunate, because while XSLT has strong points in parsing XML, the weakness of its syntax pervades iWalk nearly to the point of unreadability. (Consider what Microsoft has said about XSLT as a scripting language.)

To enlarge upon this statement: consider the developer who, at this point in the cascaded framework, has already worked through a number of complex and idiosyncratic layers to get access to basic Crawl functionality. Now the developer has to make a serious mental shift to web page development. The tools with which he or she must do this are PL/SQL and XSLT, neither of which is a web development language.

The unfortunate result of combining the cumbersome syntax of XSLT with the unwieldy programming paradigms of PL/SQL (with its thousands of lines of oddly indented code per procedure) goes a long way toward explaining the difficulty and tedium of working with the front end of iWalk. On top of that, every line of the actual front-end technology (the HTML and JavaScript) is embedded in the PL/SQL and XSLT, making run-time and syntax errors quite difficult to trace.

Putting aside, therefore, the quantity, difficulties, and tedium of the previous data layers, this front-end layer alone could explain some of the strongly felt demotivation of working with iWalk.

Main XSLT Packages for iSee and iKnow

There are two central Front-End packages in iWalk:

– For iSee, we look at J2_BXSLT.PBD

– For iKnow, we look at J2_CXSLT.PBD

These two packages are the starting points for all iSee and iKnow web pages. They contain the code that generates the basic web page framework as well as most of the content. Some of the content generation is delegated to helper packages (for example, J2_DATALOADER, which helps generate the proposal screen in iSee), and there are security, administrative, and other ancillary packages that help modularize the iWalk front-end functionality.

The XSLT Embedded Within HTML or JavaScript

What does XSLT do? Its primary functional role is to give the front-end developer access to the underlying XML that was so patiently put together in the previous five layers (SQL, DL/TL, WBP, EXT, and JWS). This means that every attribute, element and block of elements in the XML is readily available to the front-end developer.

XSLT, as a programming language, also contains IF conditions to help generate different HTML depending on the XML data. With XSLT, the developer can create variables from the content of the XML to be used in the generated HTML and/or JS. XSLT can also be used to perform LOOPS through the XML to build list boxes or to search for matches.
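Because in iWalk the XSLT itself lives inside PL/SQL packages, a sketch of these features ends up looking like the following – a purely hypothetical fragment, with invented element names, in the spirit of the J2_% packages rather than copied from them:

    DECLARE
      l_xsl VARCHAR2(4000);
    BEGIN
      -- An XSLT loop over repeated XML elements, with a condition, building
      -- an HTML list box; assembled as a string, as the front-end packages do.
      l_xsl :=
           '<xsl:for-each select="/page/contracts/contract">'
        || '<xsl:if test="status = ''ACTIVE''">'
        || '<option value="{id}"><xsl:value-of select="label"/></option>'
        || '</xsl:if>'
        || '</xsl:for-each>';
    END;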

The result of the parsing is built-from-scratch HTML, which forms the entire web page sent back to the client.

In addition to the HTML, a large amount of JavaScript is generated from scratch within the PL/SQL. One example is the PL/SQL function ContractDetails() in J2_ECXSLT, in which JavaScript functions such as updateContract() are generated by the XSLT framework. A more complex example of JS generation can be seen in the function check_disable() in the J2_DATAENTRY package, which adds a further layer of complexity by using JavaScript code embedded within the DATAENTRY tables to build the final code structure.

All of this can result not only in run-time errors that are hard to find but also in syntax errors within the final generated code. This brings us to a final point: in order to debug iSee or iKnow, the developer will often need to use the browser's developer mode to find the generated code.

A Word on Debugging

There is a separate document that shows how to set up the debugging environment. All of the usual debugging methods are employed: printing to the console, displaying JS alert messages, using JDeveloper with breakpoints, writing out PL/SQL code points and, as mentioned above, using the browser's developer mode.
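For the PL/SQL code points in particular, the simplest form is shown below; this is a generic sketch, not taken from the iWalk code, and the project may well use its own logging utility instead.

    DECLARE
      l_contract_id NUMBER := 12345;  -- value observed at the code point
    BEGIN
      -- Write a marker and a value that can be read back with SERVEROUTPUT ON
      -- in SQL*Plus / SQL Developer, or captured by a logging wrapper.
      DBMS_OUTPUT.PUT_LINE('get_contract_page_data: p_contract_id = ' || l_contract_id);
    END;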

Static Web Pages

Finally, iSee and iKnow are installed on the server as web applications. So even though many of their pages are dynamically generated, they contain, like all web applications, their own set of static web pages, CSS, images, and scripts.

These can be found in PVCS:

  • For iSee, le b, which is eb_sf.zip
  • For iKnow, le c, which is ec_sf.zip

These zipped files can be installed on a web server; see the separate documentation on how to deploy these applications.

All images, scripts, CSS, etc., can refer to – or be referred to by – the generated HTML (see sliderobject(), called by J2_ECXSLT, although the code itself is located on the server in utilities/slider/slider.js). Or, vice versa, the static pages can refer to the generated HTML – or, more precisely, start the process that generates it (see StartApp() in script/utilities.js).

Changes in the static layer are usually cosmetic (CSS, images) or related to generic script functionality (see the script and utilities folders).

Done. Framework discussion closed. That's the bare bones, barely skeletal; to give it form, flesh and blood, the best process is to work with an experienced Crawl developer and to spend quality, focused time on each layer and idiosyncrasy described above, ideally in development rather than evaluation mode. A model learning process is described in a separate document.

A Final Word on the Development Process

As already mentioned in the general overview, the description above looks at the process from bottom to top, or database to front end; but in fact, for the coding example in the Appendix, and for coding in general, it is probably far easier to work "backwards" – that is, to design the screen changes first, using placeholders for new information, and then to work back down through the various layers to find the information you need. If the information is already there, fine: your work is done. If, however, as in most cases, the information is missing in one or more layers, then add the new field backwards as well, once again using placeholders – adding new fields to the JWS class or classes and the EXT package or packages, but stopping at the WBP layer, which is the responsibility of another team. This process of working backwards, at least in the beginning, is the least intimidating approach.

But note that actual changes at the WBP and Crawl levels (the first three layers) are the responsibility of other teams. So, in terms of planning, it is good to know what is missing at these layers as soon as possible, so that the other teams can get their work done in a timely manner.

Done.