
AS/400 TCP/IP Autoconfiguration: DNS and DHCP Support



 


International Technical Support Organization
SG24-5147-00
AS/400 TCP/IP Autoconfiguration: DNS and DHCP Support
April 1998
M. Adan, S. Goodrich, A. Grant, M. Hamada, G. Ilmberger
http://www.redbooks.ibm.com

© Copyright International Business Machines Corporation 1998. All rights reserved.
Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

First Edition (April 1998)
This edition applies to Version 4 Release 2 of OS/400 (5769-SS1 V4R2), Version 3 Release 1 Modification 3 of Client Access/400 for Windows 95/NT (5763-XD1 V3R1M3), and Version 4 Release 1 or Version 4 Release 2 of Firewall for AS/400 (5769-FW1).

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. JLU Building 107-2
3605 Highway 52N
Rochester, Minnesota 55901-7829
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix B, "Special Notices" on page 441.

Contents

Preface
The Team That Wrote This Redbook
Comments Welcome

Part 1. AS/400 DNS Support

Chapter 1. Domain Name System Concepts and Overview
  1.1 Overview
  1.2 Domain versus Zone of Authority
  1.3 Name Resolution
  1.4 Types of Name Servers
  1.5 Split DNS Concept for Firewalls
  1.6 Types of Files
  1.7 Types of Records
  1.8 Round Robin and Address Sorting
  1.9 For More Information

Chapter 2. AS/400 DNS Server Implementation
  2.1 DNS Software Prerequisites
  2.2 DNS Installation
  2.3 DNS Server Jobs
  2.4 DNS Configuration Files
  2.4.1 Logging / Service Files
  2.5 DNS Server User Interface
  2.5.1 DNS Server Configuration through Operations Navigator
  2.5.2 Change DNS Attributes Command (CHGDNSA)
  2.5.3 Start TCP Server *DNS
  2.6 NSLOOKUP
  2.7 Host Table Migration Program
  2.8 DNS Server Backup and Recovery Considerations

Chapter 3. Implementing Primary and Secondary DNS Servers
  3.1 Scenario Overview
  3.1.1 Scenario Objectives
  3.1.2 Scenario Advantages
  3.1.3 Scenario Disadvantages
  3.1.4 Scenario Network Configuration
  3.2 Task Summary
  3.2.1 Planning the Primary Domain
  3.2.2 Creating the Primary Name Server on As1
  3.2.3 Configuring AS1 as a Mail Server
  3.2.4 Starting the DNS Server on AS1
  3.2.5 Verifying That the DNS Server is Operational
  3.2.6 Creating a Secondary DNS Server
  3.2.7 Primary Name Server Security Considerations
  3.2.8 Reconfigure Clients to Use the DNS Server
  3.3 Summary

Chapter 4. Migrating an NT Primary DNS to AS/400 System
  4.1 Migrating NT DNS Server Primary Domain Files
  4.1.1 Scenario Objective
  4.2 Task Summary
  4.2.1 Reviewing Primary DNS Configuration on the NT Name Server
  4.2.2 Transferring DNS Files from the NT Server to the AS/400 System IFS
  4.2.3 Importing the Domain Data
  4.2.4 Configure Forwarders Manually
  4.3 Configuring the NT DNS Server as a Secondary DNS Server
  4.3.1 Deleting the Primary DNS Configuration
  4.3.2 Configuring the Secondary Name Server
  4.4 Summary

Chapter 5. Growing Your Domain: Creating Subdomains
  5.1 Scenario Overview
  5.1.1 Scenario Objectives
  5.1.2 Scenario Advantages
  5.1.3 Scenario Disadvantages
  5.1.4 Scenario Network Configuration
  5.2 Task Summary
  5.3 Planning to Subdomain
  5.3.1 Defining the Zone of Authority
  5.4 Method 1: Adding a Subdomain and Maintaining Authority
  5.4.1 Configure AS1 Primary Name Server
  5.4.2 Configure the Secondary Name Server As5
  5.5 Method 2: Adding a Subdomain and Delegating Authority
  5.5.1 Configuring AS1 as Internal Root
  5.5.2 Removing Subdomain Configuration from the Parent Server AS1
  5.5.3 Delegating the Subdomain on the Parent Server AS1
  5.5.4 Delegating the In-Addr.Arpa File on the Parent Server AS1
  5.5.5 Configuring the Child Server Otherhost
  5.5.6 Internal Root Server Configuration on the Child Server
  5.5.7 Reconfigure the Otherdomain Clients
  5.5.8 Verifying DNS with Name Server Lookup
  5.5.9 Method 2's Secondary Name Server AS5
  5.6 Mail Between Otherdomain.mycompany.com and Mycompany.com
  5.6.1 AS1 as the Only Mail Server in the Network
  5.6.2 Otherhost as the Mail Server for Otherdomain.mycompany.com
  5.7 The Child Server Otherhost's IFS Directory Files
  5.8 Round Robin/Address Sorting
  5.9 Summary

Chapter 6. Split DNS: Hiding Your Internal DNS Behind a Firewall
  6.1 Scenario 1: Configuring Your DNS to Forward Queries to a Firewall
  6.1.1 Scenario Objectives
  6.1.2 Scenario Advantages
  6.1.3 Scenario Disadvantages
  6.1.4 Scenario Network Configuration
  6.2 Task Summary
  6.2.1 Verify the AS/400 TCP/IP Configuration on AS1
  6.2.2 Verify the AS/400 Mail Configuration
  6.2.3 Firewall Installation and Configuration
  6.2.4 Updating the Firewall Configuration to Use the Internal DNS
  6.2.5 Configuring Forwarders in the Internal DNS
  6.2.6 Client Configuration
  6.3 Sharing a LAN Adapter Between the AS/400 and Integrated PC Server
  6.3.1 AS/400 System TCP/IP Configuration
  6.3.2 Firewall Configuration
  6.3.3 Internal DNS Server Configuration
  6.4 Scenario 2: Multiple Mail Servers Behind the Firewall
  6.4.1 Scenario Objectives
  6.4.2 Scenario Network Configuration
  6.4.3 Scenario Advantages
  6.4.4 Scenario Disadvantages
  6.5 Task Summary
  6.5.1 Verify the AS/400 TCP/IP Configuration
  6.5.2 Verify the AS/400 Mail Configuration
  6.5.3 Verify the Firewall Installation and Configuration
  6.5.4 Internal DNS Configuration
  6.5.5 Considerations for Exchanging Mail with Internet Users
  6.5.6 Solving the CC: Problem

Chapter 7. Providing DNS Services on the Internet
  7.1 Scenario Overview
  7.1.1 Scenario Objectives
  7.1.2 Scenario Advantages
  7.1.3 Scenario Disadvantages
  7.1.4 Scenario Network Configuration
  7.2 Task Summary
  7.2.1 Planning the ASISP Name Server Configuration
  7.2.2 Create the inc.com Primary Domain Files on ASISP
  7.2.3 Create the msu.edu Primary Domain Files on ASISP
  7.2.4 Configure the Root Servers on ASISP
  7.2.5 Create the Secondary Domain Files for mycompany.com on ASISP
  7.2.6 Create the Secondary Domain Files on ASISP2
  7.2.7 Configure the Root Servers on ASISP2
  7.2.8 Configure the Clients

Chapter 8. DNS Server Tips, Tools, and Problem Determination
  8.1 Tips and Tools
  8.1.1 Tips for Preventing Problems
  8.1.2 Tips for Performance
  8.1.3 Tools for Problem Determination
  8.1.4 AS/400 Job Logs
  8.1.5 NSLOOKUP
  8.1.6 Dump Server Statistics
  8.1.7 Run Debug
  8.1.8 DNS Server QUERYLOG
  8.1.9 DNS Server Dump Database
  8.1.10 Tips on Debugging Mail on an AS/400 System
  8.2 Problem Symptoms and Probable Causes
  8.3 For Additional Help With Problems

Part 2. AS/400 DHCP Server Support

Chapter 9. DHCP Concepts and Overview
  9.1 BOOTP, the Predecessor of DHCP
  9.2 DHCP Overview
  9.3 How does DHCP Work?
  9.3.1 How is Configuration Information Acquired?
  9.3.2 How are Leases Renewed?
  9.3.3 What Happens when a Client Moves out of its Subnet?
  9.3.4 How are Changes Implemented in the Network?
  9.3.5 What are BOOTP/DHCP Relay Agents?

Chapter 10. AS/400 DHCP Server Implementation
  10.1 DHCP Software Prerequisites
  10.2 DHCP Installation
  10.3 DHCP Server Jobs
  10.4 DHCP Configuration Files
  10.4.1 Log Files
  10.5 DHCP Server User Interface
  10.5.1 DHCP Server Configuration through Operations Navigator
  10.5.2 Change DHCP Attributes Command (CHGDHCPA)
  10.5.3 Start TCP Server *DHCP
  10.6 BOOTP-to-DHCP Migration Program
  10.7 DHCP Server Exit Programs
  10.8 DHCP Server Backup and Recovery Considerations

Chapter 11. Start Here: Implementing DHCP in a Simple Network
  11.1 Scenario Overview
  11.1.1 Scenario Objectives
  11.1.2 Scenario Advantages
  11.1.3 Scenario Disadvantages
  11.1.4 Scenario Network Configuration
  11.1.5 Network Addressing Scope Planning
  11.2 Task Summary
  11.3 Verify Hardware, Software, and Configuration Prerequisites
  11.4 Configuration Overview
  11.4.1 Configure TCP/IP Interface on the AS/400 System
  11.4.2 Gather Information to Configure the DHCP Server
  11.4.3 Configure DHCP Server through Operations Navigator
  11.5 Configuring DHCP Clients
  11.5.1 Configuring DHCP on Windows 95 Clients
  11.5.2 Configuring DHCP on the IBM Network Station
  11.6 Selecting the Bootstrap Host for the IBM Network Station
  11.7 Summary

Chapter 12. Using Multiple DHCP Servers to Minimize Failures
  12.1 Scenario Overview
  12.1.1 Scenario Objectives
  12.1.2 Scenario Advantages
  12.1.3 Scenario Disadvantages
  12.1.4 Scenario Network Configuration
  12.2 Dividing the Address Pool across Two DHCP Servers
  12.2.1 Objectives
  12.2.2 Advantages
  12.2.3 Disadvantages
  12.3 Task Summary
  12.3.1 Verify Hardware, Software, and Configuration Prerequisites
  12.3.2 Reduce the Primary DHCP Server IP Address Pool
  12.3.3 Change the Number of Options on the Primary and Backup DHCP Servers
  12.3.4 Add the Remaining IP Addresses to the Backup Server
  12.3.5 Change the Lease Time on the Primary and Backup DHCP Servers
  12.3.6 Start the Primary and Backup DHCP Servers
  12.4 Providing Full-DHCP Client Support
  12.4.1 Objectives
  12.4.2 Advantages
  12.4.3 Disadvantages
  12.4.4 Network Addressing Scope Planning
  12.4.5 Task Summary
  12.4.6 Verify Hardware, Software, and Configuration Prerequisites
  12.4.7 Enlarge the Primary DHCP Server IP Address Pool
  12.4.8 Add the Remaining IP Addresses to the Backup DHCP Server
  12.4.9 Start the Primary and Backup DHCP Servers
  12.5 Summary

Chapter 13. Multiple Subnets and DHCP Servers
  13.1 Scenario Overview
  13.1.1 Scenario Objectives
  13.1.2 Scenario Advantages
  13.1.3 Scenario Disadvantages
  13.1.4 Scenario Network Configuration
  13.2 Task Summary
  13.3 Configuration Overview
  13.3.1 Configuring TCP/IP Interfaces on AS1
  13.3.2 Gathering Information to Configure DHCP Servers
  13.3.3 Configuring DHCP Server Support in AS1
  13.3.4 Configuring TCP/IP Interfaces on AS5
  13.3.5 Configuring DHCP Server Support on AS5
  13.3.6 Start the DHCP Server Support on Both Systems
  13.3.7 Configuring DHCP Clients
  13.3.8 Analyzing the DHCP Logs
  13.3.9 Conclusion
  13.4 Configuring Subnet B on AS1
  13.5 Summary

Chapter 14. Multiple Subnets, DHCP Servers, and Relay Agents
  14.1 Scenario Overview
  14.1.1 Scenario Objectives
  14.1.2 Scenario Advantages
  14.1.3 Scenario Disadvantages
  14.1.4 Scenario Network Configuration
  14.2 Task Summary
  14.2.1 Planning the TCP/IP Addressing Scheme
  14.2.2 Gathering Information to Configure DHCP Servers and DHCP Relay Agents
  14.2.3 Configure the Primary DHCP Server (AS1)
  14.2.4 Configure the Backup DHCP Server (AS2)
  14.2.5 Configure Routing Information on Both DHCP Servers
  14.2.6 Configuring a BOOTP/DHCP Relay Agent
  14.2.7 Configure the Microsoft NT BOOTP/DHCP Relay Agent
  14.2.8 Start the DHCP Servers and BOOTP/DHCP Relay Agents
  14.3 Summary

Chapter 15. Configuring Twinax IBM Network Station with DHCP
  15.1 Getting Started: Basic IP over Twinax Configuration
  15.1.1 Scenario Overview
  15.1.2 Scenario Objectives
  15.1.3 Scenario Advantages
  15.1.4 Scenario Disadvantages
  15.1.5 Scenario Network Configuration
  15.1.6 Task Summary
  15.1.7 Define a TCP/IP Address Range
  15.1.8 Configure and Start the DHCP Server on AS2
  15.1.9 Start the IBM Network Station
  15.1.10 Summary
  15.2 Transparent Subnet Masking
  15.2.1 ARP and Proxy ARP
  15.2.2 Twinax Transparent Subnetting
  15.3 Configuring Twinax IBM Network Station with Local DHCP Server
  15.3.1 Scenario Objectives
  15.3.2 Scenario Advantages
  15.3.3 Scenario Disadvantages
  15.3.4 Scenario Network Configuration
  15.4 Task Summary
  15.4.1 Plan the TCP/IP Addressing Scheme
  15.4.2 Carve out 64 Addresses from the Administered Address Pool
  15.4.3 Configure the DHCP Server AS2 for Twinax Support
  15.4.4 Configure and Start the IBM Network Station
  15.4.5 Test Connectivity
  15.4.6 Summary
  15.5 Configuring Twinax Network Station with a Remote DHCP Server
  15.5.1 Scenario Overview
  15.5.2 Scenario Objectives
  15.5.3 Scenario Advantages
  15.5.4 Scenario Disadvantages
  15.5.5 Task Summary
  15.5.6 Configure the Local DHCP Configuration File on AS2
  15.5.7 Power on the IBM Network Station
  15.5.8 Configure and Start BOOTP/DHCP Relay Agent on Local AS/400 System (AS2)
  15.5.9 Change the DHCP Server Configuration for the Address Pool 10.1.1.x on AS1
  15.5.10 Configure the Twinax Subnet Address Pool on the Remote DHCP Server
  15.5.11 Start the IBM Network Station
  15.5.12 Summary
  15.6 Configuring Twinax IBM Network Station Using Transparent Subnetting
  15.6.1 Scenario Overview
  15.6.2 Scenario Objectives
  15.6.3 Scenario Advantages
  15.6.4 Scenario Disadvantages
  15.6.5 Task Summary
  15.6.6 Planning the IP Address Scheme
  15.6.7 Configure As2.mycompany.com
  15.6.8 Configure As5.mycompany.com
  15.6.9 Configure the DHCP Server on As1.mycompany.com
  15.6.10 Summary

Chapter 16. Migrating BOOTP Servers to DHCP
  16.1 Considerations
  16.2 Scenario 1: Migrating Existing BOOTP to a New DHCP Configuration
  16.2.1 Scenario Objectives
  16.2.2 Existing Environment
  16.2.3 Migrating BOOTP to a New DHCP Configuration
  16.2.4 Migrating BOOTP to an Existing DHCP Configuration
  16.2.5 Summary

Chapter 17. DHCP Problem Determination
  17.1 Performing Basic Troubleshooting
  17.1.1 Program Temporary Fixes (PTFs)
  17.2 Starting and Reading the DHCP Logging Utility
  17.2.1 Starting the DHCP Logging Utility
  17.2.2 Reading the DHCP Log
  17.2.3 Finding the Incoming DHCPDISCOVER Data in the Log
  17.2.4 Finding and Reading the DHCPOFFER Information in the Log
  17.2.5 Finding and Reading the DHCPREQUEST and DHCPACK Information
  17.3 Starting, Formatting, and Decoding an AS/400 Communication Trace
  17.3.1 Start the AS/400 Communication Trace
  17.3.2 Stopping the AS/400 Communication Trace
  17.3.3 Reading and Decoding the AS/400 Communications Trace Data
  17.4 Symptoms, Problems, and Resolutions
  17.5 DHCP Server Performance Considerations

Appendix A. Mail Concepts
  A.1 Basic Mail Configuration
  A.2 Mail Forwarding
  A.2.1 Implementing Mail Forwarding
  A.3 Processing Inbound Mail
  A.4 Processing Outbound Mail

Appendix B. Special Notices

Appendix C. Related Publications
  C.1 International Technical Support Organization Publications
  C.2 Redbooks on CD-ROMs
  C.3 Other Publications
  C.4 Web Resources

How To Get ITSO Redbooks
  How IBM Employees Can Get ITSO Redbooks
  How Customers Can Get ITSO Redbooks
  IBM Redbook Order Form

Index

ITSO Redbook Evaluation

Preface

This redbook describes the new Domain Name System (DNS) server and Dynamic Host Configuration Protocol (DHCP) server support that are included in OS/400 V4R2. The information in this redbook helps you install, tailor, configure, and troubleshoot the new DNS and DHCP support on the AS/400 system through examples that evolve from simple to more complex scenarios. It also contains examples that show the integration of the new DNS server support with mail and Internet firewall implementation on the AS/400 system. Scenarios are included to show the use of DHCP to automate the configuration of clients in a TCP/IP network including LAN and twinax-attached IBM Network Stations.
This book is designed to show the use of the AS/400 system implementation of DNS and DHCP through examples. It also references other publications that contain detailed information on DNS, DHCP, and IP addressing. The intended audience for this redbook includes the system or network administrator who plans, configures, and maintains TCP/IP AS/400 networks.

The Team That Wrote This Redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization Rochester Center.
Marcela Adan is a Senior International Technical Support Representative at the International Technical Support Organization, Rochester Center. She writes extensively and teaches IBM classes worldwide on all areas of AS/400 Internet technologies and system management. She has held several positions as field technical support representative, network administrator, developer, and consultant.
Andrew Grant is a communications specialist working for IBM Managed Operations Group in New Zealand.
He has 8 years of experience with IBM mid-range systems, communication, and PC connectivity. His main area of expertise is the design, implementation, and support of large, multi-platform networks, including host inter-connectivity and desktop-to-host configuration and troubleshooting over a variety of communication protocols.
Susan M. Goodrich is a Staff Software Analyst with IBM AS/400 Software Service and Support in Rochester, MN. She has 5 years of experience in the area of SNA and TCP/IP communications. Prior to this assignment, she worked as a Staff Systems Engineer in the IBM Marketing organization specializing in S/36, S/38, and AS/400 systems.
Masahiko Hamada is an I/T specialist in IBM Japan. He has 11 years of experience with IBM mid-range systems. His areas of expertise include OO application development, AS/400 connectivity to Microsoft Windows 95/NT, and Client Access/400. He developed ToolBox/400 used in Japanese environments. Currently, his focus is on AS/400 Internet technologies. He has written several technical documents and taught classes in the U.S., Europe, and Japan.
Guenter Ilmberger is an Advisory Technical Support Specialist with IBM Germany. He has 30 years of experience in data processing, including 25 years with IBM. His expertise is in all areas of AS/400 communication and systems management. He frequently conducts presentations at conferences and teaches several workshops on AS/400 communication and systems management topics.

Thanks to the following people for their invaluable contributions to this project:
Suehiro Sakai, Fant Steele - International Technical Support Organization, Rochester Center
Joseph Caldwell, John Corcoran, Gary Diehl, Scott Evans, Frank Gruber, Steve Gruber, Susan Hall, Joseph Miller, Francis Pflug - IBM Endicott Laboratory
Janice Glowacki, Kent Hofer, Mark McKelvey, A.J. Meyers, Marion Pitts, George Romano, Ray Romon, Daryl Spartz - IBM Rochester Laboratory
Peggy Warley - IBM Product Support Services
The editors of this redbook were: Lois Douglas, Scott Kalar, Jenifer Servais

Comments Welcome

Your comments are important to us! We want our redbooks to be as helpful as possible. Please send us your comments about this or other redbooks in one of the following ways:
• Fax the evaluation form found in "ITSO Redbook Evaluation" on page 457 to the fax number shown on the form.
• Use the electronic evaluation form found on the Redbooks Web sites: for Internet users, http://www.redbooks.ibm.com; for IBM Intranet users, http://w3.itso.ibm.com
• Send us a note at the following address: redbook@us.ibm.com

Part 1. AS/400 DNS Support

The Domain Name System (DNS) handles the mapping of human-friendly names to the internet addresses of computers. DNS is also the mechanism used in the Internet to advertise and access a variety of information about hosts. It is used by all internetworking software, including mail, FTP, TELNET, and Internet Firewall. Part 1 of this book provides an overview of DNS basic concepts and explains the DNS implementation in the AS/400 system through case studies.

Chapter 1. Domain Name System Concepts and Overview

This chapter provides an overview of Domain Name System (DNS) concepts and components. Our intention is to summarize the concepts you need to understand to implement DNS on the AS/400 system. We refer many times throughout this redbook to DNS and BIND by Albitz & Liu.
This book is a MUST for DNS administrators. For more information on the AS/400 implementation of DNS server support, refer to TCP/IP Configuration and Reference, SC41-5420-01.

1.1 Overview

The Domain Name System is a distributed database. This allows local control of the segments of the entire database, and data in each segment are also available across the entire network through a client/server scheme. The structure of the DNS database is similar to the structure of a file system. The whole database or file system is pictured as an inverted tree with the root at the top. Each node in the tree represents a partition of the database. Each domain or directory can be further divided into partitions, called subdomains (such as the file system's subdirectories).
The domain name space is "tree" structured. The top-level domains divide the Internet domain name space organizationally. Examples of top-level domains are:
• com: Commercial organizations, such as IBM (ibm.com), CNN (cnn.com), and mycompany (mycompany.com). ibm is a subdomain of the top-level domain com.
• edu: Educational organizations, such as the University of Minnesota (umn.edu) and New York University (nyu.edu).
• gov: Government organizations, such as the Federal Bureau of Investigation (fbi.gov) and the National Science Foundation (nsf.gov).
The tree is limited to 127 levels; this is a limit on the depth of subdomains, although there is no limit on the number of branches at each node. Each node in the tree is labeled with a name (see Figure 1). The root has a null label (" "). The full domain name of any node in the tree is the sequence of names on the path from the node up to the root, with a dot between node names. For example, in Figure 1, if you follow the arrows from the bottom label to the top, from the host www to the root label, you can form the full domain name for that host: www.as400.ibm.com.
Figure 1. DNS Name Space
In DNS, each domain can be administered by a different organization. Each organization can then break its domain into a number of subdomains and dole out the responsibility for those subdomains to other organizations. This is because DNS uses a distributed database where you can manage your own domain (company.com), or parts of the name space (subdomains) can be delegated to other servers (department.company.com). In Chapter 5, "Growing Your Domain: Creating Subdomains" on page 83, we discuss delegating a subdomain to another DNS server.
The DNS servers responsible for the top-level Internet domains, such as com, are also called Internet root servers; they manage information about the top-level domains. For example, the Internet's Network Information Center runs the edu domain, but assigns U.C. Berkeley authority over the berkeley.edu subdomain.
Domains can contain both hosts and other domains (their subdomains). For example, the ibm.com domain contains hosts such as www.ibm.com, but it also contains subdomains such as as400.ibm.com.
Domain names are used as indexes into the DNS database. Each host on a network has a domain name with a DNS server that points to information about the host. This information may include an IP address, information about mail routing, and so on.
Why all this complicated structure? It is to solve the problems that a host table has. For example, making names hierarchical eliminates the problem of name collisions. Domains are given unique domain names, so organizations are free to choose names within their domains.
Whatever name they choose, it does not conflict with other domain names, since it has its own unique domain name. For example, we can have several hosts named www, such as www.ibm.com and www.yahoo.com, because they are in different domains managed by different organizations. See Figure 2.
Figure 2. Hosts With Same Names in Different Domains
We can also have hosts with the same host name within the same domain, such as www.ibm.com and www.as400.ibm.com, because they belong to different subdomains.

1.2 Domain versus Zone of Authority

The concept of domains versus zones of authority can be a confusing one. We try to explain it in this section.
One of the main goals of the design of the Domain Name System is decentralization. This is achieved through delegation. The central DNS administrator administering the company's domain can divide it into subdomains. Each subdomain can be delegated to other administrators. This means that the administrator a subdomain is delegated to becomes responsible for maintaining it.
A domain is a subset or subtree of the name space tree. A subdomain is a subset of the domain. Figure 3 on page 7 shows the domain mycompany.com as a subset of the .com name space. Under mycompany.com, there are other subdomains such as endicott.mycompany.com, rochester.mycompany.com, and otherdomain.mycompany.com.
Name servers are programs running on a system, such as the AS/400 system, with DNS support. In Figure 3 on page 7, as1.mycompany.com, rst.rochester.mycompany.com, and otherhost.otherdomain.mycompany.com are hosts running name server programs; they are called Domain Name System (DNS) servers or simply name servers.
Name servers have information about some part of the domain name space called a zone or zone of authority. Both domains and zones are subsets of the domain name space. A zone contains the host information and data that the domain contains, excluding the information that is delegated somewhere else. If a subdomain of a domain is not delegated, the zone contains host information and data for the subdomain. Name servers have complete host information and data for a specific zone. Name servers are said to be authoritative for the zone for which they have this complete host information and data.
Refer to Figure 3. The mycompany.com domain is divided into the subdomains endicott.mycompany.com, rochester.mycompany.com, and otherdomain.mycompany.com. The zone mycompany.com contains the hosts as1.mycompany.com, as2.mycompany.com, as5.mycompany.com, and NTserver1.mycompany.com. It also contains the host information and data in the subdomain endicott.mycompany.com: host1.endicott.mycompany.com and host2.endicott.mycompany.com. The subdomain endicott.mycompany.com has not been delegated, and its host information and data remain in the mycompany.com zone. The administration of endicott.mycompany.com is the responsibility of the mycompany.com administrator. AS1.mycompany.com is the name server that has complete host information and data for the mycompany.com zone of authority. The zone mycompany.com does not contain information in the subdomains that have been delegated.
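As the next paragraphs describe, the rochester and otherdomain subdomains are delegated to their own name servers. For illustration only (this sketch is not part of the scenarios in this book, and the IP addresses are invented), such a delegation appears in the parent domain's forward mapping file as NS records naming the subdomain's name servers, plus "glue" A records that give the addresses of those servers:

    ; Illustrative delegation entries in the parent (mycompany.com) primary domain file
    rochester.mycompany.com.              IN  NS  rst.rochester.mycompany.com.
    rst.rochester.mycompany.com.          IN  A   10.5.70.1     ; glue record (example address)
    otherdomain.mycompany.com.            IN  NS  otherhost.otherdomain.mycompany.com.
    otherhost.otherdomain.mycompany.com.  IN  A   10.5.71.1     ; glue record (example address)

Delegating a subdomain on the AS/400 DNS server is shown step by step in Chapter 5.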
rochester.mycompany.com is a subdomain of mycompany.com, and its administration has been delegated. The zone rochester.mycompany.com includes the host information and data in the subdomain rochester.mycompany.com: rst.rochester.mycompany.com, host1.rochester.mycompany.com, and host2.rochester.mycompany.com. rst.rochester.mycompany.com is the DNS server that has complete host information and data for the rochester.mycompany.com zone.
otherdomain.mycompany.com is a subdomain of mycompany.com, and its administration has been delegated. The zone otherdomain.mycompany.com includes the host information and data in the subdomain otherdomain.mycompany.com: otherhost.otherdomain.mycompany.com, otherprinter.otherdomain.mycompany.com, and otherserver.otherdomain.mycompany.com. otherhost.otherdomain.mycompany.com is the DNS server that has complete host information and data for the otherdomain.mycompany.com zone.
Section 5.5, "Method 2: Adding a Subdomain and Delegating Authority" on page 96 discusses a scenario in which a subdomain is delegated to another DNS server.
Figure 3. Domain, Subdomain, Delegation, and Zone of Authority

1.3 Name Resolution

Programs called name servers comprise the server half of DNS's client/server mechanism. Name servers contain information about some segment of the database and make it available to clients, called resolvers. The Domain Name System has two major components:
• NAME SERVERS are programs that hold information about the domain name space. A name server may cache information about any part of the domain tree but, in general, a particular name server has complete information about a subset of the domain space and pointers to other name servers that can be used to lead to information from any part of the domain tree. The part of the domain space the name server has complete information for is called a zone. It is said that the name server is authoritative for that zone. Name servers can be authoritative for multiple zones.
• RESOLVERS are programs that extract information from name servers in response to client requests. Resolvers must be able to access at least one name server and use that name server's information to answer a query. A resolver is typically a system routine that is directly accessible to user programs. No protocol is necessary between the resolver and the user program.
Mapping names to addresses, a process called domain name resolution, is provided by independent, cooperating systems called servers. A name server is a server program answering requests from clients called name resolvers. Each name resolver is configured with a name server to use (and possibly a list of alternatives to contact if the primary is unavailable).
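On an AS/400 system, for example, the local resolver is told which name servers to use with the Change TCP/IP Domain (CHGTCPDMN) command. The following is a sketch for illustration only; the host name and addresses are invented, and you should check the command prompts on your system for the exact parameters:

    CHGTCPDMN HOSTNAME('client1') DMNNAME('mycompany.com') +
              HOSTSCHPTY(*LOCAL) INTNETADR('10.5.69.222' '10.5.69.223')

Here the first address is the name server normally queried and the second is the alternative; HOSTSCHPTY(*LOCAL) tells the resolver to search the local host table before querying the name servers.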
Figure 4 shows schematically how a program uses a name resolver to convert a host name to an IP address on the Internet. A user provides a host name, and the user program uses a library routine, called a resolver, to communicate with a name server that resolves the host name to an IP address and returns it to the resolver, which returns it to the main program. The name server may obtain the answer from its name cache (if it has tried to resolve the name before), its own database, or another name server.
In Figure 4, the resolver sends a query for www.as400.ibm.com to its DNS server (in the figure, labeled primary name server). If the query is for information outside the name server's zone of authority (it does not know the answer), the name server sends another query to the Internet root name server, which responds back, "I don't know, but query this next DNS server (the com DNS server)." The query is iterated to various DNS servers down the com branch of the Internet DNS name space until the DNS server is found that is authoritative for (is responsible for) the as400.ibm.com domain. This last DNS server has the answer and sends the response back to the original DNS server the resolver asked, which passes the response back to the resolver.
Figure 4. Name Resolution Example

Recursive versus Iterative Queries
There are two types of DNS queries: recursive and iterative. Figure 4 shows an example of one recursive query and several iterative queries. The first query from the resolver to the primary name server is a recursive query. A recursive query requests that, if the name server does not know the answer, it query other name servers until it finds the answer and then send the answer back to the resolver. Notice in Figure 4 that the primary name server did a lot of work: it kept querying other name servers on behalf of the resolver until it could supply the answer.
A DNS server is configured either to accept recursive queries or to accept only iterative queries. The primary name server in Figure 4 was configured to allow recursive queries. The other name servers queried in Figure 4 (root name server, com name server, ibm.com name server) were not configured to allow recursive queries. When the primary name server queried the root name server, the query was an iterative query. This means the root name server responded to the query with the best information it had, which was, "I don't know, but check the next DNS server: the com name server." The distinction between recursive and iterative queries only comes into play when the name server queried does not know the answer to the query. From the example in Figure 4, we cannot tell whether the as400.ibm.com name server is configured to allow recursive queries, because this name server held the answer for the primary name server and responded with the answer.

1.4 Types of Name Servers

Primary name server: This server is the server on which the hosts in the zone of authority are configured. It is the server that the DNS administrator configures and maintains. When this server gives responses to queries from its primary domain files, the responses are called authoritative. A name server for a primary domain reads the primary domain configuration information directly from files configured by the DNS administrator.
Secondary name server: This server has the same information as the primary name server. However, instead of getting its information directly from the DNS administrator configuring it, it gets its information from another name server through zone transfers over the network. The information that a secondary name server obtains from a zone transfer is read into cache, as is data stored from queries. A zone transfer is a TCP/IP transfer of domain files from another DNS server (called a master name server).
This is done automatically when the secondary name server starts and also when the secondary name server detects that its domain files are downlevel from the master name server's domain files. The zone transfer is initiated by the secondary name server. The zone transfer cannot take place if the master name server is not active.
A secondary name server is used for two reasons: spreading the DNS query workload over more than one server, and providing a backup in case the primary name server stops responding. When a client is configured with more than one DNS server and the first name server (the primary) does not respond, the client can query the second name server (the secondary). When the secondary name server gives out a response to a query, the response is also called authoritative. In other words, an answer from a secondary name server is considered to be just as "good" as if the answer came from a primary name server.
Master name server: A master name server is the name server that a secondary name server gets its zone transfer from. A master name server can be either a primary name server or another secondary name server.
Note: A DNS server can be a primary name server for one or more domains as well as a secondary name server for one or more domains. It can be a name server for both primary and secondary domains.
Caching-only name server: A name server that does not have authority over any zone is called a caching-only name server. It gets all of its information by querying. A caching-only name server's responses are always non-authoritative.
Authoritative name server: A server that is considered to be authoritative for a domain is either the primary server for that domain or a secondary server for that domain. Chapter 3, "Implementing Primary and Secondary DNS Servers" on page 25 shows a scenario that configures a primary and a secondary DNS server. If another name server or a client queries either the primary or the secondary name server for information that they are authoritative for, the response is considered to be authoritative.
Can a name server that is not authoritative over a domain give a response to a client about that domain and have that response considered an authoritative response? The answer is yes. If the non-authoritative server does not know the answer, queries an authoritative name server on behalf of the client, and then returns the answer to the client, this response is considered to be authoritative. The non-authoritative name server caches this information. If a second client requests this same information from the non-authoritative name server (and this information is still in its cache), the name server gives the response to the client, but now this same information is labeled non-authoritative. Why? Because the information in the response this second time came out of the name server's cache. Another way of saying this is: a non-authoritative response at some point came out of a name server's cache.
Parent and child name servers: The concept of parent and child domains is equivalent to the concept of domain and subdomain: once your domain grows to a certain size, you may need to distribute management by delegating authority over part of your domain to one or multiple subdomains. The upper-level domain is the parent and its subdomains are the children. The name server authoritative for the parent domain is the parent name server, and the one authoritative for the subdomain is the child name server.
For example, in Figure 3 on page 7, OTHERDOMAIN is a subdomain of mycompany.com. If a DNS server, AS1, is configured to be responsible for the mycompany.com zone of authority and the authority for the zone OTHERDOMAIN.mycompany.com is delegated to another DNS server, OTHERHOST, then AS1 is considered to be the parent name server and OTHERHOST is considered to be the child name server. A scenario in which authority is delegated from a parent to a child name server is covered in Section 5.5, "Method 2: Adding a Subdomain and Delegating Authority" on page 96.
Root name servers: Internet root name servers know where the name servers authoritative for the top-level domains are, and most of the Internet root name servers are authoritative for the top-level organizational domains (.com, .edu, .net, and so on). The top-level domain servers have information about the second-level domain a given domain is in. A company can implement internal root name servers. In this case, given a query for a company's subdomain, the internal root name server can provide information for the second-level subdomain the queried subdomain is in.
A root name server is configured in a lower-level name server to help it navigate the name space tree top down when it cannot answer a query with authoritative data or data in its cache. If we use the example discussed in the previous section, the DNS server OTHERHOST is authoritative for the zone OTHERDOMAIN.mycompany.com shown in Figure 3 on page 7. The AS1 name server is authoritative for the mycompany.com zone of authority AND is configured as internal root for the whole company's name space. Internal roots can run on host systems all by themselves, or a given host can perform double duty as an internal root and as an authoritative name server for other zones. If OTHERHOST cannot answer a query, it asks its root name server, which is AS1, the DNS server at the top of the INTERNAL name space tree. We stress INTERNAL because in this example these DNS servers are only part of an internal network. We are assuming that the network does not have Internet access; thus, the Internet com node is not part of this DNS name space tree. Therefore, the DNS server AS1 in domain mycompany.com is at the top of the tree.
A root name server can be thought of as the name server at the top of the DNS name space tree. Just remember that the DNS name space tree may be different, depending on whether the network is an internal network or the network includes the Internet DNS name space. An example of using Internet root name servers was covered in Section 1.3, "Name Resolution" on page 7. In this case, the top of the DNS name space tree really was the top of the Internet name space tree, and the root name servers used were the Internet root name servers.
Forwarders: A DNS server can be configured to send the queries it does not know the answer to, to a DNS server called a forwarder name server. Whereas going to a root name server for help in answering a query can be thought of as going to the top of the DNS name space tree, going to a forwarder can be thought of as going sideways in the DNS name space tree for help. The DNS administrator configures which DNS server is the forwarder. Usually, several DNS servers are configured to have the same forwarder. The forwarder name server itself is configured with the root name servers (for example, the Internet root name servers). If the forwarders cannot answer the query, they query the root name servers, get the answer, and cache it. This way, a forwarder name server can build up a large cache of information. As the cache increases, chances are that the forwarder will receive a query for which it has a cached answer. This, in turn, reduces the number of times a root name server needs to be queried. Using a forwarder name server is an opportunity to build a large cache of information on one (or just a few) name servers. In Chapter 6, "Split DNS: Hiding Your Internal DNS Behind a Firewall" on page 125, we configure an internal DNS server to forward unresolved queries to the company's firewall DNS server.
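For illustration only (this sketch is not taken from a scenario in this book, and the address is invented), forwarding is expressed in the BIND-style boot file that the DNS server reads with entries such as:

    ; Illustrative boot file entries for a name server that forwards unresolved queries
    forwarders 192.168.1.1          ; send unresolved queries to this forwarder (for example, a firewall DNS)
    options forward-only            ; rely on the forwarders only; never contact other name servers directly

Without the forward-only option, the server would still try to resolve a query itself (starting at the root name servers) if the forwarders did not answer.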
If the forwarders cannot answer the query, they query the root name servers, get the answer, and cache it. This way, a forwarder name server can build up a large cache of information. As the cache increases, chances are that the forwarder will receive a query for which it already has a cached answer. This, in turn, reduces the number of times a root name server needs to be queried. Using a forwarder name server is an opportunity to build a large cache of information on one (or just a few) name server. In Chapter 6, "Split DNS: Hiding Your Internal DNS Behind a Firewall" on page 125, we configure an internal DNS server to forward unresolved queries to the company's firewall DNS server.

1.5 Split DNS Concept for Firewalls
When constructing a firewall, we use the Domain Name System in a particular way so that a company's internal users can locate the IP addresses of all systems (internal and public) while users on the Internet can only locate the IP addresses of the company's public systems. This is part of our effort to hide the company's internal network information from the Internet. It is not necessary to expose a company's internal network to the Internet. A technique called Split DNS may be used to expose only the company's public machines to the Internet. Split DNS uses two DNS servers: an internal DNS for secure and private names, and a firewall DNS for the company's "public" names. The internal DNS server manages the company's internal IP data. The firewall DNS is the only company name server containing information visible from the Internet. Only some of the company's hosts need to be known by the Internet: the e-mail relay, the WWW public server, and the firewall name server itself. The Internet Service Provider (ISP) may provide DNS support for the public hosts in addition to or instead of the firewall DNS. The internal name server forwards queries for information it cannot resolve to the firewall DNS server. An AS/400 system at release V4R2M0 can now be a company's internal DNS. Once the AS/400 name server is configured, it contains files of all the company's internal hosts. These files map host names to IP addresses (or vice versa). It does this for a particular domain that it is responsible for (called a zone of authority). For example, the IP address of the host with host name client1.private.mycompany.com is 192.168.67.3. The internal name server lets all of the company's internal hosts locate each other by name in the TCP/IP network. For protection from the Internet, a company also can use a firewall DNS server. The firewall DNS server's zone of authority covers the company's hosts that are public. These are the hosts that the company wants to make visible on the Internet. The split DNS concept is used in the configuration scenario discussed in Chapter 6, "Split DNS: Hiding Your Internal DNS Behind a Firewall" on page 125.

1.6 Types of Files
Primary domain files: These files are the files configured on the primary name server. On the AS/400 system, they are contained in the IFS directory: /QIBM/UserData/OS400/DNS. Primary domain files have a .DB extension.
Secondary domain backup files: These files contain information that is acquired through zone transfers from the primary name server. They exist on the secondary name server. A secondary name server loads these files and uses them to answer queries provided the zone transfer was successful.
Forward mapping files: Forward mapping primary domain files reside on the primary name server.
They contain all data for mapping host names to IP addresses in a zone. A DNS server is authoritative for a certain part of the DNS name space tree. This part of the tree is called a zone or the DNS server’s zone of authority. Domain Name System Concepts and Overview 13 Reverse mapping files: The reverse mapping primary domain files reside on the primary DNS server. They contain the information for mapping IP addresses to host names in a zone. They are also called the in-addr.arpa files. An example of a reverse mapping file is the 69.5.10.in-addr.arpa file. This is the file a DNS server uses if a client resolver queries with an IP address of 10.5.69.222 and asks the DNS server to supply the host name belonging to that IP address. The 69.5.10.in-addr.arpa file also resides in the AS/400’s IFS directory /QIBM/UserData/OS400/DNS with a file name of 69.5.10.in-addr.arpa.DB. Boot file: The boot file is the file the DNS server first reads when it starts. It contains information such as: • The type of name server • The zones that this name server is authoritative for • Where (file location) the name server should get its information The boot file is also located in the /QIBM/UserData/OS400/DNS directory. Cache file: The cache file contains information about the root name servers. This is where the DNS server should go when it cannot resolve a query itself. This file is located in the /QIBM/UserData/OS400/DNS directory. In later chapters, we say that a name server "caches" information it receives from another name server. This is a way a name server "remembers" information so if it receives a query from a client for the same host, it can respond with an answer from its cache and not query the authoritative name server again. It is important to understand that this cached information is not contained in the /QIBM/UserData/OS400/DNS/CACHE file. The CACHE file contains information about root servers. Local file: The local file contains the PTR record for the local loopback interface. The loopback interface, also known as localhost, has the IP address of 127.0.0.1. Hosts use the 127.0.0.1 IP address to direct TCP/IP traffic to themselves. Every forward mapping primary domain file should be configured with the host localhost with an IP address of 127.0.0.1. TIP The presence of the Boot file in the IFS directory /QIBM/UserData/OS400/DNS determines whether or not the Operations Navigator DNS configuration presents the user with the DNS Wizard windows. If the AS/400 DNS has never been configured, the Boot file does not exist and the first time a user clicks on DNS configuration within Operations Navigator, the Wizard windows are presented. Wizard creates the Boot file. NOTE 14 AS/400 TCP/IP DNS and DHCP Support 1.7 Types of Records The information contained in forward and reverse primary domain files are organized into records called resource records. There are several types of resource records and we try to explain the most common ones in this section. The following list is not a complete list. For more details on resource records, see the second edition of DNS and BIND by Albitz & Liu. • A record: An A record is a record that maps a host name to an IP address. There is one A record for every host configured in the DNS server. Consequently, a query that supplies the host name and asks for the IP address is sometimes called an A record query. A records are contained in the forward mapping primary domain file. This type of query is also called a forward mapping query. 
• PTR record: A PTR record is a record that maps an IP address to a host name. There is usually one PTR record for every host configured in the DNS server. These records are located in the reverse mapping primary domain files, which are also called the in-addr.arpa files. A query supplying the IP address and asking for the host name is sometimes called a reverse mapping query, a reverse lookup, or an in-addr.arpa query. • SOA record: The first record in the forward and reverse mapping primary domain files is the SOA record. The SOA record marks the zone of authority in the domain name space. It contains the domain name, the name of the DNS server that is primary for this zone of authority, and the e-mail address of the zone’s technical contact. The SOA record also contains the file’s serial number. The serial number can be thought of as the change level of the data in this zone. In other words, if a DNS configuration change is made to this zone, the serial number must be incremented (Operations Navigator does this automatically). Also, the SOA record contains refresh timers, retry rates, and expire timers, all having to do with secondary name servers. These terms are further explained in Section 3.2.6.4, “Controlling Zone Transfer Frequency” on page 60. Lastly, the SOA record contains the default TTL or time to live. This number controls how long another name server can cache the information supplied out of this name server’s zone data. There can be a TTL specified on each resource record, which overrides the TTL specified in the SOA record. • CNAME record: The CNAME record defines the canonical name of an alias. It is used to specify an alias name for a host. • MX record: The MX record defines a mail exchanger host for a particular domain. This record is used by SMTP to send mail. • NS record: The NS record defines a name server to this name server, either itself or another name server. The other name server can be a name server authoritative over another domain. Or the other name server can be a secondary name server to this same zone of authority. It is the NS records that allow each name server shown on the right side of Figure 4 on page 8 to tell the primary name server where to query next when it is searching for the answer to the resolver’s query. NS records allow a DNS server to find other DNS servers authoritative for other zones. Domain Name System Concepts and Overview 15 1.8 Round Robin and Address Sorting The concept of round robin and address sorting has to do with how a DNS server responds when it receives an A record query for a host that is multi-homed and has two IP addresses. A multi-homed host is attached to at least two networks. The DNS server always includes both IP addresses in its response, but which IP address is given first depends on the location of the client making the query: 1. If the client is physically located in one of the networks that the host it is querying for is located in, the DNS server lists the IP address of that network first in its response. Since clients generally try the IP address that is listed first in the response, this address sorting by the DNS server is beneficial because using the host’s closer IP address provides better performance. 2. If the client is physically located on a network remote to either network that the host it is querying for is located in, the DNS server alternates which IP address it lists first in the response. 
The next time the name server is queried for the same host from a client that is remote to the host, the other IP address is listed first in the response. This IP address rotation in the DNS server responses is called round robin. A detailed example of round robin and address sorting is discussed in Section 5.8, “Round Robin/Address Sorting” on page 121. 1.9 For More Information When a DNS administrator is learning about DNS and how to configure the DNS server on the AS/400 system, we also recommend referring to several other resources on DNS that complement this redbook: • TCP/IP Configuration and Reference, SC41-5420-01 • Operations Navigator online help • DNS and BIND by Albitz & Liu, Second Edition, ISBN 1-56592-236-0 • RFC 1034 (Domain names concepts and facilities), RFC 1035 (Domain names implementations and specifications), and RFC 1912 (Common DNS Operational and Configuration Errors). • AS/400 Internet Security: IBM Firewall for AS/400 , SG24-2162. • comp.protocols.dns.bind newsgroup, which can be located on the Internet by entering the URL www.dns.net/dnsrd and clicking on the Newsgroup link. Or alternatively, you can locate the same newsgroup by issuing a Find from the URL www.dejanews.com. 16 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 17 Chapter 2. AS/400 DNS Server Implementation This chapter describes the implementation of the AS/400 DNS server. 2.1 DNS Software Prerequisites Native DNS support on the AS/400 system in V4R2 requires the following products: • 5769-SS1 OS/400 V4R2 option 31 - Domain Name System • 5763-XD1 V3R1M3 - Client Access for Windows 95/NT The AS/400 DNS implementation is a port of Berkeley Internet Name Domain (BIND) 4.9.3 2.2 DNS Installation Installing DNS support on your AS/400 V4R2 system involves installing 5769-SS1 OS/400 V4R2 option 31, Domain Name System, and Client Access for Windows 95/NT (5763-XD1 V3R1M3) in your administrator’s workstation. Use Go LICPGM option 11, Install licensed programs to install the DNS OS/400 option. The installation program performs the following tasks: • Installs the product library QDNS, which includes the product’s objects (programs, message files, job descriptions, and so on). • Creates two IFS subdirectories: /QIBM/ProdData/OS400/DNS and /QIBM/UserData/OS400/DNS. • Creates the files, TEMPLATE and ROOT.FILE, in the /QIBM/ProdData/OS400/DNS subdirectory. TEMPLATE is used as a template to create all the DNS configuration files (BOOT, CACHE, and configuration files). ROOT.FILE holds information on root name servers needed to initialize the cache of Internet domain name servers. • Creates the ATTRIBUTE file and TMP directory under the /QIBM/UserData/OS400/DNS subdirectory. After the installation, you can proceed with the DNS server configuration using Operations Navigator. Figure 5 provides an overview of the AS/400 DNS server installation and configuration. 18 AS/400 TCP/IP DNS and DHCP Support Figure 5. AS/400 DNS Support Installation and Configuration Overview 2.3 DNS Server Jobs The DNS server jobs run in the QSYSWRK subsystem and they are: • QTOBDNS: This is the DNS server job. It starts with the job description QDNS/QTOBJOBD. DNS uses well-known port 53. DNS server messages are directed to the QTOBDNS job log. Use the Work with Spooled File (WRKSPLF) command for User QTCP to browse the DNS server job log. • QTOBXMIT: This is the zone transfer job that runs on the AS/400 system acting as the primary master name server for a specific domain. 
• QTOBXFER: This is the zone transfer job that runs on the AS/400 system acting as the secondary name server for a specific domain.
Note: The BOOT file contains information that determines whether the DNS server should start as a primary or secondary name server for a specific domain. Remember, a single DNS server can be a primary and a secondary for one or more primary and secondary domains.

2.4 DNS Configuration Files
All the DNS configuration files reside in the IFS directory /QIBM/UserData/OS400/DNS and they are:
• Domain or forward mapping file (Domain_Name.DB): This file maps host names to IP addresses. The entries in this file are called resource records. This file has the same name as the domain with the .db extension.
• Reverse mapping files (IP_address.in-addr.arpa.DB): These files map addresses back to host names. There is one file for each subnet address in the network where the domain's hosts reside.
• Loopback address file (0.0.127.in-addr.arpa.db): This covers the loopback network used by the hosts to direct traffic to themselves.
• BOOT file (BOOT): This is the DNS server start-up file that ties all the DNS configuration files together. Figure 6 shows the relationship between the BOOT file and the *.db files.
Figure 6. DNS Configuration Files Overview
(The figure shows a sample BOOT file and the domain files it points to. The BOOT file contains directives such as:
directory /QIBM/UserData/OS400/DNS
forwarders 10.5.69.208
options forward-only
limit transfers-in 10
limit transfers-per-ns 2
primary japan.private.mycompany.com japan.private.mycompany.com.DB
primary 69.5.10.in-addr.arpa 69.5.10.in-addr.arpa.DB
primary 0.0.127.in-addr.arpa 0.0.127.in-addr.arpa.DB
primary 62.5.10.in-addr.arpa 62.5.10.in-addr.arpa.DB
cache . CACHE
Each primary directive points to a domain file; the figure also shows the SOA, NS, MX, A, and PTR records in the japan.private.mycompany.com.DB, 69.5.10.in-addr.arpa.DB, 62.5.10.in-addr.arpa.DB, and 0.0.127.in-addr.arpa.DB files that these directives reference.)

2.4.1 Logging / Service Files
The following files are used to log DNS server activity and for problem determination:
• QUERYLOG: The DNS server logs every query in this file that it receives if it is configured to do so. To view the contents of the log, find it through Operations Navigator. The file name is QUERYLOG in the directory path FileSystems\Root\QIBM\UserData\OS400\DNS for your AS/400 system. Carefully consider whether you need to log all queries and for how long. There is no limit to the size of the log file. Once you turn it on, it remains on until you disable logging and re-boot the DNS server. Figure 7 shows how to specify that you want the DNS server to log all the queries it receives in the QUERYLOG file.
Figure 7. Configuring DNS Server Logging - QUERYLOG
• STATISTICS: Logs DNS server statistics. This summarizes the number of query hits the server received and the number of output packets it sent since the last time the server re-booted or reloaded its database. Delete this file when it becomes too large and you need to scroll down several times to find the information you are looking for. If you need to delete the file, you can find it through Operations Navigator. The file name is STATISTICS in the directory path FileSystems\Root\QIBM\UserData\OS400\DNS for your AS/400 system. Figure 8 shows how to display the DNS server statistics.
Figure 8. Displaying DNS Server Statistics
• DUMPDB: This file contains a dump of the DNS database for this server. You can use this database dump as a debugging tool to determine whether the DNS server is resolving IP addresses to host names correctly. You can match the contents of the database dump to the contents of a particular host's property pages. The database dump includes the DNS server's authoritative data and cache data as well as information about its root servers. Figure 9 shows how to display the dump of the DNS server database. Monitor the size of this file to prevent it from growing too large.
Figure 9. Displaying DNS Server Database
• RUNDEBUG: This file logs any debugging information. You can use Operations Navigator to find this file in FileSystems\Root\QIBM\UserData\OS400\DNS for your AS/400 system. You must re-boot the server to have your changes take effect. Figure 10 shows how to specify the debug level.
Figure 10. Specifying Debug Level
A Debug level of zero means that no debug information is logged. Debug level 1 through 11 means logging will occur. Level 3 or greater will result in a lot of data. If you specify the Debug level other than zero, the system continuously appends information to RUNDEBUG until you re-boot the server again. Monitor the RUNDEBUG file often enough to ensure that it does not grow too large for your needs. Information continually appends to this file until you delete the file.
• ATTRIBUTES: This file contains the DNS server version, debug level, and autostart attribute.
• PID: This file contains a process ID and it is used for DNS to send signals for Dump Database, Dump Statistics, and Update Server.
Figure 11 provides an overview of the DNS server jobs, files, and logs.
Figure 11. DNS Server Jobs, Files, and Logs
(The figure shows the QTOBDNS and zone transfer jobs in a Work with Active Jobs display in subsystem QSYSWRK, the BOOT, CACHE, domain-name.DB, *.in-addr.arpa.DB, and QUERYLOG files they use, and a zone transfer of mycompany.com.DB from the primary DNS server at 10.5.69.222.)
1. Start the DNS server.
2. The boot file provides start-up information: the location of configuration files, the server role (primary and/or secondary for specific domains), the CACHE file with root name server data, the IP address of the primary master server to transfer zone data from if acting as a secondary name server, forwarders information, and so on.
3. The DNS and zone transfer jobs start.
4. The name server is ready to answer queries.
5. DNS queries are logged in the QUERYLOG file if logging is turned on.

2.5 DNS Server User Interface
This section describes the user interface available for the AS/400 DNS server.

2.5.1 DNS Server Configuration through Operations Navigator
The configuration of the AS/400 DNS server is through Operations Navigator. Operations Navigator provides the one and only configuration interface for the DNS server. The Operations Navigator DNS Configuration Wizard provides a simple process for quickly getting an initial DNS server up and running. To start the DNS server configuration from Operations Navigator, select your AS/400 system name->Network->Servers->OS400; the window in Figure 12 is shown.
Figure 12. DNS Configuration Using Operations Navigator
To use Operations Navigator, you need to install Client Access/400 for Windows 95/NT V3R1M3 on your administrator's PC. Host servers must be started on your AS/400 system. Use the Start Host Server (STRHOSTSVR) command to start them.

2.5.2 Change DNS Attributes Command (CHGDNSA)
Use the Change DNS Attributes (CHGDNSA) command to set the AUTOSTART attribute, which determines whether or not the DNS server starts automatically when TCP/IP is started using the STRTCP command. This attribute is ignored by the STRTCPSVR command. STRTCPSVR *DNS will start the DNS server regardless of the value of the AUTOSTART attribute. This attribute can also be set from the Operations Navigator interface. The CHGDNSA command also allows you to set the debug level, which can likewise be specified through Operations Navigator.

2.5.3 Start TCP Server *DNS
Use the STRTCPSVR SERVER(*DNS) command to start the DNS server and the ENDTCPSVR SERVER(*DNS) command to stop it. You can also perform this function through Operations Navigator.

2.6 NSLOOKUP
The AS/400 name server lookup (nslookup) program queries domain name servers in interactive mode. It allows you to query name servers for information about various hosts and domains, or to display a list of hosts in the domain.
The syntax of the nslookup program for the AS/400 name server is: call pgm(qdns/qtoblkup) 24 AS/400 TCP/IP DNS and DHCP Support Refer to the TCP/IP Configuration and Reference, SC41-5420-01, for more information on nslookup. 2.7 Host Table Migration Program You can migrate AS/400 host name table entries to files that the Operations Navigator DNS configuration can maintain. The migration is a two-step process: • First, the program QDNS/QTOBH2N must convert the AS/400 host table entries you specify to DNS formatted files. • Second, you must convert each of the DNS formatted files created by the QDNS/QTOBH2N program to a format compatible with the Operations Navigator DNS configuration. The Operations Navigator Import Domain Data function does this conversion. Refer to TCP/IP Configuration and Reference, SC41-5420-01, for more information and usage examples of the host table migration program. 2.8 DNS Server Backup and Recovery Considerations Plan to back up the DNS server configuration files on a regular basis or every time the DNS administrator updates the DNS server configuration. • Use the SAV command to back up the DNS configuration files in the /QIBM/UserData/OS400/DNS IFS directory. The files in this directory are customer created DNS configuration files. These files must be backed up frequently as part of your regular backup plan. These files include: • BOOT • Primary domain files, both forward and reverse mapping. Be sure to include the 0.0.127.in-addr.arpa.DB reverse mapping file created by the Wizard. • CACHE (list of root servers) The files in /QIBM/UserData/OS400/DNS IFS subdirectory that should not be backed up and restored are DUMPDB, STATISTICS, RUNDBG, QUERYLOG, and any files in the TMP subdirectory. These files should be deleted when you no longer want them or they are too large. Backing up and restoring PID is probably of no use either unless the SAME server job is running before and after restore. When the Operations Navigator DNS server configuration creates a file in the /QIBM/UserData/OS400/DNS directory, the file is created with the Owner value set to the AS/400 user profile that you used to start the Client Access/400 connection. When this user profile is deleted with the parameter Owned object value *DLT, objects owned by the user profile are deleted also. In this case, any IFS DNS configuration files owned by this user profile are also deleted. To prevent accidentally deleting DNS server files, change the ownership of the files to a system type user profile such as QTCP. Tip © Copyright IBM Corp. 1998 25 Chapter 3. Implementing Primary and Secondary DNS Servers This chapter shows you how to get started implementing a DNS server on your AS/400 system. We take you step-by-step from your existing name resolution process based on the AS/400 host table to a full implementation of a primary name server and a secondary DNS server. Many companies have a simple internal network consisting of one or two subnets and use AS/400 host tables and PC client host tables to resolve TCP/IP host names to IP addresses. The disadvantage of this name resolution method is that every addition of a host may require an update to every client that needs to contact this new host. Configuring one AS/400 system to be a primary DNS server and a second AS/400 system to be a secondary (backup) name server alleviates this problem because adding or deleting a host and its IP address is done only once on the primary name server. 
The secondary name server automatically transfers the DNS files from the primary DNS server at pre-configured time intervals.
This chapter concentrates on three sections:
• How to configure the first DNS server (called the primary name server) in a small internal network by migrating the AS/400 host table.
• What configuration changes need to be made to allow mail to be delivered to the one AS/400 mail server in the network.
• How to configure a second DNS server (called a secondary name server) to act as a backup to the primary name server. The secondary name server can back up the primary server and balance the query workload.

3.1 Scenario Overview
In this chapter, we use an example network of three subnets connected by routers as shown in Figure 13 on page 26. This network is not connected to the Internet and, consequently, does not have a firewall installed anywhere in the network. The network initially does not include a DNS server and relies on host tables to resolve host names to IP addresses. In this scenario, we configure a primary DNS server for the domain mycompany.com, a secondary DNS server to back up the primary domain server, and we go through the steps to configure the AS/400 mail server so POP3 mail can successfully be delivered to the mail server. Also in this scenario, we choose not to include the domain remote.com in the DNS configuration on AS1. Thus, the DNS on AS1 only includes information about the mycompany.com domain shown in Figure 13 on page 26.
Figure 13. Scenario Network with One Primary Name Server and One Secondary Name Server

3.1.1 Scenario Objectives
In this scenario, we have the following objectives:
1. Plan the primary domain.
2. Configure a primary DNS server on AS1. We migrate the AS1's host table to create the initial DNS configuration on AS1.
3. Configure a mail server on AS1. This mail server serves as both an SMTP outgoing mail server and an incoming POP3 mail server for POP3 clients in the mycompany.com domain.
4. Configure a secondary DNS on the AS5 AS/400 system. This name server is used as a backup name server to the AS1 name server and to balance the query workload.
5. Review security options available on the primary name server.
6. Touch briefly on the reconfiguration of clients to use a DNS server instead of their own host table.

3.1.2 Scenario Advantages
The advantages of this scenario are that:
• It assumes the customer is coming from an environment that does not have a name server in the internal network. Thus, this scenario makes a good starting place for customers with little or no experience with name servers.
• It discusses how an AS/400 host table can be migrated into the DNS server configuration, which can make the initial name server configuration go faster and smoother.
• It outlines how to create a secondary name server to back up the primary name server. This prevents the primary name server from becoming a single point of failure in the area of name serving.
• Included in this chapter are steps for configuring the AS/400 system as a POP3 mail server now that a DNS server is in the network.
• It addresses some security issues by explaining how to configure the primary name server to control which secondary name servers can zone transfer from it and which clients (based on IP address or IP network) can be blocked from accessing the data residing on it.

3.1.3 Scenario Disadvantages
• This scenario describes how to configure a primary and secondary name server in a small internal network that does not have access to the Internet and does not have a firewall installed in the network. This type of network and name server configuration does not meet the needs of network installations that require Internet access and a firewall.
• This scenario describes how to configure DNS servers for one domain: mycompany.com. Thus, all hosts included in this name server configuration must have the domain name of mycompany.com.
• This scenario does not describe how to handle subdomains in the mycompany.com domain. Subdomains are covered in a subsequent chapter.

3.1.4 Scenario Network Configuration
Figure 14. Scenario Network Diagram
(The figure shows each host's IP address within its subnet; the addresses correspond to the AS1 host table in Figure 15 on page 31.)
The network shown in Figure 14 consists of three subnets connected by routers. The network mycompany.com is an internal network and it is not connected to the Internet. The three subnets' network IDs and subnet masks are as follows:
• 10.5.69.192 subnet mask 255.255.255.192
• 10.5.62.0 subnet mask 255.255.255.0
• 10.117.33.0 subnet mask 255.255.255.0
The primary DNS will run on AS1. An AS/400 system's DNS server can be configured for more than one primary domain and more than one secondary domain; however, in this scenario, the AS1 name server is configured to be primary for the one domain, mycompany.com. Because the mycompany.com domain includes the subnets 10.5.69.192 and 10.5.62.0, AS1 is also configured with the primary domain files 69.5.10.in-addr.arpa and 62.5.10.in-addr.arpa. This chapter contains step-by-step instructions for configuring the AS1 primary name server. One mail server is configured for handling mail in the mycompany.com domain. This mail server also resides on AS1. However, it is not a requirement that the mail server be on the same AS/400 system as the DNS server. Lastly, the AS5 AS/400 system is configured to be a secondary DNS server for the domain mycompany.com. This means that the name server on AS5 will act as a backup to the name server on AS1. It contains the same information as the name server on AS1, but the information is in the form of secondary domain files rather than primary domain files. Secondary domain files contain information that was obtained through a zone transfer (an automatic transfer of information using the TCP protocol) from the primary name server.

3.2 Task Summary
The tasks required to complete this scenario do not include the initial TCP/IP configuration on the AS/400 system such as creating a line description, creating an IP interface, creating a TCP/IP route, starting TCP/IP, and so on. This scenario assumes that the TCP/IP configuration on both AS/400 systems in the network and all other hosts in the network is completed and TCP/IP connectivity has been verified. The summary of tasks for this scenario is as follows:
1. Plan the primary domain mycompany.com.
2. Create the DNS primary name server on As1 using the following substeps:
• Prepare the local host table for migration on AS1.
• Migrate the AS1 host table to DNS formatted files. An AS/400 program is used to do this.
• Use Operations Navigator Import domain data to migrate the DNS formatted files to files that can be maintained by the Operations Navigator DNS configuration.
• Use Operations Navigator DNS server configuration to make final and ongoing configuration changes, if necessary.
3. Configure AS1 as a mail server. This task is divided into the following substeps:
• Configure POP3 users.
• Configure POP3 clients.
• Configure the domain's mail server in the primary DNS.
• Verify the TCP/IP and SMTP configurations.
• Start the mail jobs in the mail server.
4. Start the DNS server on AS1.
5. Verify that the DNS is operational.
6. Create the DNS secondary name server on AS5.
7. Review security options on the primary name server AS1.
8. Reconfigure clients to use a DNS server instead of host tables.

3.2.1 Planning the Primary Domain
The first step in moving from host tables to a name server is to determine what domain will become the primary domain. In this scenario, mycompany.com is the primary domain on the AS1 name server. Figure 14 on page 27 indicates, by a dotted line box, which hosts are to be included in the domain mycompany.com. All the hosts but one on the 10.5.69.192 network are included in this domain. The host OTHERSERVER, although it is on the 10.5.69.192 network, is in the domain OTHERDOMAIN.com; thus, it is not part of the mycompany.com domain. All the hosts on the 10.5.62.0 network are included in this domain, but no host on the 10.117.32.0 network is included in the mycompany.com domain because the hosts on this network are part of the remote.com domain. In other words, the hosts located in the remote.com domain are excluded from the migration. Consequently, the AS1 name server is unaware of the remote.com domain and its hosts. It is assumed that remote.com will continue to use host tables as the method of resolving names to IP addresses. Thus, in this chapter, the 10.117.32.0 network is not included in the migration but both networks 10.5.69.192 and 10.5.62.0 are included. The specific host OTHERSERVER is excluded from the migration because it belongs in another domain of OTHERDOMAIN.com even though it is part of the 10.5.69.192 network as indicated in Figure 15 on page 31. During the planning phase, it is important to verify that the clients that you, as the administrator, have decided belong in mycompany.com are configured with a domain name of mycompany.com. For example, assume that two NetWare servers are located in the 10.5.62.0 network and are configured with host and domain names of nw1.payroll and nw2.payroll. You decide to include them in the DNS configuration on AS1, so their domain names must be changed from payroll to mycompany.com. In other words, in this scenario, every host that is included in AS1's name server configuration must have a domain name of mycompany.com. Chapter 5, "Growing Your Domain: Creating Subdomains" on page 83 discusses the situation where a subdomain is used in a network and needs to be included in the mycompany.com domain. However, the scenario in this chapter assumes all hosts included in the AS1 name server have the domain name of mycompany.com without any subdomain names used.

3.2.2 Creating the Primary Name Server on As1
We divided the task of creating a primary name server into several subtasks.
The following sections discuss each of the steps that we follow to configure AS1 as a primary name server. 3.2.2.1 Preparing the Host Table for Migration Although this chapter uses a migration of an AS/400 host table as a method to configure the first name server, it is not a requirement that this method be used. It is possible to use Operations Navigator to configure DNS from the beginning. However, since AS1’s host table contains the host names and IP addresses of the hosts to be included in the mycompany.com domain, the host table migration method saves time, typing, and, consequently, it helps to avoid the possiblity of introducing typing errors into the DNS configuration. 30 AS/400 TCP/IP DNS and DHCP Support Since AS1’s host table is used as a starting point for the migration, it is important to "clean up" this host table: 1. Delete any hosts from the table that no longer exist in the network. 2. Make sure all hosts in the mycompany.com domain are listed in AS1’s Host table. 3. Check for incorrect IP addresses and typing mistakes in the AS1 host table names. 4. Verify that the hosts listed in the client’s host tables are included in the AS1’s host table. 5. Check for all host names in the host table with domain names other than mycompany.com. Do these hosts belong in another domain as listed or should they be included in the mycompany.com domain? If they do belong in the mycompany.com domain, now is the time to change the domain name on the host itself to mycompany.com and update AS1’s Host table to reflect the change. However, when changing the domain name of a host, be aware of the impact the change can have on the clients that possibly use this host as a server. If the host you are changing the domain name of is a mail server, the domain name change can have a wide-spread effect. You must also plan for the hosts that are not included in the migration. Note Figure 14 on page 27 specifies three hosts in the network that are not included in mycompany.com domain. The "future" DNS server on As1 with one primary domain of mycompany.com will not resolve queries for host OTHERSERVER, nor will it resolve queries for Rchserver2 and Rchserver3. If the As1 system is the only host that needs to access these systems, leaving their host names/IP addresses in the As1 host table may be sufficient since an AS/400 system can be configured to check its local host table first, and if the answer is not in the table, then query the DNS server. But if other clients need access to OTHERSERVER, Rchserver2, or Rchserver3, you need to decide how the clients will resolve those hosts names. For example, consider host OTHERSERVER in the domain OTHERDOMAIN.com. It is a good idea to review the AS1’s host table at this time and determine if this host really needs to belong in a domain of OTHERDOMAIN.com or if it can belong in the mycompany.com domain. If it can be included in the mycompany.com domain, now is a good time to change its domain name and change AS1’s host table to also reflect this change so OTHERSERVER can be included in the migration. For purposes of illustrating the example of excluding a host from the migration, consider OTHERSERVER as part of OTHERDOMAIN.com and the migration will exclude this host. Consider the situation with hosts Rchserver2 and Rchserver3. The AS1 host table shown in Figure 15 on page 31 indicates these two hosts are part of the remote.com domain. 
If the DNS server running on AS1 really needs to resolve DNS queries for these hosts, then a second primary domain of remote.com on the AS1 DNS server can be created and configured with Operations Navigator. In this case, AS1 has a DNS server running on it and is responsible for two primary domains: mycompany.com and remote.com. This scenario only concentrates on creating the primary domain of mycompany.com on AS1 and the secondary domain on AS5 for the same domain, mycompany.com. Thus in this scenario, the remote.com domain is excluded. But be aware that it is possible to create additional primary domains and secondary domains if the domain naming scheme and the network require it.

Figure 15. AS1 Host Table
Work with TCP/IP Host Table Entries                      System: AS1
Type options, press Enter.
  1=Add   2=Change   4=Remove   5=Display   7=Rename
     Internet                     Host
Opt  Address        Name
     10.5.62.58     p23fzg16      p23fzg16.mycompany.com
     10.5.62.169    p23fym82      p23fym82.mycompany.com
     10.5.62.187    p23gpb74      p23gpb74.mycompany.com
     10.5.69.204    p23thkp1      p23thkp1.mycompany.com
     10.5.69.205    NTserver1     NTserver1.mycompany.com
     10.5.69.207    otherserver   otherserver.otherdomain.com
     10.5.69.211    as2           as2.mycompany.com
     10.5.69.221    as5           as5.mycompany.com
     10.5.69.222    as1           as1.mycompany.com
     10.117.32.5    Rchserver3    Rchserver3.Remote.com
     10.117.33.24   Rchserver2    Rchserver2.Remote.com
     127.0.0.1      LOOPBACK      LOCALHOST

3.2.2.2 Migrating the AS/400 Host Table to DNS Formatted Files
The AS/400 program used to migrate the AS/400 host table to DNS formatted files is called QTOBH2N. There are several options that can be used with this program. A complete list of options is described in the DNS chapter of the TCP/IP Configuration and Reference, SC41-5420-01. In this scenario, we cover only the options used to migrate the AS1 host table in Figure 15. This migration step is one of the few DNS configuration steps that is executed from an AS/400 "green screen". The following steps migrate the AS1 host table to DNS formatted files:
• Make sure the AS1 host table is cleaned up and accurate.
• Add library QDNS to the user's library list with the AS/400 command ADDLIBLE LIB(QDNS).
• Grant the user profile that will run the program QDNS/QTOBH2N *ALLOBJ special authority.
• Change the job Coded Character Set ID (CCSID) for the user job that will run the program QDNS/QTOBH2N to 37. Be sure to record the original job coded character set ID so that you can change it back. Change the CCSID just before you run the program QDNS/QTOBH2N. Change the CCSID back immediately after you run this program.
• To change the user job's CCSID:
1. Enter the AS/400 CHGJOB command.
2. Press F4 to prompt.
3. Press F10 to select additional parameters.
4. Page Down twice to the parameter Coded Character Set ID.
5. Record the current value for Coded Character Set ID.
6. Change the coded character set ID to 37.
7. Press Enter.
The program QTOBH2N will migrate the AS/400 host table to DNS formatted files. On the AS1 AS/400 system, issue the command:
call pgm(qdns/qtobh2n) parm('-d' 'mycompany.com' '-n' '10.5.62:255.255.255.0' '-n' '10.5.69:255.255.255.0' '-e' 'otherdomain.com' '-M')
For this chapter's example, the preceding program creates three files: h2n.mycompany, h2n.10.5.62, and h2n.10.5.69.
After the command completes, the job log contains message DNS0417: Process completed successfully, file h2n.mycompany built in directory. Although the message does not refer to the h2n.10.5.62 and the h2n.10.5.69 files, it implies that these two files were also successfully created.
Tip: At this point, it is important to remember to change the CCSID on the user's job back to what it was before it was changed to 37. Set the DFTCCSID first and then the CCSID. Both should be set to the same value.
Note: Changing the CCSID back may not be a simple task because of the interaction of DFTCCSID and CCSID when the CCSID is set to 65535. It may be better to run the host table migration program from a batch job. Attempting to change back the CCSID may leave the keyboard in an unusable state in some countries (for example, Japan).
The options used to run the QTOBH2N program specified the particulars of how the host table should be migrated. An explanation of the options used is as follows:
1. The '-d' 'mycompany.com' option indicates the domain that the name server is primary for is mycompany.com.
2. The '-n' '10.5.62:255.255.255.0' and '-n' '10.5.69:255.255.255.0' options indicate that hosts listed in the AS/400 system's host table with IP addresses included in the networks 10.5.62 and 10.5.69 are included in the migration.
3. Any hosts in the preceding two networks that are in the domain OTHERDOMAIN.com are not included in the migration.
4. The migration does not create any MX records because the -M option was used. If the '-M' option is not used, an MX record is created for EVERY host included in the migration. In this scenario, an MX record for every host is not necessary. There is only one mail server (the AS1 host) in this scenario and one domain (mycompany.com). We need only one MX record and it is added later using Operations Navigator.
Note 1: The -e option needs further explaining. Remember, every host that is included in the migration is included in the mycompany.com domain. If the OTHERSERVER host is not excluded with the -e option, then OTHERSERVER is migrated with a domain of OTHERDOMAIN.com.mycompany.com. Even if OTHERDOMAIN is a subdomain of mycompany.com, the absolute domain name of OTHERDOMAIN.com.mycompany.com is not correct. Making OTHERDOMAIN a subdomain of mycompany.com is discussed in Chapter 5.
Note 2: Rchserver2 and Rchserver3 are not included in the migration by default and it is not necessary to eliminate them explicitly with the '-e' option. This is because only the hosts residing in the networks specified with the '-n' options are included in the migration. Because Rchserver2 and Rchserver3 reside on the 10.117.32.0 network, they are not included in the migration.
Note 3: Hosts AS1, NTserver1, AS2, AS5, p23thkp1, and OTHERSERVER have subnet masks of 255.255.255.192. These hosts are in the 10.5.69.192 network. The migration program does not handle subnetting into the fourth octet. Thus, if AS1's host table did include hosts in the 10.5.69.0 network (the 10.5.69.64 or the 10.5.69.128 networks), the migration program would include these hosts whether we want to include them in the migration or not.
In this scenario, the migration program creates three files in the /QIBM/UserData/OS400/DNS directory: h2n.mycompany, h2n.10.5.62, and h2n.10.5.69. At this point, you may want to verify that these files are in the /QIBM/UserData/OS400/DNS directory.
You may use the AS/400 command:
wrklnk '/QIBM/UserData/OS400/DNS'
Then use option 5 to view the next level of the DNS directory:
Work with Object Links
Directory . . . . :   /QIBM/UserData/OS400/DNS
Type options, press Enter.
  3=Copy   4=Remove   5=Next level   7=Rename   8=Display attributes
  11=Change current directory ...
Opt   Object link       Type     Attribute   Text
      h2n.mycompany     STMF
      h2n.10.5.62       STMF
      h2n.10.5.69       STMF
      ATTRIBUTES        STMF
      TMP               DIR
The three h2n files were created by the QTOBH2N program. The ATTRIBUTES file and the TMP directory existed before the QTOBH2N program was run. They were automatically created when you installed the OS/400 Domain Name System option 31. To view the contents of h2n.mycompany, you can use Operations Navigator:
• Click + next to as1.mycompany.com.
• Click + next to File System.
• Click + next to root.
• Click + next to QIBM.
• Click + next to UserData.
• Click + next to OS400.
• Click + next to DNS.
Figure 16. Viewing the Contents of the DNS Directory with Operations Navigator
Note that Figure 16 shows similar information to the WRKLNK command. However, double-clicking on h2n.mycompany brings up an Open with window that allows you to choose your favorite program to view the content of the DNS files. We chose Netscape to browse the h2n.mycompany file shown in Figure 17.
Figure 17. Viewing the Contents of h2n.mycompany File with Netscape
Figure 18 and Figure 19 show the contents of h2n.10.5.69 and h2n.10.5.62 browsed with Netscape.
Figure 18. Viewing the h2n.10.5.69 File with Netscape
Figure 19. Viewing h2n.10.5.62 with Netscape

3.2.2.3 Importing DNS Formatted Files to Operations Navigator
Once the AS/400 host table has been migrated to DNS formatted files using the QTOBH2N program, it is time to migrate the DNS formatted files to Operations Navigator DNS files. The Operations Navigator DNS Configuration Import Domain Data function accomplishes this step. From a Client Access Windows 95 client, bring up Operations Navigator and follow these instructions:
• Click + next to As1.mycompany.com.
• Click + next to Network.
• Click + next to Servers.
• Click + next to OS400.
Figure 20. Contents of OS/400 Servers
Double-clicking DNS brings up the DNS server configuration wizard. The wizard automatically starts when you enter the DNS configuration for the first time.
Figure 21. Welcome Window to the DNS Configuration Wizard
Click Next. The next wizard window allows you to Add IP addresses for Root Servers. This chapter's scenario does not make use of Root Servers. Click Next to bypass the Root Server window.
Figure 22. Choosing the Domain Type in the DNS Server Configuration Wizard
AS1 is the primary domain server for the domain mycompany.com. Thus, take the default of primary domain server shown in Figure 22 and click Next.
Figure 23. DNS Server Configuration Wizard Default Domain Name
Enter the primary domain name, mycompany.com (see Figure 23). Click Next.
Figure 24. Enter the Host Name and IP Address for Loopback
The next window presented by the wizard allows you to add IP addresses and host names. We need to add only one special host called localhost, which exists for the loopback address of 127.0.0.1. See Figure 24. Click Add. Enter localhost for the Host name. Enter 127.0.0.1 for the IP address. Click OK.
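For reference, this wizard step boils down to two resource records of the kinds described in Chapter 1. The following is a hedged sketch in generic zone-file notation; the exact owner-name forms that Operations Navigator writes into the .DB files may differ slightly:

localhost.mycompany.com.     IN A    127.0.0.1        (forward mapping, mycompany.com file)
1.0.0.127.in-addr.arpa.      IN PTR  localhost.       (reverse mapping, 0.0.127.in-addr.arpa file)

The A record lets every host resolve the name localhost to the loopback address, and the PTR record supplies the matching reverse lookup.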
Implementing Primary and Secondary DNS Servers 39 The remaining IP addresses and host names are imported from the h2n files. Figure 25. DNS Wizard Host Name/IP Address List Click Finish to exit the wizard. See Figure 25. At this point, Operations Navigator displays the DNS server as1.mycompany.com. Double-click on the Primary Domain to view the files that the wizard created. See Figure 26. Figure 26. Contents of DNS Server on the As1 System after Wizard Completes The Import Domain function that we are running next attempts to create a mycompany.com file. Thus at this point, you need to delete the existing mycompany.com file that the wizard automatically created: 1. Right click on mycompany.com. 2. Click Delete. 3. Click Yes to confirm. Do not delete the 0.0.127.in-addr.arpa file. To save your configuration and write the files to the IFS directory, you need to close the DNS window at this time. 40 AS/400 TCP/IP DNS and DHCP Support From the list of OS400 servers, double-click DNS. This time, you are not taken into the DNS configuration wizard; the DNS configuration graphical interface is displayed. Right click on the Primary Domains to get a pop-up window. See Figure 27. Figure 27. Right click on Primary Domain Select Import domain data. A window is shown containing a default path of /QIBM/UserData/OS400/DNS. Add the file you want to import to the path. In this case, the first file to be imported is h2n.mycompany. See Figure 28. Click OK. Figure 28. Importing h2n.mycompany.com Using the Import Domain Data Function A new file, mycompany.com, is created under Primary Domains. Repeat Import Domain Data for every h2n file that the QTOBH2N program created. Thus, repeat the Import Domain Data steps two more times for the remaining two h2n migration files: h2n.10.5.62 and h2n.10.5.69. In summary, for this chapter’s scenario, we ran the Import Domain Data function three times. Double-click Primary Domains in the Operations Navigator DNS server configuration. At this time, four files are shown in Figure 29. Implementing Primary and Secondary DNS Servers 41 Note: If the h2n file does not exist but is entered in the Import Domain Data field, no error message is sent to the user. Figure 29. Results of Running Import Domain Function Against All H2n Migration Files The four files contained in AS1’s primary domain are the files the DNS server requires to answer queries for the mycompany.com domain with the exception of a query for a mail server. An MX record is added later in this chapter to satisfy that requirement. Note that the three files, 62.5.10.in-addr.arpa, 69.5.10.in-addr.arpa, and mycompany.com shown in Figure 29, have an icon to the left of the file names that appears to be "hashed". This indicates that the domain is currently Disabled. The DNS server will not load a disabled domain. A disabled domain is like a "sand-box"; a domain can be created without making it live. To enable each domain, right click on each file name and select Enable. Close the Operations Navigator DNS server configuration window to save the DNS configuration. The migration of the AS/400 host table is completed. However, there are a few more DNS configuration changes that are best accomplished with the Operations Navigator DNS server configuration. We discuss these changes in the next section. 3.2.2.4 Additional DNS Configuration with Operations Navigator Once the migration of the host table is complete, any additional configuration changes can be made using the Operations Navigator DNS server configuration. 
Automatically Create/Delete Reverse Mapping Entries At this point, change the configuration to automatically create/delete a reverse mapping entry for every forward mapping entry that is added. Note: A forward mapping entry is also called an A or address record, which is contained in the forward mapping primary domain file. This entry is created by adding a new host to the mycompany.com primary domain file. Forward mapping is a host name to IP address mapping. The reason we make this configuration change can best be explained by an example: 42 AS/400 TCP/IP DNS and DHCP Support If a new host named newhost is added to the 10.5.69.192 network with an IP address of 10.5.69.206, the DNS administrator must add a host to the mycompany.com forward mapping primary domain file. This entry allows the DNS server to answer a query for a client who sends the IP address to the DNS server and requests that the DNS server give it the host name for the IP address. If the DNS administrator forgets to add the same new host to the 69.5.10.in-addr.arpa domain, the DNS server cannot answer a query from a client that sends the IP address of 10.5.69.206 and requests its host name. This type of query is sometimes called a "reverse look up". Consequently, another name for the 69.5.10.in-addr.arpa file is reverse mapping file for the 10.5.69 network. By configuring the DNS server to automatically create and delete the reverse mapping files, a DNS administrator only has to enter the new host into the forward mapping file: mycompany.com. The matching entry is automatically added in the appropriate reverse mapping file by Operations Navigator. There are few situations where a DNS administrator wants a host entered into the forward mapping file but not entered into the reverse mapping file. Thus, we recommend this configuration change; it can save time and help prevent mistakes when manually adding new hosts to the primary domain. To make this configuration change, do the following steps: 1. Right click on the file mycompany.com. 2. Select Properties. 3. Check Create and delete reverse mappings by default. See Figure 30. 4. Click OK. 5. Close the Operations Navigator DNS server configuration to save the configuration changes. Figure 30. Enable Create and Delete Reverse Mapping by Default for Domain mycompany.com Reviewing the Primary Domain Files on As1 Name Server Let’s review the contents of each primary domain file on AS1: 1. Double-click DNS. 2. Double-click DNS Server- as1.mycompany.com. Implementing Primary and Secondary DNS Servers 43 3. Double-click Primary Domains. 4. Double-click the forward mapping file mycompany.com. Figure 31 shows the contents of mycompany.com forward mapping primary domain file. Figure 31. Contents of Mycompany.com Primary Domain File 5. Double-click the 62.5.10.in-addr.arpa primary domain file to view the contents of the reverse mapping primary domain file for the 10.5.62 network shown in Figure 32. Figure 32. Contents of the 62.5.10.in-addr.arpa Primary Domain 6. Double-click the 69.5.10.in-addr.arpa primary domain file to view the contents of the reverse mapping file for the 10.5.69 network shown in Figure 33. Figure 33. Contents of the 69.5.10.in-addr.arpa Primary Domain 44 AS/400 TCP/IP DNS and DHCP Support The 0.0.127.in-addr.arpa domain was created by the DNS Configuration Wizard. Figure 34 shows the contents of this primary domain file. Note that the host localhost is contained in the mycompany.com forward mapping file shown in Figure 31 on page 43. 
You can think of the host localhost as the host that AS1 uses to "talk to itself". This host is a requirement; thus, it is a host that should immediately be added with the Operations Navigator DNS configuration wizard when initially configuring the name server. Figure 34. Contents of the Loopback Primary Domain At this point, we finished the configuration of mycompany.com primary DNS server by configuring one forward mapping file (mycompany.com) and two reverse mapping files, 62.5.10.in-addr.arpa and 69.5.10.in-addr.arpa. All three files are primary domain files. The wizard created a BOOT file, CACHE file that contains the root name servers, and the 0.0.127.in-addr.arpa reverse mapping file automatically. The directives in the BOOT file are created through Operations Navigator. Note that you cannot view the BOOT and CACHE files from Operations Navigator’s DNS configuration windows. However, they are located in the IFS directory: /QIBM/UserData/OS400/DNS and can be viewed with Operations Navigator: 1. Double-click the AS/400 system where the DNS server is running. 2. Click + next to File Systems. 3. Click + next to Root -> QIBM -> UserData -> OS400 ->. 4. Double-click DNS. 5. Double-click BOOT or CACHE file and choose the program you want to use to view the file. In later chapters, we say that a name server "caches" information it receives from another name server. This is a way a name server "remembers" information so if it receives a query from a client for the same host, it can respond with an answer from its cache and not query the authoritative name server again. It is important to understand that this cached information is not contained in the /QIBM/UserData/OS400/DNS/CACHE file. The CACHE file contains information about root servers. This scenario does not require the use of root servers; thus, the CACHE file in this scenario should remain empty. 3.2.3 Configuring AS1 as a Mail Server In this scenario, the AS1 AS/400 system is the only mail server for the mycompany.com domain. The DNS server running on AS1 needs to know this Implementing Primary and Secondary DNS Servers 45 since it receives queries from clients attempting to learn the IP address for the mail server that accepts mail for users in the domain mycompany.com. Also, in this scenario we made a decision to let users use SMTP domain names of mycompany.com as the domain in the mail’s destination when addressing their mail. The following example explains this further: UserA in domain mycompany.com wants to send mail to Tim Jones who is also a POP3 client in the mycompany.com domain. UserA sends mail from the POP3 client to the e-mail address of Tim@mycompany.com, which should be delivered to Tim’s POP3 server on AS1. In the following sections, we show how to configure the POP3 directory entry for Tim on AS1, how to configure the SMTP server on AS1, and how to configure the DNS server on AS1. The assumptions for this scenario: • The outgoing SMTP server for UserA’s client is AS1. • The incoming POP3 server for Tim’s client is AS1. • mycompany.com does not have access to the Internet for the purpose of exchanging mail with Internet users. • There is no firewall in the mycompany.com network. • AS1 is the only mail server for all mycompany.com domain. • Tim’s PC where the POP3 client resides is configured to use AS1 as its DNS server. 3.2.3.1 Configuring a POP3 User on AS1 The user Tim needs to have a user profile and a POP3 directory entry on AS1. Tim’s user profile is JONEST2. 
We need to add an entry in the system distribution directory for the POP3 user. Use the Add Directory Entry (ADDDIRE) command shown in Figure 35 and press Enter. The easiest way to configure mail in an internal network is to use an SMTP domain name of . Mail should be addressed to user@AS1.mycompany.com, where AS1 is the host name of the mail server. However, most users do not want to have to remember the host name as part of the SMTP domain name when addressing mail. Thus, in this scenario, we show the configuration to handle both situations: when a user addresses mail to: user@mycompany.com, and when the user addresses mail to the same user as \user@AS1.mycompany.com. In both cases, mail is delivered to the AS1 mail server. Tip 46 AS/400 TCP/IP DNS and DHCP Support Figure 35. Adding an Entry in the System Distribution Directory for User JONEST2 We now change the newly created directory entry to configure the user as a POP3 user. To change the directory entry, enter the following AS/400 command: WRKDIRE Press F17 to position to the JONEST2 directory entry. Use option 2 to Change JONEST2 directory entry. Once into the Change Directory Entry display, page down four times until you get to the portion of the directory entry that contains the parameters Mail service level and Preferred address. A POP3 directory entry must have a mail service level = 2 (System message store) and a Preferred address = 3 (SMTP name). See Figure 36. Figure 36. Mail Service Level and Preferred Address Values in POP3 Directory Entry Add Directory Entry (ADDDIRE) Type choices, press Enter. User identifier: User ID . . . . . . . . . . . jonest2 Character value Address . . . . . . . . . . . as1 Character value User description . . . . . . . . Tim Jones' POP directory entry User profile . . . . . . . . . . jonest2 Name, *NONE System name: System name . . . . . . . . . *LCL Character value, *LCL, System group . . . . . . . . . Character value Network user ID . . . . . . . . *USRID Change Directory Entry User ID/Address . . . . : JONEST2 AS1 Type changes, press Enter. Mail service level . . 2 1=User index 2=System message store 4=Lotus Domino 9=Other mail service For choice 9=Other mail service: Field name . . . . F4 for list Preferred address . . . 3 1=User ID/Address 2=O/R name 3=SMTP name 9=Other preferred address Address type . . . . F4 for list For choice 9=Other preferred address: Field name . . . . F4 for list More... F3=Exit F4=Prompt F5=Refresh F12=Cancel F18=Display location details F19=Change name for SMTP F20=Specify user-defined fields F24=More keys Implementing Primary and Secondary DNS Servers 47 Press F19 to enter Tim’s SMTP user ID and SMTP domain name in the Change Name for SMTP display. Press Enter to confirm that you want to add an SMTP userid and SMTP Domain name for this directory entry. Type in: SMTP user ID = tim SMTP domain = AS1.mycompany.com See Figure 37. Press Enter twice to confirm. Figure 37. Adding SMTP UserId and SMTP Domain Name for User JONEST2 3.2.3.2 Configuring POP3 Clients First, let’s summarize: Tim now has a POP3 directory entry on AS1. You can think of this as representing Tim’s POP3 mailbox. Mail sent to Tim@mycompany.com is delivered to this mailbox until the user Tim takes the option to "Get Mail" from the POP3 client. Tim’s SMTP User ID is tim and his SMTP domain name is AS1.mycompany.com. Another user in mycompany.com can send mail to Tim by addressing mail to tim@mycompany.com. 
Tim must configure his POP3 client (running on his PC, for example, Netscape mail) with a POP3 User Name that matches the POP3 directory entry User profile (JONEST2 in our example). Of course if you want to make your life easier, you can use the same name for User ID and SMTP user ID. Figure 38 shows the configuration for the POP3 mail client in the Netscape browser. Notice that AS1.mycompany.com is both an outgoing mail SMTP server Although users can address mail to Tim using the ’Mail To’ of: tim@mycompany.com (we will show you how to configure it shortly), the SMTP domain name must still be AS1.mycompany.com. Important Add Name for SMTP System: AS1 Type choices, press Enter. User ID . . . . . . . . : JONEST2 Address . . . . . . . . : AS1 SMTP user ID . . . . . . tim SMTP domain . . . . . . . as1.mycompany.com SMTP route . . . . . . . 48 AS/400 TCP/IP DNS and DHCP Support and an incoming mail POP3 server. The POP3 User Name matches the User profile in the AS/400 system distribution directory entry. Figure 38. Specifying the Mail Server Option to the Netscape POP3 Mail Client 3.2.3.3 Configuring the Domain’s Mail Server in the DNS Server There must be an A record for the domain’s mail server (also called mail exchanger) in the forward mapping primary domain file, mycompany.com, in the reverse mapping primary domain file 69.5.10.in-addr.arpa. In our scenario, AS1 already has an A record in both files as shown in Figure 31 on page 43 and Figure 33 on page 43. To tell the name server that AS1 is the mail server for the domain, we need to add an MX record to the mycompany.com primary domain file. We use a wildcard MX record for this. The following steps show how to configure a wildcard MX record: 1. Right click on the mycompany.com primary domain. 2. Select Properties. 3. Select Mail Tab. 4. Click Add. 5. Take the default domain (*.mycompany.com.) and click OK. See Figure 39. Implementing Primary and Secondary DNS Servers 49 . Figure 39. Configuring the DNS Primary Domain Mycompany.com’s Mail Server 6. Enter the host name of the mail server (in this case, as1, as shown in Figure 40). 7. Click on OK. The result is shown in Figure 41. 8. Click on OK a second time to exit out of the Properties window. 9. Close the DNS window to Save the DNS Configuration. Figure 40. Entering the Mail Server’s Host Name Note: From the Operations Navigator DNS server configuration, the only way to ensure that you entered an MX record is to display the Properties of the 50 AS/400 TCP/IP DNS and DHCP Support mycompany.com domain and review the contents of the Mail tab. To view the actual MX record, you can use the Operations Navigator File System to display the mycompany.com.DB file contained in the /QIBM/UserData/OS400/DNS path. Figure 41. Result of Wildcard MX Record Added to mycompany.com Primary Domain File If an MX query is sent to the name server for a host that does have an A record configured on the name server, the wildcard MX record is not used. The name server sends a negative response. However, for SMTP and mail, this is OK because after receiving a negative response for the MX query, the SMTP code sends an A record query for that host to the name server. Since an A record exists for that host, the name server sends a positive response to the A record query and SMTP attempts to send the mail to that host’s IP address. It is not a requirement that the mail server and the DNS server be the same AS/400 system. 
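For reference, the records behind this Mail tab configuration, as they would appear in the mycompany.com.DB file, look roughly like the following sketch (standard BIND master file notation; the preference value of 0 matches the NSLOOKUP output shown later in this chapter):

*.mycompany.com.     IN  MX  0  as1.mycompany.com.
as1.mycompany.com.   IN  A      10.5.69.222

Because as1 (and every other host in the domain file) already has records of its own, the wildcard is not used for those names; as explained above, an MX query for such a host simply receives a negative response, and SMTP falls back to the host's A record.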
3.2.3.4 Verifying the TCP/IP and SMTP Configuration on AS1 We should verify the TCP/IP configuration parameters relevant to mail as well as the SMTP configuration on the mail server (AS1). Use the Change SMTP Attributes (CHGSMTPA) command to verify the SMTP configuration. Figure 42 shows the SMTP attributes in AS1. Since this network is not connected to the Internet and there is no firewall installed in the network, we want to confirm that the parameter Mail Router = *NONE and the parameter Firewall = *NO. The MX record of domain name of *.mycompany.com is called a wildcard MX record. If a client sends an MX record query to the name server for a host in domain mycompany.com and the name server does not have an A record for that host, then the name server sends a response stating that AS1.mycompany.com is the mail server for the domain mycompany.com. Note Implementing Primary and Secondary DNS Servers 51 Figure 42. AS1 SMTP Attributes Verify the TCP/IP domain configuration. Use the Configure TCP command: CFGTCP option 12. Verify the host name (AS1) and domain name (mycompany.com). Make sure that the Search First parameter is *LOCAL and the Internet Address is the IP address of AS1’s DNS server, which is 10.5.69.222. This ensures that when the SMTP server on AS1 attempts to deliver mail, it checks the AS/400 host table first and then the DNS server at 10.5.69.222 to resolve host names. We need the local host table searched first because it contains the alias mycompany.com for AS1, which the SMTP server needs to find. See Figure 43. Figure 43. CFGTCP Opt 12 on AS1 System The last TCP/IP configuration that we need to verify on AS1 is the local host table. Make sure that mycompany.com. is listed as an alias to AS1 in the host table. Notice in Figure 44 that mycompany.com is another host name for the IP interface of 10.5.69.222. This alias listed in the host table combined with the Change SMTP Attributes (CHGSMTPA) Type choices, press Enter. Mail router . . . . . . . . . . *NONE Coded character set identifier 00819 1-65533, *SAME, *DFT Mapping tables: Outgoing EBCDIC/ASCII table . *CCSID Name, *SAME, *CCSID, *DFT Library . . . . . . . . . . Name, *LIBL, *CURLIB Incoming ASCII/EBCDIC table . *CCSID Name, *SAME, *CCSID, *DFT Library . . . . . . . . . . Name, *LIBL, *CURLIB Firewall . . . . . . . . . . . . *NO *YES, *NO, *SAME Change TCP/IP Domain (CHGTCPDMN) Type choices, press Enter. Host name . . . . . . . . . . . 'AS1' Domain name . . . . . . . . . . 'mycompany.com' Host name search priority . . . *LOCAL *REMOTE, *LOCAL, *SAME Internet address . . . . . . . '10.5.69.222' 52 AS/400 TCP/IP DNS and DHCP Support Search First = *LOCAL (from CFGTCP opt 12) allows mail addressed to user@mycompany.com to be delivered to the AS1 mail server. Mail addressed to user@AS1.mycompany.com is also delivered to the AS1 with this configuration. Figure 44. Configuring mycompany.com as an ALIAS to AS1 and as1mycompany.com 3.2.3.5 Starting Mail Jobs on the Mail Server (AS1) To start SMTP, POP3, and the mail server framework jobs, issue the following commands: •strtcpsvr *smtp •strtcpsvr *pop •strmsf If some of these jobs fail and cancel, you need to check the job logs for errors. See Section 8.1.10.4, “SMTP and POP Servers” on page 203. 3.2.4 Starting the DNS Server on AS1 To start the DNS server on AS1: 1. Close the DNS window in Operations Navigator. 2. From the OS400 Server list, right click DNS. 3. Click Start. Work with TCP/IP Host Table Entries System: AS1 Type options, press Enter. 
1=Add 2=Change 4=Remove 5=Display 7=Rename Internet Host Opt Address Name 10.5.62.58 p23fzg16 p23fzg16.mycompany.com 10.5.62.169 p23fym82 p23fym82.mycompany.com 10.5.62.187 p23gpb74 p23gpb74.mycompany.com 10.5.69.204 p23thkp1 p23thkp1.mycompany.com 10.5.69.205 NTserver1 NTserver1.mycompany.com 10.5.69.207 otherserver otherserver.otherdomain.com 10.5.69.211 as2 as2.mycompany.com 10.5.69.221 as5 as5.mycompany.com 10.5.69.222 as1 as1.mycompany.com mycompany.com 10.117.32.5 Rchserver3 Rchserver3.Remote.com 10.117.33.24 Rchserver2 Rchserver2.Remote.com 127.0.0.1 LOOPBACK LOCALHOST Implementing Primary and Secondary DNS Servers 53 Figure 45 shows the DNS start sequence. Figure 45. Right Click on DNS to Start the DNS Server The DNS Server status is now Started. This may take a minute. Once the DNS Server is started, there should be one job named QTOBDNS active in the QSYSWRK subsystem on AS1. 3.2.5 Verifying That the DNS Server is Operational The last step, of course, is to make sure that the name server is working properly. The DNS job logs and NSLOOKUP are the best sources to check for errors and verify that the DNS is operating as expected. 3.2.5.1 Reviewing DNS Job Log QTOBDNS for Errors It is always a good idea to review the QTOBDNS job log for any errors: 1. From the AS1 command line, enter: wrkactjob sbs(qsyswrk) job(qtobdns) 2. Take Option 5 to work with the job. 3. Take Option 10 to display the job log. 4. Press F10 to display all messages in the job log. You may have to roll up or roll down to view all the messages. 5. Review messages for any errors. Figure 46 shows the QTOBDNS job log after a successful startup of the DNS server. 54 AS/400 TCP/IP DNS and DHCP Support Figure 46. QTOBDNS Job Log After a Successful Startup of DNS Server Note the error message: Could not assign address to socket. Displaying Message Details shows the message ID DNS00E9. This error message may or may not be a problem. For further details on this error message and what to do if the job log contains it, see , “Problem Symptom 4:” on page 209. 3.2.5.2 Using NSLOOKUP to Verify the DNS Configuration for Mail NSLOOKUP is an interactive tool that can be used on the AS/400 system to simulate a client querying a DNS server. We used nslookup to verify that the host AS1.mycompany.com, the alias mycompany.com, and the MX record for *.mycompany.com are configured correctly on the AS1 DNS server. To start an NSLOOKUP session, enter the command: call pgm(qdns/qtoblkup) Address Query Type Using NSLOOKUP We use query type A (address) to query A (address) records in the name server. The NSLOOKUP default query type is the A (address) record query, thus, from an NSLOOKUP session, enter: as1.mycompany.com. Figure 47 shows the results of this query. To the right of the > symbol, you can see the query that we entered before. The text that follows that line is NSLOOKUP answer. Server and Address refer to the name server NSLOOKUP is querying. The next Name and Address is the DNS server response to the A record Job . . : QTOBDNS User . . : QTCP Number . . . : 013973 >> CALL PGM(QDNS/QTOBDNS) PARM('-p' '53' '-d' '0' '-b' '/QIBM/UserData/OS400/ DNS/BOOT') DNS server starting. Could not assign address to socket. primary zone mycompany.com (serial number 886456347) loaded successfully. primary zone 69.5.10.in-addr.arpa (serial number 886456347) loaded successfully. primary zone 1.1.10.in-addr.arpa (serial number 886456347) loaded successfully. primary zone 62.5.10.in-addr.arpa (serial number 886456347) loaded successfully. cache zone . 
(serial number 0) loaded successfully. Ready to answer queries If an IP interface is started after the DNS server starts, the DNS server must be stopped and started again or the Update Server function from Operations Navigator must be run before the name server can accept queries on the newly started IP interface. Tip Implementing Primary and Secondary DNS Servers 55 query: the IP address of AS1.mycompany.com is 10.5.69.222. We happened to issue an A record query for the same host that runs the DNS server. Figure 47. A Record Query for AS1.mycompany.com Using NSLOOKUP MX Record Query Using NSLOOKUP for Unknown Host To issue an MX record query using nslookup, we first need to change the query type. Enter the NSLOOKUP command: SET TYPE=MX If we issue an MX query for a host that does not have an A record in the DNS configuration, the name server uses the wildcard MX record for *.mycompany.com. that we configured in Section 3.2.3.3, “Configuring the Domain’s Mail Server in the DNS Server” on page 48. The name server answers that AS1.mycompany.com is the mail exchanger for the domain mycompany.com. For example, Figure 48 shows the result of an MX query for anyhost.mycompany.com. The AS1 name server does not have an A record for anyhost, which you can verify by referring to Figure 31 on page 43. Figure 48. Nslookup MX Query for Unknown Host Anyhost However, if an MX query is issued for a host that has an A record on the name server such as AS2, the name server does not use the wildcard MX record and > > as1.mycompany.com. Server: as1.mycompany.com Address: 10.5.69.222 Name: as1.mycompany.com Address: 10.5.69.222 > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window Default Server: as1.mycompany.com Address: 10.5.69.222 > > set type=mx > > anyhost.mycompany.com. Server: as1.mycompany.com Address: 10.5.69.222 anyhost.mycompany.com preference = 0, mail exchanger = as1.mycompany.com mycompany.com nameserver = as1.mycompany.com as1.mycompany.com internet address = 10.5.69.222 > 56 AS/400 TCP/IP DNS and DHCP Support simply returns a negative response to the MX query. Figure 49 shows an example of this. Figure 49. Nslookup MX Query for Host AS2 If an SMTP client tried to deliver mail to AS2, it first queries for an MX record and receives a negative response. Next, the SMTP client queries for an A record for AS2, receives a positive response, and attempts to establish a connection with the SMTP server running on AS2. In our scenario, however, AS2 is not a mail server; therefore, there is no SMTP server running on AS2 and an SMTP client fails to establish a connection. MX Query Using NSLOOKUP for Domain Mycompany.com What does the name server do with an MX query for the domain mycompany.com? Will the wildcard MX record *.mycompany.com be used? The answer is no. See Figure 50. Figure 50. Nslookup MX Query for Domain mycompany.com. As we discussed in Section 3.2.3, “Configuring AS1 as a Mail Server” on page 44, if mail is addressed to user@mycompany.com, it will be delivered. How does it get delivered? The secret to getting mail delivered when it is addressed to the domain only and not is having the alias of mycompany.com listed in the AS/400 local host table and having the Search First parameter set to *LOCAL. This was outlined in Section 3.2.3.3, “Configuring the Domain’s Mail Server in the DNS Server” on page 48. 
This causes the SMTP server on AS1 to search the local host table first, find the alias mycompany.com for AS1, determine that the mail is destined for the same AS/400 system that the SMTP is running on, and then attempt to find the POP3 directory entry to deliver the mail. In this case, the DNS server on AS1 is not involved in helping to deliver the mail. > > set type=mx > > as2.mycompany.com. Server: as1.mycompany.com Address: 9.5.69.222 *** No mail exchanger (MX) records available for as2.mycompany.com. > ===> > > set type=mx > > mycompany.com. Server: as1.mycompany.com Address: 9.5.69.222 *** No mail exchanger (MX) records available for mycompany.com. > ===> Implementing Primary and Secondary DNS Servers 57 3.2.6 Creating a Secondary DNS Server We do not recommend that your network relies on only one DNS server for availability reasons. Once the primary name server is operational, we need to create a secondary domain name server that can back up the primary DNS server and also be used to distribute the DNS query workload between two or more servers. After you configure and start the secondary name server, it attempts to do a zone transfer of the domain files that reside on the primary name server. The server that the secondary name server gets its domain files from is called master server. The master server can be a primary domain name server or another secondary name server. In this scenario, AS5 is the only secondary name server; thus, its master name server must be the primary name server AS1. 3.2.6.1 Configuring the Secondary Server AS5 Use Operations Navigator DNS server configuration to configure AS5 as a secondary DNS server. Three secondary domain files need to be created on AS5 to fully back up AS1: • mycompany.com forward mapping secondary domain file • 62.5.10.in-addr.arpa reverse mapping secondary domain file • 69.5.10.in-addr.arpa reverse mapping secondary domain file To create mycompany.com forward mapping secondary domain file: 1. Double-click as5.mycompany.com. We are now using Operations Navigator to configure AS5. 2. Double-click Network. 3. Double-click Servers. 4. Double-click OS/400. 5. Double-click DNS. 6. Double-click DNS Server. 7. The DNS configuration wizard starts if this is the first time you are configuring DNS on AS5. 8. Click Next. 9. Click the radio button to the left of Secondary Server when the wizard asks which type of server you want to configure. 10.Click Next. 11.Enter the domain that this server will be secondary for: mycompany.com. 12.Enter the IP address of the primary name server. In this case, it is 10.5.69.222, which is the IP address of the AS1 AS/400 system. 13.Click on Finish. At this point, the mycompany.com domain shows up under Secondary Domain. This is only one of three domain files that you need created to fully back up the AS1 primary domain name server. Let’s check to make sure that the wizard enabled save copies of the master server files. This guarantees that the secondary domain files are backed up on the secondary server. The secondary server attempts to do a zone transfer every time it starts. If the RFCs recommend that a secondary name server does not get zone tranfers from another secondary DNS server. Tip 58 AS/400 TCP/IP DNS and DHCP Support primary server is down at that time, the secondary server runs from the backup files. It uses the backup files until the data expires (or a new transfer is successful). • Double-click Secondary Domains in AS5’s DNS server configuration. 
At this point, there should be one secondary domain of mycompany.com. • Double-click the mycompany.com secondary domain. • Ensure that Save copies of the master server files is checked off. • Click OK. Creating 62.5.10.in-addr.arpa Reverse Mapping Secondary Domain File 14.Right click Secondary Domain. 15.Select New Secondary Domain. 16.Enter the domain: 62.5.10.in-addr.arpa. Ensure that Save copies of the master server files is checked off. 17. Click Add. 18. Enter the IP address of the primary name server (AS1): 10.5.69.222. 19. Click OK. Create 69.5.10.in-addr.arpa Reverse Mapping Secondary Domain File 20.Repeat the previous steps 14 to 19 to create the 69.5.10.in-addr.arpa secondary domain file. The only difference is step 16; this time the domain is: 69.5.10.in-addr.arpa 21.Close the DNS window to save the secondary domain configuration on AS5. 3.2.6.2 Adding NS Record for the Secondary Name Server on AS1 It is good practice, even when not mandatory, to add an NS resource record for the secondary name server in the primary domain files on the primary DNS. When the primary name server responds to queries from clients, the response includes the IP address of the secondary name server if the primary knows about it (NS record for secondary is in primary’s configuration). If the client’s resolver is smart enough to handle this information, it decides which name server, primary or secondary, is closer based on the IP address. If the secondary DNS server is closer to the client, it sends future queries to it, improving name resolution response time. To add a resource NS record on AS1’s primary domain files, use the following steps: 1. Start the AS1 DNS server configuration in Operations Navigator. 2. Right click on the primary domain file mycompany.com. 3. Select Properties. 4. Select the Secondary Name Server tab. 5. Click Add. 6. Verify that the domain name is mycompany.com. This is the domain name that the secondary server is located in. Do not forget the trailing period after com. 7. Click OK. 8. Enter the host name of the secondary name server AS5. 9. Click OK. See Figure 51 to review the result. 10.Click OK. Implementing Primary and Secondary DNS Servers 59 Figure 51. Enrolling the Secondary Server AS5 in the Primary Server’s List of Name Servers 11.Repeat steps 2 through 9 for the primary domain file of 69.5.10.in-addr.arpa and again for the primary domain file of 69.5.10.in-addr.arpa. The domain name should be the domain name of the secondary name server, which is mycompany.com. 12.If the DNS server is started, click the Update Server smart icon to save the changes to a file and send a signal to the DNS server to reread its configuration files. 13.If the DNS server has been stopped, close the DNS window and start the DNS server. A secondary name server does not successfully initiate a zone transfer if the primary name server is not started. 3.2.6.3 Starting the Secondary Name Server We are now ready to start the secondary name server on AS5. Start the AS5 name server with Operations Navigator or with the AS/400 command on AS5: strtcpsvr *dns As the DNS secondary name server starts on AS5, it attempts a zone transfer from AS1 to transfer the three domain files to AS5. The DNS server on AS1 (the primary name server) needs to be active at this time for the zone transfers to be successful. On AS5, the secondary DNS server job QTOBDNS starts as well as jobs named QTOBXFER. Each QTOBXFER job is responsible for one of the zone transfers on AS5. 
Each QTOBXFER job ends as soon as the zone transfer finishes. In this scenario, AS5 initiates three zone transfers, one for each domain: mycompany.com, 62.5.10.in-addr.arpa, and 69.5.10.in-addr.arpa. Figure 52 on page 60 shows the QTOBDNS job log on the secondary DNS server, AS5. On As1, the primary name server QTOBDNS job should already be active if the DNS server is started. During the zone transfers, one job named QTOBXMIT starts on the primary name server for every zone transfer that takes place. Each QTOBXMIT ends when the zone transfer it is responsible for finishes. 60 AS/400 TCP/IP DNS and DHCP Support After the DNS server is started, the QTOBDNS job should remain active in the QSYSWRK subsystem on AS5. Review the QTOBDNS job log on both the primary and secondary name servers for any errors and to verify that the domains transferred successfully. See Figure 52 for an example of a secondary name server’s QTOBDNS job log after a successful start. If there are error messages in the job logs regarding the zone transfer, see 8.2, “Problem Symptoms and Probable Causes” on page 207. There are several Problem Symptoms documented in this section dealing with why a zone transfer may fail. Figure 52. QTOBDNS Job Log on Secondary System After DNS Server Successfully Starts 3.2.6.4 Controlling Zone Transfer Frequency By now it should be clear that the DNS administrator only updates the DNS files on the primary name server and the secondary name server automatically performs zone transfers of the data from the primary name server (or another secondary name server) to keep its domain data in sync with the domain data on the primary name server. How often should a secondary name server check with the primary name server to make sure its data is in sync with the primary? The answer depends on how often the primary name server is updated with changes; thus, it varies from installation to installation. Therefore, refresh rates and other associated timers can be configured on the primary name server. These configuration rates and times are on the Properties of each primary domain file on the primary name server. Usually the supplied defaults are acceptable for a typical network serviced by a DNS server. Figure 53 shows the defaults for the primary domain mycompany.com on AS1. Refer to Section 8.1.2, “Tips for Performance” on page 186 for performance considerations. Job . . : QTOBDNS User . . : QTCP Number . . . : 045012 >> CALL PGM(QDNS/QTOBDNS) PARM('-p' '53' '-d' '0' '-b' '/QIBM/UserData/OS400/ DNS/BOOT') DNS server starting. secondary zone mycompany.com (serial number 886456347) loaded successfully. secondary zone 62.5.10.in-addr.arpa (serial number 886456347) loaded successfully. primary zone 0.0.127.in-addr.arpa (serial number 886464830) loaded successfully. secondary zone 62.5.9.in-addr.arpa (serial number 886456347) loaded successfully. cache zone . (serial number 0) loaded successfully. Ready to answer queries. Implementing Primary and Secondary DNS Servers 61 Figure 53. Retry and Refresh Rates for Secondary Name Servers The information on the Properties page of the primary domain file mycompany.com shown in Figure 53 is included in the SOA resource record in the file /QIBM/UserData/OS400/DNS/mycompany.com.db file. Let’s define what these numbers mean: • Secondary server refresh interval: A secondary server checks to see that it is in sync with the primary by checking the serial number contained in the SOA record. 
Every time a DNS administrator makes a change to a primary domain file, Operations Navigator automatically increments this serial number. In fact, after starting the primary name server, the QTOBDNS job log contains a message stating the serial number that the primary domain file is running with. See Figure 46 on page 54 for an example of this. The secondary server refresh interval of three hours means that the secondary server checks the primary name server’s serial numbers of the domain files it is configured to back up every three hours. If the serial numbers are different, the secondary name server attempts a zone transfer to refresh its domain files. If the serial numbers are the same, then the zone transfer is not needed and does not take place. • Secondary server retry interval: Specifies the time interval that elapses before the secondary domain server can re-attempt to refresh its data from the primary domain server after the previous refresh attempt failed. You can specify the time in seconds, minutes, hours, and days. • Secondary server expire interval: If the secondary domain files are configured with Save copies of master server data enabled, then a secondary name server saves backup copies of the domain files after successful zone transfers. This allows a secondary name 62 AS/400 TCP/IP DNS and DHCP Support server to start up from these backup files and then attempt a refresh. If the refresh fails, the secondary name server continues to be active but it serves responses from its backup files that may be down level from the primary domain files on primary name server. The secondary server expire interval of seven days means that the secondary name server can run from its backup files for a limit of seven days. After seven days, the backup files expire and the secondary name server can no longer use them. Remember, if the refresh from the master server fails, the secondary name server re-tries with the frequency specified in the Secondary server retry interval field (every hour by default). Thus, for the secondary server’s backup files to expire, the master name server must be down for seven days. • Default cache time for domain data: This timer affects all the name servers that query this primary name server. When a name server queries AS1 and gets a positive response, it caches the response (saves it) so if another client queries the name server for the same information, the name server can supply the response from its cache and not have to query the AS1 name server again. Default cache time for domain data controls how long name servers can keep AS1’s positive responses in their cache. The default setting for this parameter is one day. Negative responses are cached for a hard-coded value of 10 minutes. This value cannot be configured. • Start of Authority Cache Time: By default, the Properties page of each primary domain file leaves this parameter blank. However, this does not mean that the SOA record does not have a cache time setting but rather the SOA cache time defaults to whatever the default cache time for domain data is set to (which is one day, by default). Let’s further explain. The primary domain file contains several types of resource records: one SOA record, at least one NS record, several A records, perhaps CNAME records, and perhaps MX records. Each of these resource records can have a cache timer associated with it, which, for that particular resource record, overrides the default cache time for domain data settings. 
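These timers are easier to picture in the SOA record itself. Based on the defaults just described, the SOA record in the /QIBM/UserData/OS400/DNS/mycompany.com.db file would look roughly like the following sketch (the serial number is the one shown in the QTOBDNS job log in Figure 46; the responsible-person mailbox is an assumption, and the times are expressed in seconds):

mycompany.com.  IN  SOA  as1.mycompany.com.  postmaster.as1.mycompany.com. (
                         886456347  ; serial - incremented by Operations Navigator on every change
                         10800      ; refresh - 3 hours
                         3600       ; retry - 1 hour
                         604800     ; expire - 7 days
                         86400 )    ; default cache time for domain data - 1 day

You normally change these values only through the Properties page shown in Figure 53 rather than by editing the file.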
The Properties page of the mycompany.com domain contains the timer for only the SOA resource record. So where are the timers configured for other resource records? Let’s take the A record for example: • Double-click on the mycompany.com primary domain file. • Right click on any host (for example, AS2.mycompany.com). • Click Properties to display the individual host AS2’ properties. • Note that there is a cache time parameter here that defaults to blank. See Figure 54. Implementing Primary and Secondary DNS Servers 63 Figure 54. Cache Time for Individual Host AS2 AS2’s cache time is blank, which means that the default cache time for domain data of one day is what AS2’s cache time defaults to. 3.2.7 Primary Name Server Security Considerations The information in the DNS files is security sensitive and you may want to restrict the secondary name servers that are authorized to do zone transfers. Furthermore, you may also want your DNS to answer queries from a pre-determined set of clients. This section discusses some techniques that make your name server more secure. 3.2.7.1 Zone Transfer Security By default, the primary name server allows any secondary name server to request a zone transfer. If you want to restrict zone transfers to only authorized secondary name servers, you can do so by configuring the trusted name servers IP addresses. Use the following steps: 1. Start AS1 DNS server configuration in Operations Navigator. 2. Right click DNS Server-As1.mycompany.com. 3. Click Properties. 4. Select the Security tab. If you determine that you need to override one of the default timers on the mycompany.com properties page in Figure 53 on page 61, do not forget to change the same timers on the 62.5.10.in-addr.arpa properties page and the 69.5.10.in-addr.arpa properties page. Remember, the in-addr.arpa files are primary domain files just the same as mycompany.com. The changes in any of the tabs in mycompany.com’s properties page only affects the primary domain file of mycompany.com. You must remember to consider making matching configuration changes to the in-addr.arpa primary domain files if you make configuration changes to the mycompany.com primary domain file. Tip 64 AS/400 TCP/IP DNS and DHCP Support 5. Click Add. 6. Enter the IP address of the secondary name server: 10.5.69.221 and the mask of 255.255.255.255. See Figure 55. This creates a XFRNETS directive in the BOOT file that takes as its arguments the networks or IP addresses you want to allow to transfer zones from your name server. Figure 55. Authorizing Secondary Name Server AS5 to the Primary Name Server AS1 7. Click OK. 8. Close the DNS server configuration window. 9. Right click DNS. 10.Click Stop. 11.Click Start to start the DNS server again. In this example, we are authorizing a specific IP address of a secondary server, AS5. If we want to authorize any secondary server in the network 10.5.69.192, we specify an IP network of 10.5.69.192 with a mask of 255.255.255.192. Tip • When changing the contents of the Security tab on the DNS server, you must run the Update Server function from Operations Navigator for the change to take effect. • When the Security tab of the DNS server is empty of secondary servers, ANY secondary server is authorized to do a zone transfer from the primary server AS1. 
As soon as the Security tab is configured with one IP address of a secondary server, then all other secondary servers are denied Tips Implementing Primary and Secondary DNS Servers 65 3.2.7.2 Restricting Queries by Client’s IP Address It is possible to configure each primary domain file to allow clients with only certain IP addresses to query this primary domain data. From Operations Navigator, go into the primary server AS1’s DNS configuration. To authorize only certain clients to query AS1, use the following steps: 1. Start AS1 DNS server configuration in Operations Navigator. 2. Double-click Primary Domains. 3. Right click mycompany.com. 4. Select Properties. 5. Select the Security tab. Note: When we display the Security tab, the Limit domain data access to subnets list and the Limit domain data access to IP address list are both blank by default. When both of these lists are blank, that means that ANY client that knows the primary name server’s IP address and has TCP/IP connectivity to this AS/400 system can successfully query the AS1 name server. 6. Click on Add to add the subnet that you want to allow. 7. Enter the subnets of clients that you want to allow to query this name server by entering the subnet’s network address: 10.5.69.192 and mask 255.255.255.192. 8. Click on OK (but do not click on the second OK just yet). Now that we have authorized all clients located in the subnet 10.5.69.192 to query the primary name server AS1, we have implicitly denied clients from all other subnets. We have even denied access from localhost. For this scenario, we should also give access to clients from the 10.5.62.0 network and give access to the explicit address of 127.0.0.1 for localhost. 9. Repeat steps 6 through 8 for the subnet of 10.5.62.0. The subnet mask for 10.5.62.0 subnet is 255.255.255.0. 10.Click on the second Add to Limit domain data access to IP addresses. 11.Enter the IP address of localhost 127.0.0.1. See Figure 56 to view the result. Note: Once you specify an address or subnet on the primary domain properties’ security tab, it is required to specify the 127.0.0.1 IP address in the Limit domain data access to IP address list. 12.Click on OK. The previously explained configuration adds secure-zone TXT records to the primary domain configuration file. The secure-zone record defines an access list of IP addresses allowed to query your name server for data in a particular zone. We just authorized all clients from two subnets, 10.5.69.192 and 10.5.62.0, to access the mycompany.com primary domain file on the primary name server AS1. However, when a client on one of those networks issues a reverse look up query to the AS1 name server, will it be successful? Yes, because the security tabs on both the 69.5.10.in-addr.arpa and the 62.5.10.in-addr.arpa primary domain files are still blank; by default -- any client has access to these files. Thus, if we need to have the same security on 69.5.10.in-addr.arpa and the 62.5.10.in-addr.arpa primary domain files as we do on mycompany.com, we need to repeat steps 3 through 12 for each in-addr.arpa file. And do not forget step 11; 66 AS/400 TCP/IP DNS and DHCP Support which specifies the localhost IP address to be an authorized address on both in-addr.arpa primary domain files’ properties security tab also. Figure 56. 
Restricting DNS Queries by Subnet and Client’s IP Address 3.2.8 Reconfigure Clients to Use the DNS Server Now that the primary and secondary DNS servers are active on AS1 and AS5 systems, it is time to reconfigure clients to start using the DNS servers. An AS/400 system can be a client to a name server; thus, the AS/400 systems in the mycompany.com domain require a configuration change also. 3.2.8.1 Configuring AS/400 Systems to Query the DNS Server To reconfigure the AS1system to query the DNS server, enter the command: CFGTCP option 12 Figure 57 on page 67 shows the resolver configuration on AS1. Host name search priority specifies whether to search a remote Domain Name Server (DNS) to resolve a TCP/IP host name, or to search the local TCP/IP host table first. *LOCAL means that we want this system to first search the TCP/IP host table located on this system to resolve TCP/IP host names. NOTE: Because AS1 is the mail server, we configured the Host name search priority to be *LOCAL. See Section 3.2.3.4, “Verifying the TCP/IP and SMTP Configuration on AS1” on page 50, for details. Figure 57 is taken from AS1; thus, it shows Search First=*LOCAL. Specify *REMOTE if you want this system to search a remote DNS server to resolve TCP/IP host names before searching the local TCP/IP host table. The remote DNS server to use is specified by the Internet address parameter. For this scenario, all the AS/400 systems in the mycompany.com domain except for AS1 specify *REMOTE. Implementing Primary and Secondary DNS Servers 67 Internet addresses specifies up to three remote Domain Name Servers (DNS) to be used by this system. In our scenario, the primary name server IP address is 10.5.69.222 and the secondary name server IP address is 10.5.69.221. Figure 57. Configuring the AS/400 Resolver 3.2.8.2 Configuring Non-AS/400 Clients to Query the DNS Server All the clients in your network should have the DNS configuration updated to query the newly implemented primary and secondary DNS servers. How to make this configuration change depends on your clients DNS support; therefore, you do not provide instructions on how to update non-AS/400 client’s DNS configuration. Figure 58 shows the DNS configuration for a Windows 95 client. Often some information in the host table and *LOCAL can keep systems operational to some degree even if the system cannot get to the remote name server. If something happens to the interfaces or the servers, applications can hang, trying to contact the servers and never get to the host table if *REMOTE is selected. For instance, if local host information is in the host table, local mail can still be delivered if *LOCAL is selected. Tip Change TCP/IP Domain (CHGTCPDMN) Type choices, press Enter. Host name . . . . . . . . . . . 'AS1' Domain name . . . . . . . . . . 'mycompany.com' Host name search priority . . . *LOCAL *REMOTE, *LOCAL, *SAME Internet address . . . . . . . '10.5.69.222' '10.5.69.221' If the DNS server specified first cannot be reached, the AS/400 system queries the next name server in the list. If the name server configured at the top of the list respond but sends back a negative response (in other words, the first name server does not know the answer), the AS/400 resolver queries the subsequent name servers in the list. Note 68 AS/400 TCP/IP DNS and DHCP Support Figure 58. 
Windows 95 Client DNS Configuration The DNS administrator should be aware that an answer from a secondary name server is considered "as good" as an answer from a primary name server; they are both called authoritative answers. Thus, to balance the name serving workload between AS1 and AS5, the DNS administrator can configure half of the clients in mycompany.com to list AS1’s IP address first in the client’s DNS server configuration and the other half of the clients to query AS5 first by entering AS5’s IP address (10.5.69.221) at the top of the list as shown in Figure 58. 3.3 Summary In this chapter, we took you step-by-step through the implementation of the primary DNS, starting with the migration of the AS/400 host table. We showed you how to use DNS server configuration through Operations Navigator, discussed the main files in the name server database, and showed you how to configure the AS/400 system as a mail server, including special considerations relative to DNS and mail. We also explained how to implement a secondary DNS to back up the primary name server. And we showed you how to verify that a name server is operational and functioning as expected. In addition, we discussed name server security considerations and how to configure the security features in the primary name server to control zone Implementing Primary and Secondary DNS Servers 69 transfers and access to the DNS based on subnet ID or client IP address. We also covered how to configure clients to query name servers. 70 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 71 Chapter 4. Migrating an NT Primary DNS to AS/400 System You may have already implemented a name server before the AS/400 system announced DNS server support on V4R2. If you are thinking of making your AS/400 system the primary DNS server, this chapter describes how to migrate an existing primary DNS on a non-AS/400 platform (NT in our scenario) to AS/400 DNS. This chapter also explains how to use the existing NT server as a secondary name server for backup and workload balancing purposes. 4.1 Migrating NT DNS Server Primary Domain Files The migration from a DNS server on a non-AS/400 platform (NT in our scenario) to the AS/400 DNS server is fairly simple provided the non-AS/400 platform DNS server supports DNS files in the format described by RFC 1035. This is the case for the NT 4.0 DNS server. Figure 59 shows an overview of how to migrate NT DNS files to the AS/400 DNS server. The first step is to bring the files from the NT server to the AS/400 IFS directory \QIBM\UserData\DNS. The second step is to run Import Domain Data. Finally, you may need to make some manual adjustments to the DNS server configuration on the AS/400 system. Migration should be between servers at the same level of BIND. The AS/400 system DNS support is BIND 4.9.3. Versions of BIND beyond 4.9.3 may have records not recognized by the AS/400 system BIND 4.9.3 server. Tip 72 AS/400 TCP/IP DNS and DHCP Support Figure 59. Migrating NT DNS Server to AS/400 DNS Server Overview 4.1.1 Scenario Objective The objectives of this scenario are to: 1. Show how to migrate an existing primary DNS on a non-AS/400 platform to an AS/400 DNS server. 2. Show how to configure the existing primary DNS as a secondary DNS to back up the new primary name server running on the AS/400 system. 4.2 Task Summary 1. Review the current DNS server configuration on the NT server. 2. Transfer the DNS db files from the NT server to the IFS on the AS/400 system. 3. Import the domain data. 4. 
Perform final configuration adjustments if necessary. 4.2.1 Reviewing Primary DNS Configuration on the NT Name Server To review the current DNS configuration on the NT server, use the following steps: 1. Select Administrative Tools (Common) from the Start pop-up menu. 2. Select DNS Manager. 3. Click + next to NTSERVER1 (NT server name). Migrating an NT Primary DNS to AS/400 System 73 4. Double-click mycompany.com. The primary domain forward mapping file is displayed on the right panel (Figure 60). You can display the content of the reverse mapping files *.in-addr.arpa as well. Figure 60. NT DNS Server db Files 5. Right-click NTSERVER1 and select Properties. 6. Click Forwarders. Take note of the forwarders configuration (Figure 61). Figure 61. NT DNS Fowarders Configuration 7. Select the Boot Method tab. Usually, the NT DNS server is configured to boot from the data contained in the registry, which means that the BOOT file cannot be migrated and you need to manually make adjustments to the AS/400 DNS server configuration. 4.2.2 Transferring DNS Files from the NT Server to the AS/400 System IFS The NT server DNS configuration files reside on the path \Winnt\system32\Dns. 74 AS/400 TCP/IP DNS and DHCP Support Note: Winnt is the directory where Windows NT is installed. In this scenario, the following DNS configuration files are in the NT server DNS directory: • mycompany.com.dns (primary domain forward mapping file) • 0.0.127.in-addr.arpa.dns (reverse mapping file for localhost) • 69.5.10.in-addr.arpa.dns (primary domain reverse mapping file) • 62.5.10.in-addr.arpa.dns (primary domain reverse mapping file) • BOOT (boot file) • Cache.dns (cache file) We need to copy the primary domain files to the IFS directory QIBM\UserData\OS400\DNS on the AS/400 system. We do not need to migrate the BOOT files since the BOOT data is in the Registry; therefore, we need to manually add the forwarders configuration after the DNS server migration. Figure 62, Figure 63, and Figure 64 show the content of the primary domain files on the NT DNS server in our scenario. Figure 62. Contents of mycompany.com.dns File on NT Server Migrating an NT Primary DNS to AS/400 System 75 Figure 63. Contents of 69.5.10.in-addr.arpa File on NT Server Figure 64. Contents of 62.5.10.in-addr.arpa File on NT Server Perform the following steps: 1. Copy the three primary domain files to the IFS directory \QIBM\UserData\OS400\DNS on the AS/400 server. mycompany.com.dns 69.5.10.in-addr.arpa.dns 62.5.10.in-addr.arpa.dns 2. Insert two or more spaces between Administrator.mycompany.com. and the parenthesis (in the SOA record in all the files you are migrating). 76 AS/400 TCP/IP DNS and DHCP Support 3. Replace the at (@) sign in the files by the domain name (mycompany.com.). Do not forget the trailing dot. Figure 65 shows the primary domain file from the NT DNS server prepared for the Import domain data function. Notice that the @ sign in the original file (see Figure 62 on page 74) is replaced by mycompany.com. and there are extra spaces between Administrator.mycompany.com. and (in the SOA record). Figure 65. mycompany.com.dns File from NT Server Prepared for "Import Domain Data" Function 4.2.3 Importing the Domain Data 1. Start Operations Navigator 2. Click + next to As1.mycompany.com. 3. Click + next to Network. 4. Click + next to Servers. 5. Click + next to OS/400. 6. Double-click DNS to start the DNS server configuration. 
The DNS server configuration wizard starts, assuming this is the first time you configure DNS server on this AS/400 system. 7. Click Next. 8. Click Next to bypass the Root servers window (there are no root servers in this scenario). 9. Select Primary domain server. Click Next. 10.Enter the primary domain name or accept the default if it is correct (Figure 66). Click Next. Migrating an NT Primary DNS to AS/400 System 77 Figure 66. Primary Domain Name 11.Click Add to add the local host name and IP address: Host name: localhost IP address: 127.0.0.1 Click OK. 12.Click Finish to exit the wizard. 13.The DNS server configuration created by the wizard is displayed at this point. Double-click Primary Domains to view the files that the wizard created (Figure 67). Figure 67. DNS Server Configuration Created by the DNS Configuration Wizard 14.The Import domain data function that we are running next tries to create a mycompany.com.db file. We, therefore, need to delete the mycompany.com.db file created by the wizard: 1. Right-click on mycompany.com. 2. Click Delete. 3. Click Yes to confirm the Delete. Do not delete the 0.0.127.in-addr.arpa file. 78 AS/400 TCP/IP DNS and DHCP Support 15.To import the NT server DNS configuration files, right-click Primary Domains and select Import domain data. Note: Import Domain Data flags in error "orphan" records (records not associated with a specific host name). CNAME resource records not associated with a host (with no corresponding A resource record) fall into this category. 16.Enter the name and path (or accept the default path) of the forward mapping primary domain file (Figure 68). Click OK. Figure 68. Importing Primary Domain Data 17.Repeat the previous step for all the primary domain files that you transferred from the NT server (69.5.10.in-addr.arpa. and 62.5.10.in-addr.arpa.). 4.2.4 Configure Forwarders Manually Since we are not migrating the BOOT file from the NT server, we need to manually add the forwarders configuration. Use the following steps: 1. From Operations Navigator DNS server configuration, right-click As1.mycompany.com and select Properties. 2. Click Forwarders tab. 3. Enter the IP address of the DNS server that acts as a forwarder. Verify that the box Contact only forwarders for off-site queries is checked (Figure 69). Figure 69. Adding the Firewall Secure Port IP Address to the Forwarders List Migrating an NT Primary DNS to AS/400 System 79 Note: Figure 69 shows the "slave" option enabled. The NT boot file does not have this box checked. The options forward-only directive is listed in the BOOT file since we enabled the check box Contact only forwarders for off-site queries. 4. Enable the domain files and start the DNS server. Figure 70 shows the BOOT file created in the AS/400 DNS server after the migration. Figure 70. Boot File Figure 71 shows the mycompany.com.db forward mapping file after the migration. Delete the NS record with the name of the old DNS server (ntserver1 in our scenario). Figure 71. mycompany.com.DB File After Migration 4.3 Configuring the NT DNS Server as a Secondary DNS Server After migrating the primary name server to the AS/400 system, it is a good idea to configure the NT server as a secondary server for backup purposes and workload balancing. In this section, we describe how to configure the NT server as a secondary DNS server for mycompany.com domain. 
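As a point of reference from the migration just completed: the Forwarders tab settings described in 4.2.4 are stored in the BOOT file shown in Figure 70 as two additional directives alongside the primary and cache entries, roughly as follows (the forwarder address is a placeholder for the firewall secure port IP address in your network):

forwarders  n.n.n.n        ; IP address of the DNS server acting as forwarder (site-specific)
options     forward-only   ; added because "Contact only forwarders for off-site queries" is checked

With forward-only in effect, the AS/400 name server does not contact off-site name servers directly; it relies on the configured forwarder for queries that it cannot answer from its own domain files.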
80 AS/400 TCP/IP DNS and DHCP Support 4.3.1 Deleting the Primary DNS Configuration Before configuring the NT server as a secondary server, you must delete the primary name server configuration. Use the following steps: 1. Stop the DNS server: Open the Control Panel, open Services, and select Microsoft DNS Server (Figure 72). Click Stop to stop the DNS server. Figure 72. Stop the NT DNS Using Service Window The Status should change to stopped. 2. Delete the DNS server configuration: 1. Select the Administrative Tools. 2. Select DNS manager. 3. Select mycompany.com in the Domain Name Service Manager window and right-click on it. 4. Select Delete Zone (Figure 73). 5. Click OK to confirm delete. Figure 73. Delete Primary Domain Configuration Files Repeat the previous steps to delete the other primary domain configuration files. Migrating an NT Primary DNS to AS/400 System 81 4.3.2 Configuring the Secondary Name Server To configure the NT server as a secondary name server for mycompany.com, use the following steps: 1. Select Administrative Tools. 2. Select Domain Name Services Manager. 3. Select NTSERVER1 and right-click on it. 4. Select New Zone...(Figure 74). Figure 74. Create the New Zone on the NT Server 5. Select Secondary in the Creating new zone for NTSERVER1 window (Figure 75). Click Next. Figure 75. Creating a Secondary Domain on the NT Server 6. Enter the domain name and the name of the file the secondary server must retrieve during a zone transfer (Figure 76). 82 AS/400 TCP/IP DNS and DHCP Support Figure 76. Specify Zone Name and Zone File 7. Specify the IP address of the primary name server, the AS/400 system AS1 in our scenario (Figure 77). Click Next. Figure 77. Specify Master Server’s IP Address 8. At the final confirmation dialog box, click Finish and start the secondary DNS server. 4.4 Summary In this chapter, we showed you how to migrate an existing primary DNS server running on an NT server to an AS/400 name server by migrating the DNS configuration files. We also explained how to configure the NT server as a secondary server. © Copyright IBM Corp. 1998 83 Chapter 5. Growing Your Domain: Creating Subdomains As your company grows or acquires new divisions, the need for grouping hosts by geographies or business units will arise. It also becomes very complex to administer the entire name space from a single point. This chapter explains how to create subdomains and, eventually, delegate administration from parents to children. 5.1 Scenario Overview The scenario used in this chapter builds upon the scenario used in Chapter 3, “Implementing Primary and Secondary DNS Servers” on page 25. If you remember from that chapter, a primary domain server was configured on AS1 and a secondary domain server was configured on AS5. The primary domain was defined as mycompany.com. In this chapter, we grow the network by adding an additional subnet of 10.1.1.0. The hosts on this subnet have a domain name of OTHERDOMAIN.mycompany.com. See Figure 78. Figure 78. Network of mycompany.com Domain and OTHERDOMAIN Subdomain In Chapter 3, we decided not to include the OTHERSERVER host in the mycompany.com domain when planning the host table migration. By excluding OTHERSERVER, we implied that this host belonged in a domain separate from the mycompany.com domain. For example, its domain is OTHERDOMAIN.com. The domain name space inverted tree for the scenario in Chapter 3 might look similar to the tree shown in Figure 79. 
Figure 79. The Structure of Chapter 3’s Scenario’s Name Space We want to introduce the subject of subdomains in this chapter. We assume that OTHERDOMAIN.com was purchased by mycompany.com and we now want it to be a subdomain of the mycompany.com domain. The structure of this chapter’s name space therefore looks different from Figure 79. See Figure 80 for the new structure of the name space that we use in this chapter. Figure 80. The Structure of This Chapter’s Scenario’s Name Space Figure 80 shows that the OTHERDOMAIN domain is a subdomain of mycompany.com. The absolute domain name of OTHERDOMAIN is OTHERDOMAIN.mycompany.com. (the trailing period indicates the absolute domain, not the end of the sentence). Another way of saying this is that OTHERDOMAIN is now part of mycompany.com. This raises a question: if this new subdomain is part of mycompany.com, how is the DNS server on AS1 configured to answer queries about the new hosts in OTHERDOMAIN.mycompany.com? There is more than one answer to this question. This chapter outlines two methods of including the hosts in OTHERDOMAIN.com under OTHERDOMAIN.mycompany.com. for the purposes of DNS. To understand the difference between the two methods of including OTHERDOMAIN.com in mycompany.com, you need to understand the concept of a zone of authority and the difference between a domain and a zone as described in Chapter 1, “Domain Name System Concepts and Overview” on page 3. In Method 1, we propose to maintain one zone of authority over all of mycompany.com, including OTHERDOMAIN.mycompany.com. This means that the AS1 name server is authoritative over all of mycompany.com. The grayed rectangle in Figure 81 represents the mycompany.com zone of authority. This zone of authority includes the OTHERDOMAIN.mycompany.com subdomain. Remember that the OTHERDOMAIN.mycompany.com subdomain is part of the mycompany.com domain. Figure 81. Method 1: One Zone of Authority for mycompany.com. Another way of saying this is that the AS1 name server is responsible for answering queries for every host within the grayed rectangle in Figure 81. Consequently, the DNS administrator for AS1 is responsible for maintaining the DNS configuration for any changes in the network that fall within that rectangle. But consider the situation in which the OTHERDOMAIN.mycompany.com subdomain starts growing and many hosts are added to it. What if AS1’s DNS administrator does not have time to configure the new OTHERDOMAIN.mycompany.com hosts on AS1 and instead wants the subdomain maintained on another DNS server by another DNS administrator? This is called delegating authority, which is described by Method 2. In Method 2, we propose to delegate the authority of the OTHERDOMAIN.mycompany.com subdomain out of mycompany.com’s zone of authority to create two zones of authority.
These two zones are represented by two separate grayed rectangles in Figure 82. The AS1 DNS server is authoritative over the mycompany.com zone and a new DNS server is authoritative over the OTHERDOMAIN zone as shown in Figure 82. Remember that the OTHERDOMAIN.mycompany.com subdomain is still part of the mycompany.com domain, even with a Method 2 configuration. So far in Chapter 3, “Implementing Primary and Secondary DNS Servers” on page 25 and this chapter, we have used the terms of primary domain, secondary domain, primary name server, secondary name server, and authoritative. Let’s review the definitions of these terms: • Primary name server - This server is the server that the hosts in the zone of authority are configured on. It is the server that the DNS administrator configures and maintains. When this server gives responses to queries from its primary domain files, the responses are called authoritative. • Secondary name server - This server has the same information on it as the primary name server. However, instead of getting its information directly from the DNS administrator configuring it, it gets its information from the primary server through zone transfers over the network. A secondary name server is used for two reasons: spreading the DNS query workload over more than one server and as a backup. When the secondary name server gives out a response to a query, the response is also called authoritative. In other words, an answer from a secondary name server is considered to be just as "good" as if the answer came from a primary name server. • Primary domain files - These files are the files configured on the primary name server. • Secondary domain backup files - These files contain information that was acquired from zone transfers from the primary name server. They exist on the secondary name server. These files only exist if you checked the box Save copies of master server data when configuring the secondary domain. This checkbox specifies whether you want to backup the domain data that this secondary server receives from the primary domain server. The advantage of backing up the domain data is that the secondary server can function even if the primary server is down. If you do not check this box, the zone transfer information only exists in cache. When the secondary server boots, it checks first to see if a backup exists. If it does, it automatically loads the backup file. It then contacts the primary server to see if the primary server has more recent data. If the data is more recent, then the secondary server loads that data from the primary server through a zone transfer. • Authoritative - A server that is considered to be authoritative for a domain is either the primary server for that domain or a secondary server for that domain. In Chapter 3, “Implementing Primary and Secondary DNS Servers” on page 25, both AS1 and AS5 name servers are authoritative for the mycompany.com domain. If another name server or a client queries either AS1 or AS5 for information in the mycompany.com domain, the response is considered to be authoritative. Can a name server that is not authoritative over a domain give a response to a client about that domain and have that response considered an authoritative response? The answer is yes. If the non-authoritative server does not know the answer and queries an authoritative name server on behalf of the client and then returns the answer Growing Your Domain: Creating Subdomains 87 to the client, this response is considered to be authoritative. 
The non-authoritative name server will cache this information. If a second client requests this same information from the non-authoritative name server (and this information is still in its cache), the name server gives the response to the client but now this same information is labeled non-authoritative. Why? Because the information in the response this second time came out of the name server’s cache. Another way of saying this is that a non-authoritative response at some point came out of a name server’s cache. Figure 82. Method 2: Two Zones of Authority: mycompany.com Zone and OTHERDOMAIN Zone 5.1.1 Scenario Objectives In this scenario, we have the following objectives: 1. Method 1: Create a subdomain OTHERDOMAIN.mycompany.com within the domain of mycompany.com, keeping authority within mycompany.com. Either the primary DNS on AS1 or the secondary DNS on AS5 responds to DNS queries. 2. Method 2: Delegate authority of the subdomain OTHERDOMAIN.mycompany.com to a child DNS server, which, in this scenario, is OTHERHOST. 3. Test Method 2 configuration with nslookup. 4. Explain how the network’s mail configuration may change with Method 2. 5. Explain how a DNS server will answer a forward mapping query for a multi-homed host (for example, OTHERSERVER). 5.1.2 Scenario Advantages In this scenario, we must consider the advantages of keeping centralized control (Method 1) and the advantages of delegating authority (Method 2). com mycompany.com otherdomain.mycompany.com Zone of authority for mycompany.com Zone of authority for otherdomain.mycompany.com 88 AS/400 TCP/IP DNS and DHCP Support Advantages of Keeping Authority Over Subdomain Otherdomain When a network is not too large, the simplest way to add a subdomain is to include it in the primary domain’s zone. This means that the primary domain’s administrator (mycompany.com’s DNS server administrator on AS1) is responsible for the DNS configuration of hosts in OTHERDOMAIN.mycompany.com. The same name server (AS1) answers DNS queries for hosts in both mycompany.com and its subdomain, OTHERDOMAIN.mycompany.com. Control over the DNS configuration of OTHERDOMAIN.mycompany.com remains in the hands of the same DNS administrator and the OTHERDOMAIN.mycompany.com DNS configuration remains on the same name server, AS1. Advantages of Delegating Authority of Otherdomain When a network becomes large and administering the DNS configuration of the zone becomes too much workload for one person, then delegating authority is the recommended technique to spread the administrative workload across more than one person and name server. 5.1.3 Scenario Disadvantages In this scenario, we must consider the disadvantages of keeping centralized control (method 1) and the disadvantages of delegating authority (method 2). Disadvantages of Keeping Authority Over Subdomain Otherdomain If the subdomain OTHERDOMAIN.mycompany.com becomes very large, it may not be practical to have one DNS administrator maintain the mycompany.com’s zone of authority, which, in this case, includes the subdomain of OTHERDOMAIN.mycompany.com. It is tempting to add a subdomain to the original primary domain and keep authority with the thought that if the administration gets to be too much workload for one person, then we delegate authority later. Although this is possible, keep in mind that the administration work of adding OTHERDOMAIN hosts to the primary DNS domain files must be repeated on another server at a future delegation time. 
In other words, there is no automated way to "port" OTHERDOMAIN.mycompany.com’s hosts from the parent DNS server to the child DNS server. Disadvantages of Delegating Authority of Otherdomain Delegating part of the zone of authority away means that another name server is required to become the primary DNS server for the OTHERDOMAIN.mycompany.com. In our scenario, an AS/400 system on the 10.1.1.0 network becomes the primary DNS server for OTHERDOMAIN.mycompany.com while AS1 continues to be the primary server for the remaining zone of authority of mycompany.com. If an additional host with DNS capability does not exist in the network, delegation of part of the mycompany.com’s zone of authority is not possible. Delegation also implies that there is another DNS administrator with the necessary skill to maintain the new zone of authority. A person with these skills must be available to take over the workload of administering the new zone of authority or a new person must be trained to perform this task. If the OTHERDOMAIN.mycompany.com subdomain is delegated and the primary domain files for OTHERDOMAIN.mycompany.com are located on another server Growing Your Domain: Creating Subdomains 89 (the child server) but maintained by the same DNS administrator, you have defeated the purpose of delegation. Delegating implies delegating the workload of administration. If the only purpose of adding a new server is to handle some of the DNS workload and to back up the primary, a secondary server should be used; this was explained in Chapter 3.2.6, “Creating a Secondary DNS Server” on page 57. Delegating authority requires you to maintain a system of internal roots and to understand a more complicated setup. 5.1.4 Scenario Network Configuration As you can see in Figure 83, the network is similar to the network in Chapter 3, “Implementing Primary and Secondary DNS Servers” on page 25 (see Figure 14 on page 27) with the exception that we added the subnet 10.1.1.0 with a subnet mask of 255.255.255.0. Hosts in this subnet belong to the domain OTHERDOMAIN, which is a subdomain of mycompany.com. Hosts in the 10.1.1.0 network have an absolute domain name of OTHERDOMAIN.mycompany.com. With Method 1 described in Section 5.1 on page 83, all hosts pictured in Figure 83 are configured in the AS1 primary name server. AS5 remains the secondary name server to AS1. With Method 2 described in Section 5.1 on page 83, the hosts located in the domain OTHERDOMAIN.mycompany.com are configured in a child DNS server called OTHERHOST (IP address 10.1.1.2). The majority of this chapter is devoted to the configuration steps necessary to implement Method 2. Figure 83. Detailed Network Diagram of mycompany.com domain 5.2 Task Summary The tasks required to complete this scenario do not include the TCP/IP configuration on the AS/400 system, nor does it include configuring the first DNS Router 10.1.1.0 mask:255.255.255.0 10.5.69.192 mask:255.255.255.192 10.5.62.0 mask:255.255.255.0 otherserver as5 as1 otherhost as2 p23gpb74 p23fym82 p23fzg16 otherprinter mycompany.com otherdomain.mycompany.com .221 .7 .211 .207 .222 .58 .187 .169 .9 .2 90 AS/400 TCP/IP DNS and DHCP Support server in the network. This scenario builds on what was already configured in Chapter 3. This chapter is divided into two major sections: Method 1 and Method 2. As we see in the planning section, Method 1 versus Method 2 is really an either/or situation. A DNS administrator must choose which one of the two methods to use. 
For clarity, we divide the task summary into two main sections: one for Method 1 and one for Method 2. Most of this chapter is devoted to the configuration for Method 2 because it is more complicated. Overall Task Summary • Planning to subdomain: This section describes the subdomain two methods as they pertain to DNS configuration. This section also covers the consequences of choosing Method 1 and later changing to Method 2. • Method 1: 1. Configure the AS1 primary name server. 2. Configure the AS5 secondary name server. • Method 2: 1. Configure AS1 as internal root server. 2. Remove the configuration for Method 1 from the AS1 primary name server. 3. Configure AS1 to delegate the OTHERDOMAIN.mycompany.com subdomain. 4. Configure AS1 to delegate the 1.1.10.in-addr.arpa subdomain. 5. Configure OTHERHOST to be authoritative for the primary domains OTHERDOMAIN.mycompany.com and 1.1.10.in-addr.arpa. 6. Configure the internal root server for OTHERHOST to be AS1. 7. Reconfigure hosts located in the OTHERDOMAIN.mycompany.com subdomain. 8. Verify the configuration for Method 2 with nslookup. 9. Configure changes to the AS5 secondary name server. 10.Configure OTHERHOST to be the OTHERDOMAIN.mycompany.com’s mail server. 11.Configure changes to OTHERHOST DNS server. 12.Configure changes to OTHERHOST TCP/IP and SMTP configuration. 13.Configure changes to AS1 (mycompany.com’s mail server) TCP/IP configuration. 14.Explain Round Robin/Address Sorting. 5.3 Planning to Subdomain When adding a subdomain within a domain, you need to decide as early as possible whether the subdomain will be maintained in the zone of authority of the primary domain or if the authority of this subdomain should be delegated and an additional zone of authority created. Growing Your Domain: Creating Subdomains 91 5.3.1 Defining the Zone of Authority The importance of planning when getting ready to subdomain is best explained by an example. Let’s imagine that the OTHERDOMAIN subdomain has just been added to the domain of mycompany.com. And for the sake of this example, let’s imagine that the OTHERDOMAIN.mycompany.com subdomain contains 100 hosts. With Method 1 where the subdomain OTHERDOMAIN.mycompany.com is included in the mycompany.com’s zone of authority, the DNS administrator needs to add an A record for each of the 100 hosts. This is added into the forward mapping file of mycompany.com on the primary server AS1. See Figure 84. Figure 84. Method 1’s Network Diagram Showing mycompany.com’s Zone of Authority Later in time, the DNS administrator decides to delegate the authority of the OTHERDOMAIN.mycompany.com subdomain to another server for a second DNS administrator to maintain (earlier in the chapter, we described this as Method 2). This requires two zones of authority. The first zone of authority is mycompany.com and does not include any hosts belonging in the OTHERDOMAIN.mycompany.com subdomain. The second zone of authority is the OTHERDOMAIN.mycompany.com’s zone of authority. This zone contains all the hosts in the OTHERDOMAIN.mycompany.com’s subdomain. See Figure 85. Router 10.1.1.0 mask:255.255.255.0 10.5.69.192 mask:255.255.255.192 10.5.62.0 mask:255.255.255.0 otherserver as5 as1 otherhost as2 p23gpb74 p23fym82 p23fzg16 otherprinter .221 .7 .211 .207 .222 .58 .187 .169 .9 .2 .205 NTserver1 p23thkpl .204 mycompa 92 AS/400 TCP/IP DNS and DHCP Support Figure 85. OTHERDOMAIN Subdomain as a Second Zone of Authority To go from Method 1 to Method 2, you need to follow these steps: 1. 
The parent DNS server (AS1) must be configured to be aware of the child DNS server. We describe how to do this later in the chapter. 2. The new child server that now contains the primary domain file for the OTHERDOMAIN.mycompany.com subdomain needs to have all 100 A records added for the 100 hosts within OTHERDOMAIN.mycompany.com subdomain. Also, if OTHERDOMAIN.mycompany.com has a separate mail server other than the mail server in mycompany.com, an MX record must be added. 3. The parent server needs to have those 100 A records for hosts belonging to the OTHERDOMAIN.mycompany.com subdomain deleted. The work involved in step 3 can be avoided by deciding to use Method 2 from the beginning. Method 2 requires an additional server and an additional DNS administrator. If these requirements are satisfied, consider configuring for Method 2 while you are adding subdomains. It is less work to go directly to Method 2 when adding a subdomain than it is to configure Method 1 and later reconfigure for Method 2. 5.4 Method 1: Adding a Subdomain and Maintaining Authority In this section, we discuss how to add hosts in OTHERDOMAIN.mycompany.com to the mycompany.com primary DNS file on the primary DNS system AS1. The absolute domain name of these hosts is OTHERDOMAIN.mycompany.com. The configuration steps for Method 1 consist of simply adding new hosts to the primary DNS configuration except that when we specify the domain name of the host, it is OTHERDOMAIN.mycompany.com. As you can see in Figure 83 on page 89, the OTHERDOMAIN.mycompany.com subdomain consists of three hosts: OTHERSERVER, OTHERPRINTER, and OTHERHOST. We must add these three hosts to the primary domain of mycompany.com, and to the Router 10.1.1.0 mask:255.255.255.0 10.5.69.192 mask:255.255.255.192 10.5.62.0 mask:255.255.255.0 otherserver as5 as1 otherhost as2 p23gpb74 p23fym82 p23fzg16 otherprinter mycompany.com otherdomain.mycompany.com .221 .7 .211 .207 .222 .58 .187 .169 .9 .2 .205 NTserver1 p23thkpl .204 mycompany.com Growing Your Domain: Creating Subdomains 93 appropriate in-addr.arpa primary domain on the primary name server, AS1. In this scenario, these three hosts reside on a new network of 10.1.1.0 (subnet mask of 255.255.255.0) so we must also add a new primary domain of 1.1.10.in-addr.arpa on AS1. In Method 1, we perform most configuration steps on the primary name server AS1. As in Chapter 3, “Implementing Primary and Secondary DNS Servers” on page 25, the secondary name server is AS5. The AS5 secondary server requires some configuration change that we cover in Section 5.4.2 on page 95. 5.4.1 Configure AS1 Primary Name Server The configuration steps to add the new subdomain hosts to the primary server AS1 are as follows: 1. Start Operations Navigator AS1’s DNS server configuration. 2. On the General tab of mycompany.com’s Properties page, make sure the Create and delete reverse mappings by default field is checked. This causes the 1.1.10.in-addr.arpa primary domain file to be automatically created and the PTR record to be automatically added in the 1.1.10.in-addr.arpa primary domain file when we manually add new hosts in the mycompany.com primary domain file. 3. Click OK. 4. Right click mycompany.com primary domain file. 5. Click New Host. 6. Click Add. 7. Enter a host name of OTHERSERVER.OTHERDOMAIN.mycompany.com (remember to include the trailing dot after com). 8. Enter OTHERSERVER’s first IP address of 10.1.1.7 9. Click OK. 
10.Highlight the host otherserver.otherdomain.mycompany.com (right panel under Contents of DNS server - AS1.mycompany.com), right click on it, and select Properties. The General tab is displayed. We need to perform steps 9 through 13 for the host OTHERSERVER because it has two IP addresses, one in the 10.5.69.192 network and one in the 10.1.1.0 network. See Figure 83 on page 89. We want the AS1 name server to be aware of both OTHERSERVER’s IP addresses. 11.Click Add. 12.Enter the host OTHERSERVER’s second IP address: 10.5.69.207. 13.Click OK. Steps 9 through 13 are not necessary for the two remaining hosts in the OTHERDOMAIN.com. subdomain because each of those hosts only has one IP address. 14.Click OK to finish adding the new host. Note that the reverse mapping file of 1.1.10.in-addr.arpa has been automatically created. 15.Repeat steps 4 through 8 to add hosts OTHERHOST and OTHERPRINTER. Do not perform steps 9 through 13; click OK a second time to finish adding each new host. 16.Right click the 1.1.10.in-addr.arpa file. 17.Click Enable. 18.If the AS1 name server is already started, then click on the Update Server smart icon to restart the name server to pick up the changes. Or, if the name server is stopped, close the DNS configuration window to save the configuration. Right click DNS and click Start to start the DNS server if it is not already started. 94 AS/400 TCP/IP DNS and DHCP Support Figure 86, Figure 87, and Figure 88 show the contents of mycompany.com primary domain file, the 1.1.10.in-addr.arpa primary domain file, and the 69.5.10.in-addr.arpa primary domain file. Steps 1 through 18 changed the contents of these three files from what they were after doing the configuration steps in Chapter 3, “Implementing Primary and Secondary DNS Servers” on page 25. Figure 86. mycompany.com Domain File After Adding OTHERDOMAIN.mycompany.com Hosts Figure 87. 1.1.10.in-addr.arpa Domain File After Adding OTHERDOMAIN.mycompany.com Hosts Growing Your Domain: Creating Subdomains 95 Figure 88. 69.5.10.in-addr.arpa Domain File After Adding OTHERDOMAIN.mycompany.com Hosts 5.4.2 Configure the Secondary Name Server As5 The secondary name server must reflect the changes made to the primary domain files on the primary name server AS1. mycompany.com Domain File In Section 5.4.1, “Configure AS1 Primary Name Server” on page 93, we added hosts to the primary forward mapping domain file mycompany.com. In Section 3.2.6, “Creating a Secondary DNS Server” on page 57, we configured the secondary forward mapping domain file mycompany.com on the secondary name server AS5. Since mycompany.com is already configured on the secondary name server the changes we made to mycompany.com on AS1 are picked up by AS5 in the next zone transfer. New 1.1.10.in-addr.arpa File In Section 5.4.1, “Configure AS1 Primary Name Server” on page 93, a new reverse mapping primary domain file is automatically created for us on AS1: the 1.1.10.in-addr.arpa file. This is because the new hosts we added to mycompany.com are located on a network that is new to the AS1 DNS server. Since this is a new primary domain file, we need to configure this domain file as a secondary domain file on the secondary name server AS5 so this new primary domain can be zone transferred. The steps on how to add a new secondary domain are covered in Section 3.2.6.1, “Configuring the Secondary Server AS5” on page 57. 
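The file that AS5 picks up in that zone transfer is small. A hedged sketch of the 1.1.10.in-addr.arpa primary domain file as it is created on AS1 (the contents summarized by Figure 87) follows; the serial number, contact name, and all timer values except the three-hour refresh are assumptions:

   1.1.10.in-addr.arpa.  IN  SOA  as1.mycompany.com. administrator.mycompany.com. (
                                  1        ; serial
                                  10800    ; refresh - the default 3 hours
                                  3600     ; retry
                                  604800   ; expire
                                  86400 )  ; minimum TTL
   1.1.10.in-addr.arpa.    IN  NS   as1.mycompany.com.
   2.1.1.10.in-addr.arpa.  IN  PTR  otherhost.otherdomain.mycompany.com.
   7.1.1.10.in-addr.arpa.  IN  PTR  otherserver.otherdomain.mycompany.com.
   9.1.1.10.in-addr.arpa.  IN  PTR  otherprinter.otherdomain.mycompany.com.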
Now we must configure a new secondary domain on AS5, 1.1.10.in-addr.arpa, and the IP address of the master name server remains the IP address of AS1, 10.5.69.222. Refreshing the Secondary Name Server As5 With the default secondary server refresh interval of three hours (see Section 3.2.6.4, “Controlling Zone Transfer Frequency” on page 60 for an explanation of this timer), the AS5 secondary name server picks up the changes to AS1 within three hours, assuming that the primary name server AS1 is started. If the DNS administrator determines that three hours is too long to wait for the refresh of the secondary name server files, a zone transfer can be forced by running the Update Server function or stopping and starting the AS5 secondary name server. Tip 96 AS/400 TCP/IP DNS and DHCP Support 5.5 Method 2: Adding a Subdomain and Delegating Authority This section discusses how to group hosts in OTHERDOMAIN.mycompany.com into a separate primary domain file and how to administer the subdomain on a different DNS server called a child server. By doing this, you are creating a second zone of authority that includes OTHERDOMAIN.mycompany.com. The original zone of authority is mycompany.com, but, unlike Method 1, in this section, the mycompany.com’s zone of authority does not include the OTHERDOMAIN.mycompany.com subdomain. However, keep in mind that OTHERDOMAIN.mycompany.com is still a subdomain of mycompany.com regardless of what method you use. 5.5.1 Configuring AS1 as Internal Root In this method, you are creating an independent name space that contains information about your company. The assumption is that the mycompany.com domain is growing into multiple subdomain and zones with multiple name servers authoritative for each zone. The first step is to establish an internal root name server or internal root. The internal roots contain delegation to your main forward mapping domains and in-add.arpa domains. Internal roots allow your internal name servers to find each other. An internal root delegates to any internal domain. In method 2, the parent name server (AS1), delegates OTHERDOMAIN.mycompany.com to the child name server, otherhost.OTHERDOMAIN.mycompany.com. Therefore, the internal root only needs to delegate to the parent server. If you have multiple subdomains and multiple zones administered by multiple name servers that are not in a parent/child relationship, the internal root delegates directly to any domain that you administer. See Chapter 15 in DNS and BIND by Albitz & Liu for more information. Any internal name server can run an internal root and be an authoritative name server for other zones. In this scenario, AS1 runs the internal root and, at the same time, is the primary DNS server for mycompany.com. Perform the following steps on AS1 to configure the internal root name server: 1. From Operations Navigator DNS Server Configuration right click Primary Domains. 2. Select New Primary Domains. 3. Select General tab. 4. In the Domain name field enter a dot (.). You can use internal root name servers if you don’t need to make the Internet name space available to your users. Internal root name servers allow name servers all over your company to locate and query each other. However, creating internal roots creates an independent name space for your company and are recommended for large networks, with no Internet connectivity. Tip Growing Your Domain: Creating Subdomains 97 5. Leave the box Create and delete reverse mappings by default unchecked. 6. Select the Secondary Name Servers tab. 7. 
Add the internal domain names and corresponding host name of the name server authoritative for each internal domain. In this scenario, you only need to add the parent domains. The internal root must delegate also the in-addr.arpa domains. Add the internal domains under AS1 zone of authority as shown in Figure 89. Click OK. Figure 89. Internal Root Delegating Internal Domains 8. Right click on the internal root primary domain represented by a dot ("."). 9. Select New Host. 10.Enter the host name and IP address of the parent name server: as1.mycompany.com 10.5.69.222 Click OK. Figure 90 shows the parent name server configuration in the internal root. Remember, here you are not configuring secondary DNS servers; you are actually delegating internal subdomains to internal name servers. The Secondary Name Serves tab is the Operations Navigator interface to add NS records to the DNS configuration file. Tip 98 AS/400 TCP/IP DNS and DHCP Support Figure 90. Internal Name Server in Internal Root 5.5.2 Removing Subdomain Configuration from the Parent Server AS1 If you initially used Method 1 to create subdomain OTHERDOMAIN.mycompany.com and maintain authority on AS1’s DNS server, you need to reverse the configuration steps outlined in Method 1 from AS1’s DNS primary domain files: mycompany.com, 1.1.10.in-addr.arpa and 69.5.10.in-addr.arpa files. Because we delegate authority of the OTHERDOMAIN.mycompany.com’s subdomain to the child server OTHERHOST, the host records for hosts in the OTHERDOMAIN.mycompany.com subdomain no longer belong on AS1’s DNS server (the parent) but belong on OTHERHOST’s DNS server (the child). Perform the following steps on AS1, the parent server: 1. Start AS1’s DNS server configuration in Operations Navigator. 2. Double-click Primary Domains. 3. Click on mycompany.com primary domain file to highlight it. The list of hosts configured within this file appear in the right window. 4. Right click mycompany.com. 5. Click Properties. 6. Confirm that Create and delete reverse mapping by default is enabled; check it. 7. Click OK. 8. Double-click on the mycompany.com primary domain file. 9. Right click OTHERHOST.OTHERDOMAIN.mycompany.com. 10.Click Delete to remove this host from the mycompany.com primary domain file. See Figure 91. Growing Your Domain: Creating Subdomains 99 Figure 91. Deleting OTHERHOST host From mycompany.com Primary Domain File 11.Click on Yes to confirm the delete operation. The host OTHERHOST is removed from the mycompany.com file and the PTR record in the 1.1.10.in-addr.arpa file for OTHERHOST host is automatically deleted. 12.Repeat steps 9 through 11 to delete the remaining OTHERDOMAIN hosts contained in the mycompany.com primary domain file: OTHERSERVER and OTHERPRINTER. 13.Notice that the 1.1.10.in-addr.arpa is empty but the file still exists. If you click on this primary domain file to display its contents, you find that it now contains no records. However, since all the hosts residing on the 10.1.1.0 network are part of the OTHERDOMAIN.mycompany.com subdomain, we do not want this primary domain file on the parent server, AS1. Right click on 1.1.10.in-addr.arpa and click on Delete followed by Yes to confirm the delete.We are not finished with the AS1 configuration for Method 2 so do not close the DNS window just yet. Continue on to the next section. 5.5.3 Delegating the Subdomain on the Parent Server AS1 Perform the following steps on AS1 also, which is the primary DNS server for mycompany.com domain. We also refer to this name server as the parent server. 
14.Right click the mycompany.com file and select Properties. On the General tab, disable the Create and delete reverse mappings by default. You do this by making sure the X is not in the check box. Do not click OK just yet. Note: The previous step is important. We are creating an A record for the child server in the subsequent steps and we do not want the corresponding in-addr.arpa file to be created on the parent server AS1. 15.Click the Secondary Name Servers tab (you still should be in mycompany.com’s Properties page). 16.Click Add. 17.Enter the domain of the child server: otherdomain.mycompany.com. (remember to include the trailing period after com). Do not use the default that is displayed in the window, which is mycompany.com. See Figure 92. 100 AS/400 TCP/IP DNS and DHCP Support Figure 92. Entering the Domain Name of the Child Server OTHERHOST 18.Click OK. 19.Enter the host name of the child server: OTHERHOST. See Figure 93. Figure 93. Entering Host Name OTHERHOST for mycompany.com’s NS Record 20.Click OK. 21.Click OK. You have just created an NS record for OTHERHOST.OTHERDOMAIN.mycompany.com on the parent server AS1. 22.Right click the mycompany.com primary domain file again. 23.Click New Host. The label on the Secondary Name Servers tab under mycompany.com’s properties can be a bit misleading. Entering domain and host names under this tab creates an NS record in the QIBM/UserData/OS400/DNS/mycompany.com.DB file. NS records are necessary for identifying other name servers. It is better if this tab is just labeled "Name Servers". In this step, we create an NS record for the purpose of delegating authority to a child DNS server, which is unrelated to the concept of a secondary name server. Tip Growing Your Domain: Creating Subdomains 101 24.Click Add. 25.Enter the Host Name of: OTHERHOST.OTHERDOMAIN.mycompany.com. (do not forget the trailing dot after com). 26.Enter the IP address of OTHERHOST: 10.1.1.2. 27.Click OK. 28.Click OK. You have just created an A record in AS1’s mycompany.com primary domain file for the host OTHERHOST.OTHERDOMAIN.mycompany.com. 29.Right click mycompany.com primary domain and select Properties. On the General tab, enable the Create and delete reverse mappings by default. Do this step only if you finished delegating subdomains. In this scenario, the OTHERDOMAIN.mycompany.com subdomain is the only subdomain we are delegating. If there are other subdomains to delegate, the previous steps must be repeated for the additional subdomains. Since we finished delegating our one subdomain OTHERDOMAIN.mycompany.com, we complete this step by ensuring that the check box Create and delete reverse mappings by default is enabled. 30.The DNS configuration on AS1 is not finished yet. Do not close the DNS server configuration window yet. Continue with the next section. The NS and A record that we have just created together allow the parent server, in this case AS1, to query OTHERHOST child server on behalf of a client that needs to resolve a host name for a host contained in the OTHERDOMAIN.mycompany.com subdomain. Another way of saying this is that the NS and A record allow the DNS server on AS1 to "look down" the DNS name space tree to find the server (OTHERHOST in our case) that has authority for the subdomain OTHERDOMAIN.mycompany.com. Figure 94 shows the contents of the forward mapping primary domain file mycompany.com.DB located in the /QIBM/UserData/OS400/DNS directory on AS1, the parent server. The MX record was created in Section 3.2.3.3 on page 48. 
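The key records in that file can be sketched as follows. This is a hedged reconstruction: the serial number, timer values, SOA contact, and MX preference are assumptions, and the host A records carried over from Chapter 3 are abbreviated to a few examples.

   mycompany.com.    IN  SOA  as1.mycompany.com. administrator.mycompany.com. (
                              3 10800 3600 604800 86400 )
   mycompany.com.    IN  NS   as1.mycompany.com.
   mycompany.com.    IN  NS   as5.mycompany.com.
   *.mycompany.com.  IN  MX   10 as1.mycompany.com.     ; wildcard mail entry from Section 3.2.3.3

   ; delegation of the subdomain to the child name server
   otherdomain.mycompany.com.            IN  NS  otherhost.otherdomain.mycompany.com.
   ; glue A record so the child server can be reached
   otherhost.otherdomain.mycompany.com.  IN  A   10.1.1.2

   ; a few of the mycompany.com hosts configured in Chapter 3
   as1.mycompany.com.        IN  A  10.5.69.222
   as5.mycompany.com.        IN  A  10.5.69.221
   ntserver1.mycompany.com.  IN  A  10.5.69.205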
What is of interest in Figure 94 is the NS record for OTHERHOST.OTHERDOMAIN.mycompany.com and the A record for OTHERHOST. 102 AS/400 TCP/IP DNS and DHCP Support Figure 94. Contents of /QIBM/UserData/OS400/DNS/MYCOMPANY.COM.DB File on AS1 Figure 95 shows the boot file in the parent name server AS1. Notice that this server runs also the internal root name server (.). Figure 95. BOOT file /QIBM/UserData/OS400/DNS/BOOT File on AS1 5.5.4 Delegating the In-Addr.Arpa File on the Parent Server AS1 In this scenario, we decided that the 1.1.10.in-addr.arpa primary domain file will be maintained on child server OTHERHOST. To do this, we need to delegate the reverse mapping file to the child server also. The way to do this is to subnet the 1.1.10.in-addr.arpa file. First we explain the steps to do this on the parent server and give a further explanation about why we have to do this. Growing Your Domain: Creating Subdomains 103 Complete the following steps on the parent server AS1: 1. Right click on the label Primary Domains under the title of DNS Server-as1.mycompany.com. 2. Click on New Primary Domain. See Figure 96. Figure 96. Creating the Two-Byte Primary Domain FIle 1.10.in-addr.arpa on AS1 Under the General Tab’s Domain Name field, type in 1.10.in-addr.arpa. and leave the Create and delete reverse mappings by default check box unchecked. See Figure 97. It is important to notice that in this section, we configure a 2-byte in-addr arpa file, 1.10.in-addr.arpa on AS1. This is a different primary domain file than the 3-byte in-addr.arpa file, 1.1.10.in-addr.arpa file that is created on the child server OTHERHOST. Right now, this may be a little confusing so just be aware that 1.10.in-addr.arpa is a different primary domain from the 1.1.10.in-addr.arpa primary domain. This is similar in concept to saying that the mycompany.com primary domain is a different primary domain from the OTHERDOMAIN.mycompany.com that we configure later on in the child server OTHERHOST. Note 104 AS/400 TCP/IP DNS and DHCP Support Figure 97. Entering the Domain Name when Creating the 1.10.in-addr.arpa Primary Domain File 3. Click OK. The new primary domain file of 1.10.in-addr.arpa should be displayed in the list of primary domain files. 4. Right click the 1.10.in-addr.arpa primary domain file. 5. Click Properties. 6. Click the Secondary Name Server Tab. 7. Click Add. 8. In the domain name field, enter the domain name of the three byte in-addr.arpa domain: 1.1.10.in-addr.arpa. (do not forget the trailing period after com). See Figure 98. Figure 98. Entering 1.1.10.in-addra.arpa’s Domain Name for Secondary Name Server in 1.10.in-addr.arpa File 9. Click OK. 10.In the host field, enter OTHERHOST.OTHERDOMAIN.mycompany.com. (Do not forget the trailing dot after com.) 11.Click OK. 12.Click OK. 13.Right click the 1.10.in-addr.arpa file. Growing Your Domain: Creating Subdomains 105 14.Click Enable to enable this primary domain. If a client sends a reverse lookup query to the parent server AS1 (for example, the client knows the IP address 10.1.1.9 and is querying AS1 for corresponding host name), the name server on AS1 uses the NS record we just created within 1.10.in-addr.arpa to find out that the authority for the 1.1.10.in-addr.arpa resides on the child server OTHERHOST. AS1 either tells the client to go query the OTHERHOST name server for the answer or AS1 will query the OTHERHOST DNS server on the client’s behalf. 
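Seen as resource records rather than Operations Navigator panels, the two-byte file holds nothing but the SOA record and the delegating NS record. A hedged sketch of 1.10.in-addr.arpa on AS1 (serial and timer values are assumptions):

   1.10.in-addr.arpa.    IN  SOA  as1.mycompany.com. administrator.mycompany.com. (
                                  1 10800 3600 604800 86400 )
   ; delegation: the child server is authoritative for the three-byte reverse domain
   1.1.10.in-addr.arpa.  IN  NS   otherhost.otherdomain.mycompany.com.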
Continuing on.....: We now need to add the OTHERSERVER host to the 69.5.10.in-addr.arpa primary domain file on the parent server AS1. Although the OTHERSERVER host is in the OTHERDOMAIN subdomain that the child server OTHERHOST is authoritative for, the parent server AS1 is authoritative for the entire 69.5.10.in-addr.arpa primary domain file. Because OTHERSERVER has an IP address in the 10.5.69.192 network, this host must be added into the 69.5.10.in-addr.arpa file on the parent server AS1. See Figure 83 on page 89 to review the network diagram and IP addresses. 15.Right click the 69.5.10.in-addr.arpa primary domain file. 16.Select New Host. 17.Click Add. 18.Enter host name: OTHERSERVER.OTHERDOMAIN.mycompany.com. (do not forget the trailing period after com). 19.In the same window, enter the IP address of 10.5.69.207. 20.Click OK. 21.Click OK. 22.Close the DNS server configuration window to save the configuration on the parent server, AS1. Figure 99 shows the contents of the 2-byte 1.10.in-addr.arpa primary domain file on AS1. This file is in the IFS directory /QIBM/UserData/OS400/DNS. Unlike the 3-byte in-addr.arpa files, this file does not contain any PTR records. The purpose of this file is to contain an NS record that points to the child name server OTHERHOST. The child name server OTHERHOST contains the 3-byte 1.1.10.in-addr.arpa primary domain file for the network 10.1.1.0 and that is the in-addr.arpa file that contains the PTR records for the hosts in this network. AS1’s 1.10.in-addr.arpa file is what the AS1 name server uses to "find" the name server (OTHERHOST) authoritative for the 1.1.10.in-addr.arpa domain. At this point, you may be wondering "what did we just do?" We created a new primary domain file but we did not add any hosts to it. If we click on the 1.10.in-addr.arpa file, we see that it is empty of any records. In steps 5 through 12, we created an NS record specifying the child server OTHERHOST for the 1.1.10.in-addr.arpa primary domain file. Essentially this is telling the parent server AS1 that the primary reverse mapping file of 1.1.10.in-addr.arpa exists on the child server OTHERHOST. You can think of the NS record in this case as a pointer to help the parent server, AS1, find its way down the DNS name space tree to the server that is authoritative for the 1.1.10.in-addr.arpa domain, which is the child server OTHERHOST. In conclusion, we just delegated the 1.1.10.in-addr.arpa domain to the child name server OTHERHOST. Tip 106 AS/400 TCP/IP DNS and DHCP Support Figure 99. Contents of 1.10.in-addr.arpa Domain File on AS1 Summary of Method 2 Configuration The configuration of the delegation of domain OTHERDOMAIN and the 1.1.10.in-addr.arpa domain on the parent server AS1 is complete. We still need to perform additional configuration changes on the child server,OTHERHOST, which we cover in the next section. Figure 100 shows the list of primary domain files (in the left column of the Operations Navigator display) residing on AS1 parent server. Notice the 1.10.in-addr.arpa primary domain. Figure 100 also shows the contents of the mycompany.com primary domain file (in the right column of the Operations Navigator display). Figure 100. Contents of AS1’s mycompany.com Primary Domain File 5.5.5 Configuring the Child Server Otherhost We are still configuring our name servers for Method 2. In the last few sections, all the configuration took place on the parent server AS1. 
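As a checkpoint, the BOOT file on AS1 now lists the internal root plus every domain the parent is still primary for. This is a hedged sketch: the physical file names, and the presence of the Chapter 3 reverse domains and the loopback domain, are assumptions.

   primary   .                       ROOT.DB                   ; internal root configured in 5.5.1
   primary   mycompany.com           mycompany.com.DB
   primary   69.5.10.in-addr.arpa    69.5.10.in-addr.arpa.DB
   primary   62.5.10.in-addr.arpa    62.5.10.in-addr.arpa.DB
   primary   1.10.in-addr.arpa       1.10.in-addr.arpa.DB      ; delegates 1.1.10.in-addr.arpa to OTHERHOST
   primary   0.0.127.in-addr.arpa    0.0.127.in-addr.arpa.DB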
We are finished configuring Method 2 on AS1 but we still need to configure the child server, OTHERHOST, to be authoritative for the OTHERDOMAIN.mycompany.com domain. The steps to do this are similar to the configuration steps completed in Method 1 of this chapter except that these steps are performed on the child server OTHERHOST instead of the AS1’s name server as we did in Method 1. If DNS has never been configured on OTHERHOST, the DNS configuration wizard starts. Perform the following steps on OTHERHOST, the child name server. We are assuming a DNS configuration does not exist on this AS/400 system yet. 1. Open Operations Navigator and double-click on OTHERHOST.OTHERDOMAIN.mycompany.com. 2. Click + next to Network. Growing Your Domain: Creating Subdomains 107 3. Click + next to Servers. 4. Click + next to OS400. 5. Double-click DNS under the Server Name column. 6. The DNS server configuration wizard starts the first time you enter DNS server configuration. Click Next. 7. Click Next to bypass the Root Server window. We are configuring a root server on this child name server but we do this later in this chapter. For now, bypass the root server window. 8. Enter the name of the primary domain we are configuring: OTHERDOMAIN.mycompany.com. 9. Click Next. 10.The next window gives you an opportunity to add IP address and host names. We need to configure only the local host for the loopback address of 127.0.0.1 with the wizard. Click Add. 11.Enter localhost as the host name. (We really want you to type the word localhost here; do not type the TCP/IP host name of the AS/400 system you are configuring on.) 12.Enter 127.0.0.1 as the IP address. 13.Click OK. 14.Click Finish. Add the remaining IP addresses and host names outside of the Wizard DNS server configuration. 15. The DNS server configuration with wizard is complete. Continue with Step 16. Note: If this is not the first time you are configuring DNS with Operations Navigator, the DNS server configuration wizard is not shown. In this case, use the following unnumbered steps and then proceed with Step 16. Use Operations Navigator to go into the OTHERHOST’s DNS server configuration. • Right click on the Primary Domain under DNS server-OTHERHOST.mycompany.com. • Click New Primary Domain. • In the window that follows, ensure that the Domain field is:OTHERDOMAIN.mycompany.com. Assuming that this AS/400 system is configured with that domain name (you can check with the AS/400 CFGTCP command, press Enter, take Option 12, and check the Domain name listed). This should be the default domain in this window. If it is not, type in OTHERDOMAIN.mycompany.com for the Domain Name. • In the same window, check the box Create and delete reverse mappings by default to enable this option. • Click OK. • Right click OTHERDOMAIN.mycompany.com. • Click New Host. • Click Add. • Enter the host name localhost. (We really want you to type in the word localhost here.) • Enter the IP address 127.0.0.1. • Click on OK. 16.Up to this point, we created (either by using the DNS server configuration wizard or the preceding unnumbered steps) the primary domain forward mapping file of OTHERDOMAIN.mycompany.com on OTHERHOST’s name server. Right click on OTHERDOMAIN.mycompany.com. 17.Select Properties. 108 AS/400 TCP/IP DNS and DHCP Support 18.Ensure that Create and Delete reverse mappings by default is enabled. 19.Click OK. 20.Right click the OTHERDOMAIN.mycompany.com primary domain file. 21.Click Enable. 
The following steps add the new hosts (A records) to the OTHERDOMAIN.mycompany.com primary domain file: 22.Right click on the OTHERDOMAIN.mycompany.com primary domain file. 23.Click New Host. 24.Click Add. 25.Enter the host name otherhost.otherdomain.mycompany.com. (do not forget the trailing period after com). 26.On the same window, enter the IP address: 10.1.1.2. See Figure 101. Figure 101. Adding OTHERHOST Host to the OTHERDOMAIN Primary Domain File on Child Server 27.Click OK. 28.Click OK. Notice that the 1.1.10.in-addr.arpa primary domain file is automatically created. At this point, it contains one PTR record for the OTHERHOST host. By the time we finish with these steps, this file will also contain the PTR records for OTHERSERVER host and OTHERPRINTER host. 29.Repeat steps 22 through 28 to add New Host OTHERPRINTER with an IP address of 10.1.1.9. 30.Repeat steps 22 through 28 again to add New Host OTHERSERVER with an IP address of 10.1.1.7. 31.Right click on the 1.1.10.in-addr.arpa primary domain file. 32.Select Enable to enable the 1.1.10.in-addr.arpa file. 33.Double-click OTHERHOST.OTHERDOMAIN.mycompany.com file. The contents of this file is displayed in the right window. 34.Right click host OTHERSERVER. 35.Select Properties. 36.Click Add. 37.Enter OTHERSERVER’s second IP address: 10.5.69.207. See Figure 83 on page 89 if you need to refresh your memory on the network diagram and the IP addressing used. Growing Your Domain: Creating Subdomains 109 38.Right click 69.5.10.in-addr.arpa file on the child server OTHERHOST. 39.Click Delete. 40.Click Yes to confirm the Delete. The OTHERDOMAIN.mycompany.com primary domain should look similar to Figure 102. 41.We need to perform one more configuration step on the child server, OTHERHOST, in the next section so do not close the DNS window to save the configuration just yet. Figure 102. OTHERDOMAIN.mycompany.com Primary Domain on the Child Server OTHERHOST We almost finished the child server configuration. The next section explains how to configure the child server to resolve queries for domains above itself in the DNS name space tree: the root server configuration. 5.5.6 Internal Root Server Configuration on the Child Server In Section 5.5.3, “Delegating the Subdomain on the Parent Server AS1” on page 99, we configured an A record and an NS record for the child server on the parent server. This was so that if the parent server receives a query for information that the child server is authoritative for, the parent server knows how to "look down" the DNS name space tree to find the server authoritative for the primary domain the query was for. The root server configuration on the child server allows the child server to look to the top of the name space tree when it receives a query for information in the mycompany.com zone of authority that is above it in the tree structure. Section Because we added a second IP address to OTHERSERVER and because Create and delete reverse mappings by default was enabled, a new primary domain file was created: 69.5.10.in-addr.arpa. It contains one PTR record for OTHERSERVER. If you remember the DNS configuration on the parent server AS1, you remember that a 69.5.10.in-addr.arpa primary domain file exists on that server already. The same PRIMARY domain file cannot exist on two different servers. If the same file exists on another server, it must exist under a secondary domain configuration (and the 69.5.10.in-addr.arpa backup file does exist on the secondary name server AS5 as configured in Chapter 3). 
The primary domain file of 69.5.10.in-addr.arpa on the child server OTHERHOST must be deleted on the child server OTHERHOST. Note 110 AS/400 TCP/IP DNS and DHCP Support 5.5.1, “Configuring AS1 as Internal Root” on page 96, shows how to configure the internal root name server for the internal name space. Perform the following steps on the child server OTHERHOST: 1. Start the DNS server configuration in Operations Navigator and right click on DNS Server-OTHERHOST.OTHERDOMAIN.mycompany.com. 2. Click Properties. 3. Select the Root Servers tab. 4. Click Add. 5. Enter the host name of the parent server: AS1.mycompany.com. (do not forget the trailing period after the com). 6. On the same window, enter the IP address: 10.5.69.222. See Figure 103. Figure 103. Adding a Root Name Server to the Child Server OTHERHOST 7. Click OK. 8. Click OK. 9. Close the DNS window to save the configuration. 5.5.7 Reconfigure the Otherdomain Clients If the hosts in OTHERDOMAIN.mycompany.com are configured with a domain name of OTHERDOMAIN.com prior to changing OTHERDOMAIN to be a subdomain of mycompany.com, you must change the hosts’ domain names. Hosts located within OTHERDOMAIN.mycompany.com no longer should have a domain name of OTHERDOMAIN.com but should now have a domain name of OTHERDOMAIN.mycompany.com. Updating the hosts’ domain names to be OTHERDOMAIN.mycompany.com is necessary for both Method 1 and Method 2 because in both methods, OTHERDOMAIN became a subdomain of mycompany.com. The difference between Method 1 and Method 2 involved the zone of authorities. For example, in our scenario OTHERHOST is an AS/400 system. To change its domain name use the AS/400 command: CFGTCP Growing Your Domain: Creating Subdomains 111 Then use Option 12. Retype the domain name: OTHERDOMAIN.mycompany.com. Several AS/400 TCP/IP applications (SMTP included) require the local AS/400 system to be listed in the TCP/IP host table with both the short name (that is, OTHERHOST) and the long name (OTHERHOST.OTHERDOMAIN.mycompany.com). Do not forget to use the AS/400 command: CFGTCP Then use Option 10 to update the long name of OTHERHOST.OTHERDOMAIN.com to OTHERHOST.OTHERDOMAIN.mycompany.com. 5.5.8 Verifying DNS with Name Server Lookup The AS/400 Name Server Lookup (nslookup) queries a name server through a "green screen" interactive mode. In this section, we use nslookup to query the parent server AS1 and the child server OTHERHOST to verify these name servers are answering the queries and giving the responses we expect. To enter the nslookup interactive mode, enter the following command on the AS1 command line: call pgm(qdns/qtoblkup) The result of this command is shown in Figure 104: Figure 104. Entering Nslookup Interactive Mode Figure 104 shows nslookup displaying the default server of AS1, which indicates that the server that nslookup queries by default is the AS1 name server. The default type of query that nslookup uses is an A record query (that is, have the host name need an IP address). We can query for the IP address of the host NTserver1. By entering NTserver1 on the command line, we are querying AS1 name server’s A record for NTserver1. Nslookup also adds a default domain name to the NTserver1 host name that we entered. The default domain name is mycompany.com, which is correct for the host NTserver1. The result is shown in Figure 105. You can see that name server AS1 supplied nslookup with NTserver1’s IP address of 10.5.69.205. Press ENTER to end terminal session. 
Default Server: as1.mycompany.com Address: 10.5.69.222 > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window 112 AS/400 TCP/IP DNS and DHCP Support Figure 105. Result of Nslookup Query for ntserver1 Next let’s query AS1 for an A record that the child server OTHERHOST is authoritative for, OTHERPRINTER. To do this, we enter OTHERPRINTER on the command line. Whoops.... nslookup caught us using the incorrect domain name for OTHERPRINTER. The result we get is "No A records found". This is because the query was made for OTHERPRINTER.mycompany.com, which is not correct. Next we enter the correct query: otherprinter.OTHERDOMAIN.mycompany.com and get the answer we expected: 10.1.1.9. Both queries and their results are shown in Figure 106. The AS1 name server responds with OTHERPRINTER’s IP address of 10.1.1.9. But AS1 is not authoritative for OTHERPRINTER’s domain. How did AS1 know the answer? AS1 queried the child server OTHERHOST for OTHERPRINTER’s IP address to respond to nslookup’s query. AS1 cached the answer. The next time AS1 is queried for the IP address of OTHERPRINTER, it will get the answer from its cache (assuming the cache has not timed out or the name server has not been stopped and started on AS1) and does not bother OTHERHOST. Figure 106. Querying for OTHERPRINTER and OTHERPRINTER.OTHERDOMAIN.mycompany.com > Press ENTER to end terminal session. Default Server: as1.mycompany.com Address: 10.5.69.222 > > ntserver1 Server: as1.mycompany.com Address: 10.5.69.222 Name: ntserver1.mycompany.com Address: 10.5.69.205 > ===> > > otherprinter Server: as1.mycompany.com Address: 10.5.69.222 *** No address (A) records available for otherprinter > > otherprinter.otherdomain.mycompany.com Server: as1.mycompany.com Address: 10.5.69.222 Name: otherprinter.otherdomain.mycompany.com Address: 10.1.1.9 > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window Growing Your Domain: Creating Subdomains 113 To submit a reverse mapping query, which is to supply the IP address and ask the name server to respond with the host name, we need to change to a query type of PTR within nslookup. First, we issue the nslookup command: set type=ptr Second, we issue the command 10.5.69.221 to query the AS1 name server for 10.5.69.221’s host name. The result is shown in Figure 107. Let’s explain, line-by-line, what nslookup is displaying on the window: • > 10.5.69.221 - This is our query. What is to the right of the > symbol is what the user typed. • Server: as1.mycompany.com - This is the name server that nslookup queried. • Address: 10.5.69.222 - This is the IP address of the name server. • 221.69.5.10.in-addr.arpa name = as5.mycompany.com - This is the answer to our query answer. nslookup lists the absolute in-addr.arpa domain name of AS5, along with the fully qualified host name of AS5.mycompany.com. • 69.5.10.in-addr.arpa nameserver=as1.mycompany.com - This is the name of the primary domain file that the answer was located in. This line also contains the name of the name server authoritative for the primary domain file. • as1.mycompany.com internet address=10.5.69.222. - This is the fully-qualified name and IP address of the name server authoritative for the domain file the answer was located in. Figure 107. Nslookup Reverse Lookup Query for 10.5.59.221 Let’s now use nslookup to query the AS1 name server for a reverse lookup for an IP address that the child name server OTHERHOST is authoritative for. 
Remember the primary domain file of 1.1.10.in-addr.arpa resides on the child server OTHERHOST. This time, AS1 gives a non-authoritative answer along with where you can find the authoritative answer. See Figure 108. > > set type=ptr > > 10.5.69.221 Server: as1.mycompany.com Address: 10.5.69.222 221.69.5.10.in-addr.arpa name = as5.mycompany.com 69.5.10.in-addr.arpa nameserver = as1.mycompany.com as1.mycompany.com internet address = 10.5.69.222 > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window 114 AS/400 TCP/IP DNS and DHCP Support Figure 108. Using Nslookup to Query AS1 for 10.1.1.7 Host Name What does it mean to get a non-authoritative answer? It means that at some time earlier, the AS1 name server got the reverse mapping information for 10.1.1.7 from the child server OTHERHOST and cached it. When we just now used nslookup to query for 10.1.1.7, the AS1 name server supplied us with the answer from its cache. Note that AS1 tells us where to find the authoritative answer, which is the child server, OTHERHOST. So let’s query the child server OTHERHOST for an authoritative answer for the reverse lookup of 10.1.1.7. We can do this right from the AS1’s session, but we need to tell Nslookup that we want to switch name servers. We switch to querying OTHERHOST by issuing the command: server otherhost.otherdomain.mycompany.com. Set the query type to PTR by entering the command: set type=ptr Then, enter the command: 10.1.1.7 The results of all three commands are shown in Figure 109. > > 10.1.1.7 Server: as1.mycompany.com Address: 10.5.69.222 Non-authoritative answer: 7.1.1.10.in-addr.arpa name = otherserver.otherdomain.mycompany.com Authoritative answers can be found from: 1.1.10.in-addr.arpa nameserver = otherhost.otherdomain.mycompany.com otherhost.otherdomain.mycompany.com internet address = 10.1.1.2 > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window Growing Your Domain: Creating Subdomains 115 Figure 109. Querying OTHERHOST Using Nslookup on AS1 This time nslookup has an authoritative answer: name = otherserver.otherdomain.mycompany.com. This is the answer we are looking for. The 10.1.1.7 IP address belongs to OTHERSERVER. The answer came from the primary domain file of 1.1.10.in-addr.arpa that resides on the child server OTHERHOST.OTHERDOMAIN.mycompany.com. This is the name server that nslookup was using; therefore, the answer had to be authoritative. 5.5.9 Method 2’s Secondary Name Server AS5 In the past few sections, we made some major configuration changes from the name server configuration in Chapter 3’s scenario. Thus, we should be asking ourselves: what about a backup? We now have two name servers (the parent and the child) that contain primary domain files. How should we back them up? Backing Up the Parent Server AS1 In this scenario, using Method 2 to delegate the zone of authority to the child name server OTHERHOST, we made some changes to the parent name server’s primary domain files that already existed from Chapter 3’s scenario. We also created one new primary domain file on AS1: 1.10.in-addr.arpa. Except for the new primary domain file of 1.10.in-addr.arpa, the primary domain files on AS1 are already backed up on AS5, which is the secondary name server for AS1. AS5 was configured as a secondary name server in Section 3.2.6 on page 57. So the question comes up: do we need to create a new secondary domain file on the secondary name server AS5 for 1.10.in-addr.arpa? 
This domain file only contains an SOA record and one NS record. See Figure 99 on page 106 to review the contents of the 1.10.in-addr.arpa file. The answer to the question depends on how the child name server OTHERHOST is backed up.

> > server otherhost.otherdomain.mycompany.com Default Server: otherhost.otherdomain.mycompany.com Address: 10.1.1.2 > > set type=ptr > > 10.1.1.7 Server: otherhost.otherdomain.mycompany.com Address: 10.1.1.2 7.1.1.10.in-addr.arpa name = otherserver.otherdomain.mycompany.com 1.1.10.in-addr.arpa nameserver = otherhost.otherdomain.mycompany.com > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window

If the secondary name server to OTHERHOST is a different name server than AS5, a secondary domain file for 1.10.in-addr.arpa must be added on AS5, which is AS1's secondary name server. If the secondary name server to the primary name server OTHERHOST is also AS5, then the secondary domain file of 1.1.10.in-addr.arpa exists on AS5 and there is no need for a secondary domain file 1.10.in-addr.arpa on AS5.

Backing Up the Child Name Server Otherhost
We configured two primary domain files on the child name server OTHERHOST: 1.1.10.in-addr.arpa and OTHERDOMAIN.mycompany.com. These primary domain files should be backed up on a secondary name server. We can use the name server on AS5 as the secondary name server to the child name server OTHERHOST as well as the parent name server AS1. Whether the secondary name server is AS5 or it is a different AS/400 name server, the steps are the same:
1. On the secondary name server, create two new secondary domains: OTHERDOMAIN.mycompany.com and 1.1.10.in-addr.arpa are the names of the domains. The IP address of the master server is the IP address of the child name server, 10.1.1.2.
2. On the child name server OTHERHOST, use the Properties' Secondary Name Server tab on each of the two primary domains to specify the domain name of the domain to be backed up and the fully-qualified host name of the secondary name server. This step was explained in detail in Section 3.2.6.2 on page 58.
When reviewing the OTHERDOMAIN.mycompany.com.DB file in Figure 113 on page 121, an NS record exists for AS5. This indicates that we did decide to back up the child name server OTHERHOST with the secondary name server AS5.

5.6 Mail Between Otherdomain.mycompany.com and Mycompany.com
Now that we have separated the domain OTHERDOMAIN.mycompany.com into a second zone of authority from the mycompany.com domain, the question is what configuration changes we need to make, if any, for the purposes of delivering mail. The answer to that question depends on whether AS1 remains the only mail server in the network or if a second mail server will handle the mail for users in the OTHERDOMAIN.mycompany.com domain.

5.6.1 AS1 as the Only Mail Server in the Network
The two zones of authority, mycompany.com and OTHERDOMAIN.mycompany.com, are separated for the purposes of DNS only. The DNS server on AS1 is authoritative over the mycompany.com domain and the DNS server on OTHERHOST is authoritative over the OTHERDOMAIN.mycompany.com domain. All hosts on the three networks, 10.5.69.192, 10.5.62.0, and 10.1.1.0, have TCP/IP connectivity to each other and the mail administrator certainly may choose to have the AS1 mail server as the only mail server in the network.
In other words, users in the OTHERDOMAIN.mycompany.com can have their POP3 client configured to have both their SMTP outgoing mail server and POP incoming mail server AS1.mycompany.com. Mail to this user is addressed to User@mycompany.com Growing Your Domain: Creating Subdomains 117 and it arrives on the user’s POP3 mailbox on AS1 as we explained in Chapter 3.2.3, “Configuring AS1 as a Mail Server” on page 44. For the preceding example where AS1 remains the only mail server in the network, there is no need for any further DNS server configuration changes on the parent name server AS1 nor on the child name server OTHERHOST beyond what was already configured for Chapter 3.2.3, “Configuring AS1 as a Mail Server” on page 44. The users in the OTHERDOMAIN.mycompany.com can have their PC configured to use OTHERHOST’s IP address as its DNS server. The name server on OTHERHOST resolves the mail server’s IP address, which is AS1’s IP address. In conclusion, even with the OTHERDOMAIN domain’s zone of authority delegated to the child server OTHERHOST, the mail configuration outlined in Chapter 3.2.3, “Configuring AS1 as a Mail Server” on page 44 is the only mail configuration necessary if AS1 remains the only mail server for the network. All POP3 users need a POP3 directory entry on AS1 with their SMTP domain name equal to AS1.mycompany.com. Mail can be addressed to the user using either: SMTP_UserID@mycompany.com or SMTP_UserId@AS1.mycompany.com Both of the previous "mail to:" addresses allow mail to be delivered to the AS1 mail server. 5.6.2 Otherhost as the Mail Server for Otherdomain.mycompany.com If OTHERDOMAIN.mycompany.com has its own mail server, and assuming that, for example, OTHERHOST is that mail server, we need to make configuration changes to OTHERHOST and also to the mail server AS1. 5.6.2.1 Mail Configuration on Otherhost Mail Server The mail configuration required on OTHERHOST is similar to the mail configuration outlined in Chapter 3.2.3, “Configuring AS1 as a Mail Server” on page 44. 1. The POP3 user needs a user profile and POP3 directory entry on the OTHERHOST AS/400 system. It is important when configuring the SMTP domain name for each user to make it equal to OTHERHOST.OTHERDOMAIN.mycompany.com. See Chapter 3.2.3.1, “Configuring a POP3 User on AS1” on page 45 for details. 2. We need to update the DNS configuration on the OTHERHOST child name server. The following steps are similar to what was outlined in Chapter 3.2.3.3, “Configuring the Domain’s Mail Server in the DNS Server” on page 48. However, this time, we perform the following DNS configuration steps outlined on the child server OTHERHOST. • Configure a wildcard MX entry. Start OTHERHOST’s DNS server configuration in Operations Navigator. 1. Double-click Primary Domains. 2. Right click on OTHERDOMAIN.mycompany.com. 3. Right click on Properties. 4. Click on Mail tab. 5. Click Add. 6. Take the default domain presented in the window: *.OTHERDOMAIN.mycompany.com. 118 AS/400 TCP/IP DNS and DHCP Support 7. Click OK. 8. Enter the host name of the mail server: OTHERHOST.OTHERDOMAIN.mycompany.com. (Use the fully-qualified host and domain name. Also, do not forget the trailing dot after com.) 9. Click OK. 10.Click OK. • Verify that the mail server is listed as a host in the primary domain file of OTHERDOMAIN.mycompany.com. It is, in this case. We added OTHERHOST as a new host in OTHERDOMAIN.mycompany.com earlier in this chapter. • Verify that the mail server is listed as a host in the 1.1.10.in-addr.arpa primary domain file. 
It is; it was automatically added when we added OTHERHOST as a new host earlier in this chapter. • Close the DNS window to save the configuration. Or, if the DNS server is already active, click on the Update Server smart icon to reload the DNS configuration changes while the DNS server continues to be active. 3. Check the TCP/IP and SMTP Configuration on OTHERHOST. We need to verify the TCP/IP domain information and SMTP attributes. These steps are similar to the steps outlined in Chapter 3.2.3.4, “Verifying the TCP/IP and SMTP Configuration on AS1” on page 50 except that this time, the steps are performed on OTHERHOST. • Use the CFGTCP command, option 12 to verify that the Search First is *LOCAL. • On the same display, verify that the Internet address is 10.1.1.2, which is the IP address of the local AS/400 OTHERHOST itself. The SMTP server running on OTHERHOST first searches the local host table to determine where to deliver the mail and if it does not find what it needs in the local host table on the AS/400 system, it queries the DNS at IP address 10.1.1.2 (OTHERHOST). • Use the CFGTCP command, option 10 (on OTHERHOST system) to make sure that the host OTHERHOST is listed, has an Internet address of 10.1.1.2, and has an entry with a host name of OTHERDOMAIN.mycompany.com (do not put a period at the end of com when using CFGTCP option 10). • Use CFGTCP option 10 (on OTHERHOST system) to make sure that the host AS1 is listed with an IP address of 10.5.69.222 and has an entry with a host name of mycompany.com. • Use the CHGSMTPA command followed by F4 (to prompt) and page down once to check on the Mail Router and Firewall parameters. Mail Router should be equal to *NONE. The Firewall parameter should be set to *NO. 4. Add host name to the AS1 system’s local host table. • On the AS1 system, issue CFGTCP option 10. Make sure the host OTHERHOST is listed with an Internet address of 10.1.1.2 and has an entry with a host name of OTHERDOMAIN.mycompany.com. 5. Make sure that SMTP, POP, and QMSF jobs are active on both mail servers OTHERHOST and AS1. • Check the status of the SMTP server jobs, the POP server jobs, and the QMSF job or jobs. All jobs should be running under the QSYSWRK subsystem. Then use the WRKACTJOB SBS(QSYSWRK) command to view all active jobs in the QSYSWRK subsystem. You may have to page down several times to find the jobs we are looking for. • The SMTP server is active if the following four jobs are active: QTSMTPBRCL QTSMTPBRSR Growing Your Domain: Creating Subdomains 119 QTSMTPCLNT QTSMTPSRVR If these jobs are not active, you can start the SMTP server with the command: STRTCPSVR *SMTP. If the previous jobs are active and you made changes to CFGTCP option 10, option 12, or with the CHGSMTPA command, you must end and start the SMTP applications for the changes to take effect. • The POP server is active if at least one job is active with the name QTPOPxxxxx, where xxxxx is any number. For example: QTPOP00595 QTPOP00597 QTPOP00637 QTPOP00653 If at least one QTPOPxxxxx is not active, then start the POP server with the command: STRTCPSVR *POP. • Make sure at least one QMSF job is active under the QSYSWRK subsystem. The job name is QMSF. If a QMSF job is not active, then start one with the command: STRMSF. Starting the DNS server on OTHERHOST: • If the DNS server is not started on OTHERHOST, the following AS/400 command will start it: STRTCPSVR *DNS • Once the DNS server is active, a job named QTOBDNS starts in the QSYSWRK subsystem. 
Its job log should be the first place the DNS administrator looks if there is a problem. The job runs under the user profile QTCP. If the QTOBDNS job has ended, use the WRKSPLF QTCP command and then F18 to go to the bottom of the list of job logs. This should help you locate the QTOBDNS spooled job log.

TIP: As a child name server queries a parent name server (or vice versa) to get responses for queries for which it is not authoritative, it caches the responses. Thus, if the child name server is queried for the same information again, the name server gives the answer out of its cache instead of querying the parent server again. However, if the child server is stopped and started, it clears its cache. Therefore, it is beneficial to try to minimize the number of times the name server is stopped and started once it is in production. If configuration changes need to be made to the name server while it is active, the recommended method is to use Operations Navigator and the Update Server smart icon to load the configuration changes while the name server is still active. This allows the name server to keep its cache rich with information.

5.7 The Child Server Otherhost's IFS Directory Files
Let's display the contents of the files within the child server OTHERHOST's IFS directory that were created or altered by Method 2's configuration. First, let's list the files in /QIBM/UserData/OS400/DNS that we do not display in this section.
• The 0.0.127.in-addr.arpa file only contains the localhost PTR record.
• The ATTRIBUTES file is automatically created when the DNS OS/400 option is installed on OTHERHOST. Use the AS/400 CHGDNSA command to change the contents of this file, or use Operations Navigator DNS server configuration -> DNS server Properties -> General tab -> Automatically start server check box and Debug level.
• The /TMP directory in the DNS directory is also automatically created when the DNS OS/400 option is installed. Do not delete this directory. It is used when a zone transfer to this name server builds a secondary domain file.
The OTHERHOST's /QIBM/UserData/OS400/DNS/BOOT file is displayed in Figure 110. Figure 110. Otherhost's BOOT File
The OTHERHOST's /QIBM/UserData/OS400/DNS/CACHE file is displayed in Figure 111. Figure 111. Otherhost's CACHE File
The OTHERHOST's /QIBM/UserData/OS400/DNS/1.1.10.in-addr.arpa.DB file is displayed in Figure 112. Figure 112. Otherhost's 1.1.10.in-addr.arpa.DB File
The OTHERHOST's /QIBM/UserData/OS400/DNS/otherdomain.mycompany.com.DB file is displayed in Figure 113. Figure 113. Otherhost's Otherdomain.mycompany.com.DB File

5.8 Round Robin/Address Sorting
So far, this chapter addressed DNS server configuration issues dealing with adding a subdomain and delegating authority to a child server. A customer's network can grow in another direction also: adding IP addresses to existing hosts or adding an entire additional IP network.

TIP: Notice that in Figure 112 and Figure 113, the SOA record in both the 1.1.10.in-addr.arpa and otherdomain.mycompany.com files includes the e-mail address of the DNS administrator responsible for these primary domain files: postmaster.otherhost.otherdomain.mycompany.com. This is the default. If this default is used, then the AS/400 system OTHERHOST should have a user profile and a POP3 directory entry of postmaster configured.
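To make the tip concrete, the administrator's mail address is carried in the second name field of the SOA record at the top of each primary domain file, with the @ of the mail address written as a dot. A sketch of such a record using this scenario's names (the serial and timer values here are illustrative placeholders, not the values generated on OTHERHOST):

   1.1.10.in-addr.arpa.  IN  SOA  otherhost.otherdomain.mycompany.com.  postmaster.otherhost.otherdomain.mycompany.com. (
                              3        ; serial
                              10800    ; refresh
                              3600     ; retry
                              604800   ; expire
                              86400 )  ; minimum TTL

Mail sent to the administrator therefore goes to postmaster@otherhost.otherdomain.mycompany.com, which is why the postmaster user profile and POP3 directory entry are recommended.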
Let us look at an example of adding a second LAN adapter to host AS2, which is in the domain mycompany.com. This second LAN adapter is located on the 10.5.62.0 network. AS2 has a second IP address of 10.5.62.217. Figure 114 shows the network diagram including AS2 with two IP addresses. Figure 114. Host AS2 with Two IP Addresses (diagram labels: Router; networks 10.1.1.0, 10.5.69.192, 10.5.62.0; hosts as1 .222, as2 .211 and .217, otherhost .2)

AS2 needs to have its TCP/IP configuration updated and a line description created for this new LAN adapter. See the TCP/IP Configuration and Reference, SC41-5420-01, for details concerning this configuration. The name server that is authoritative for the domain mycompany.com, AS1, must be updated with the new second IP address for AS2. The steps to do this have already been covered in Chapter 5.5.5, "Configuring the Child Server Otherhost" on page 106.

The two-IP-address-for-one-host configuration brings up a common question: If a host has two IP addresses, which one will the DNS server respond with when a client issues an A record query for this host? The answer is that the DNS server gives out both IP addresses in the A record query response. However, which IP address is listed first depends on a few things that we attempt to outline here. If a client is sending an A record query for AS2 to the AS1 name server, the response the client gets from AS1 depends on the client's location in the network. The DNS server attempts to order AS2's IP addresses with the first IP address closest to the client. This concept is called address sorting. For example, assume that a client located in the network 10.5.62.0 queries the AS1 name server for the IP addresses of AS2. The name server responds with both of AS2's IP addresses, but 10.5.62.217 is listed first. Since clients typically attempt to use the IP address listed first in a DNS A record response, listing this address first is most efficient for the client. If another client sends the same query to the AS1 name server but the client resides in the 10.5.69.192 network, the name server sends the query response to the client with the 10.5.69.211 address listed first.

But what if the client is not in one of these two networks? What if the client is located on the 10.1.1.0 network, which is remote to AS2? Again, the name server responds to a query listing both IP addresses, but it alternates which IP address is listed first. Alternating IP addresses in response to A type queries is called round robin. We can use nslookup on the child server OTHERHOST to show how round robin works. Let's query for the IP addresses of AS2. The reason we are using nslookup on OTHERHOST is that OTHERHOST is located in the 10.1.1.0 network, which is the one network AS2 is not connected to. Running nslookup on OTHERHOST allows us to use OTHERHOST as a client querying the AS1 name server (in other words, we are not using OTHERHOST's name serving capabilities here, we are just using nslookup on OTHERHOST). When the AS1 name server is queried, it sees the client making the query as having the IP address of 10.1.1.2. If we send multiple A record queries for host AS2, we should see an example of round robin.
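As background for the queries that follow, the forward-mapping data that AS1 serves for this multihomed host amounts to two address records for the same name. A sketch in standard domain file notation, using the addresses from this scenario (the actual mycompany.com domain file on AS1 contains many other records as well):

   as2.mycompany.com.    IN    A    10.5.69.211
   as2.mycompany.com.    IN    A    10.5.62.217

Round robin and address sorting only change the order in which these two records are returned, never which records are returned.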
From the OTHERHOST AS400 command line, enter nslookup interactive mode by issuing the following command: call pgm(qdns/qtoblkup) Since we entered nslookup on the OTHERHOST AS/400 system, the preceding command defaults to the OTHERHOST DNS server as the server we are querying. We want to query the DNS server on AS1 so we change nslookup to use the DNS server AS1 by issuing the nslookup command: server 10.5.69.222 (10.5.69.222 is the IP address of AS1). The as2.mycompany.com command yields an answer with two IP addresses; 10.5.69.211 is given first, and 10.5.62.217 is given second. Issuing the command a second time produces an answer with the order of the two IP addresses reversed. Figure 115 on page 124 shows both queries and both responses. The DNS server AS1 alternates the order that the two IP addresses are given in the query response. This is because the query is coming from a source that is not located on the 10.5.69.192 network nor on the 10.5.62.0. The source, in this case, is the AS/400 OTHERHOST, which is located on the 10.1.1.0 network. 124 AS/400 TCP/IP DNS and DHCP Support Figure 115. =Nslookup Receiving Round Robined IP Addresses for As2 5.9 Summary In this chapter, we discussed adding a subdomain OTHERDOMAIN to the domain of mycompany.com. We discussed two methods of handling OTHERDOMAIN with respect to the DNS server configuration: Method 1 demonstrated keeping OTHERDOMAIN subdomain within mycompany.com’s zone of authority. This means that the DNS server primary for mycompany.com (AS1) is configured with the additional OTHERDOMAIN hosts. Therefore, Method 1 really is a continuation of the configuration methods discussed in Chapter 3, “Implementing Primary and Secondary DNS Servers” on page 25 except that the new hosts (from the OTHERDOMAIN subdomain) have longer domain names. Method 2 presented delegating the OTHERDOMAIN subdomain to a child DNS server. The parent DNS server AS1 is then configured with the hosts residing in mycompany.com but without the hosts residing in OTHERDOMAIN. The child DNS server OTHERHOST is then configured with hosts located in the OTHERDOMAIN subdomain. This chapter also explained verifying the DNS configuration with nslookup, how the mail configuration may change with Method 2, and the concept of round robin/address sorting. > > as2.mycompany.com. Server: [10.5.69.222] Address: 10.5.69.222 Name: as2.mycompany.com Addresses: 10.5.69.211, 10.5.62.217 > > as2.mycompany.com. Server: [10.5.69.222] Address: 10.5.69.222 Name: as2.mycompany.com Addresses: 10.5.62.217, 10.5.69.211 > © Copyright IBM Corp. 1998 125 Chapter 6. Split DNS: Hiding Your Internal DNS Behind a Firewall Now that you have finished the job of implementing DNS, you want to connect the network to the Internet! Your DNS databases contain information too valuable to be exposed to millions of potential hackers. This chapter explains how to configure your DNS to forward requests to the firewall name server when it cannot resolve names outside your company’s domain. We also explore mail exchange between your company’s internal mail servers and Internet mail servers. 6.1 Scenario 1: Configuring Your DNS to Forward Queries to a Firewall When connecting your internal network to the Internet, there are many resources that you should protect; your internal domain name server is one of them. Your DNS contains valuable company information that you do not want to expose to hackers. 
In the first scenario of this chapter, we discuss how to configure an internal DNS to forward queries to a firewall DNS to resolve external names. The internal name server and internal (secure) mail server run on the same AS/400 system where the Integrated PC Server running the IBM for AS/400 Firewall product is installed. The firewall DNS server has authority for the public server in the company’s public domain (mycompany.com) and receives all external access requests for the public server for host name resolution. The firewall DNS is also responsible for resolving Internet host names in response to queries from the internal DNS. When internal users want to browse an Internet Web site specifying its name in the URL, the internal client queries the internal DNS, and it, in turn, forwards the query to the firewall DNS. Note: The above statement is true for browser clients accessing the firewall through SOCKS and also for stand-alone client applications. For client browsers accessing the firewall through PROXY, the PROXY server in the firewall performs the name resolution, not the client. This way, you make your corporate domain name space invisible to the outside world. Figure 116 provides an overview of how name resolution queries flow in this environment. 1. The resolver in the PC1 workstation sends a query to the name server configured in its TCP/IP configuration (AS1 in Figure 116). 2. If AS1 DNS finds the host name locally, it sends back the response. If the query is for an external host, the forwarders directive in the AS1 name server tells it to forward the query to the firewall DNS. 3. If the query is for a host that is in the firewall DNS database (primary or cache), the firewall responds immediately. If not, it forwards the query to the ISP DNS. 4. The ISP DNS obtains the answer (or negative response, if the host is not found) and returns it to the firewall DNS. 5. The firewall DNS returns the answer to the internal DNS server in AS1. 6. The internal DNS server sends the answer back to the PC client. 126 AS/400 TCP/IP DNS and DHCP Support Figure 116. AS/400 as Internal Name Server and Secure Mail Server Behind and Internet Firewall 6.1.1 Scenario Objectives In this scenario, our objectives are to: 1. Show how to configure the internal DNS to forward queries for external hosts to the firewall name server. 2. Show the relationship between the Firewall for AS/400 configuration and the TCP/IP and DNS configurations on the AS/400 system. internet ISP DNS ASE.MS.COM Internal DNS & Secure Mail Server AS1 internet private.mycompany.com 1 2 3 4 5 6 Firewall with External DNS PC1 The flow diagram in Figure 116 is also valid for internal users querying the company’s public servers in front of the firewall. This is true as long as the company’s public domain name (mycompany.com in our scenario) and the company’s internal domain name (private.mycompany.com in our scenario) are not the same, as is the case in scenario 1. If the company’s internal and public domain names are the same (for example, mycompany.com for both internal and public), you must configure address records for the public hosts in front of the firewall in the internal DNS server for the internal name server to resolve the public hosts names. If you do not add A records for the public hosts in the internal DNS server configuration when an internal client queries, for example, WWW.mycompany.com, the query receives a negative response. The internal DNS server looks at its own data since it is authoritative for mycompany.com. 
If it does not find the WWW host in its own database, it does not forward the query to the firewall, but returns a negative response instead. Tip Split DNS: Hiding Your Internal DNS Behind a Firewall 127 3. Show how to change your current firewall configuration to take advantage of the internal DNS implementation of OS/400 V4R2 if your firewall is currently running without internal DNS. 4. Provide an overview of the AS/400 TCP/IP configuration, Firewall for AS/400 configuration, AS/400 DNS server configuration, AS/400 SMTP, and POP server configuration to help you get started in a similar environment. 6.1.2 Scenario Advantages The main advantages of this scenario are that: • It shows how easy it is to safely make the Internet name space available to your existing network by configuring your internal DNS to forward off-site queries to the DNS running in the firewall. • It shows how a single AS/400 system can provide DNS services to the secure network, house the Integrated PC Server where the firewall runs, be the secure mail server, and at the same time, be a reliable application server. 6.1.3 Scenario Disadvantages This scenario is simple and applies mainly to small networks. We discuss more complex environments in later scenarios. 6.1.4 Scenario Network Configuration Figure 117 shows the testing environment that we used for this scenario. Figure 117. Scenario 1 - Network Topology Router ISP DNS Router internet ms.com mycompany.com ms.com DSN WWW WWW private.mycompany.com H G B C A D Z Y V X T S W E F Router I J 10.5.62.0 .200 .1 10.5.69.0 .1 .211 192.168.7.0 .1 .2 .208 .11 .2 8.9.10.0 .1 DNS Server IPCS Firewall as1.private .mycompany.com 128 AS/400 TCP/IP DNS and DHCP Support The main characteristics of these scenarios are: • The name server running on the AS/400 system (as1.private.mycompany.com) provides name resolution services for hosts that are in the internal (secure) domain (private.mycompany.com). It provides authoritative name resolution for names in the internal domain, including the host name of the firewall on the secure interface . The forwarders list is used for name resolution for information not in the authoritative data or cache. • The firewall name server is responsible for resolving external (Internet) host names in response to requests from the internal name server. • The internal name server must be configured to forward queries to the firewall DNS. • The firewall name server contains only names that are visible from the Internet such as the public Web server, WWW.mycompany.com. The firewall DNS has authority for the public domain mycompany.com. • All inbound mail sent from the Internet to users in mycompany.com is forwarded by the firewall mail relay function to the secure mail server specified during the firewall configuration. In this scenario, the secure mail server runs on the AS/400 system where the firewall is installed (as1.mycompany.com). 6.2 Task Summary To implement this scenario, you need to perform the following tasks: 1. Verify the AS/400 TCP/IP configuration. 2. Verify the AS/400 mail configuration. 3. Verify the firewall configuration. 4. Change the internal DNS configuration to forward queries for external hosts to the firewall DNS. 5. Verify the clients configuration. 6.2.1 Verify the AS/400 TCP/IP Configuration on AS1 The following checklist shows the TCP/IP configuration options you need to verify. We assume that they are already configured in your environment. 
Terminology: There are three separate name servers in this scenario:
• Internal name server: the DNS server responsible for the company's private name space. It provides name services to hosts in the secure network. In this scenario, this DNS server is authoritative for private.mycompany.com and runs on as1.private.mycompany.com.
• Firewall name server: the DNS server responsible for the company's public name space. It is authoritative for mycompany.com, and runs on the firewall.
• External name server: we also call this name server the ISP's DNS server. It is the first name server in the Internet that the firewall DNS server queries for names outside the company's domain.

6.2.1.1 Verify the TCP/IP Interface Configuration
To check the configuration of the TCP/IP interface, do the following steps:
1. On an AS/400 command line, type: CFGTCP Press ENTER to display the Configure TCP (CFGTCP) menu.
2. Select option 1 (Work with TCP/IP interfaces) to see the Work with TCP/IP Interfaces display (Figure 118).
3. Locate your AS/400 LAN adapter (labeled G in Figure 117 on page 127). The LAN adapter is listed under the Line Description column. Figure 118. Work with TCP/IP Interfaces Display
4. Press F11 to view the status for the LAN adapter and verify that the status is active.
After you verify that the LAN adapter is active, you must verify that the AS/400 system host and secure domain names are configured.

6.2.1.2 Verifying the AS/400 System Host and Secure Domain Names
Before you install the firewall, ensure that you have configured a host and secure domain name for the home AS/400 system. To verify that the AS/400 system has a host and secure domain name, do the following steps:
1. On an AS/400 command line, type: CFGTCP Press ENTER to display the Configure TCP menu.
2. Select option 12 (Change TCP/IP domain) to see the Change TCP/IP Domain display (Figure 119).
3. Verify that the Local domain name, Local host name, and name server Internet address fields have the correct values for the secure network.

Work with TCP/IP Interfaces System: AS1 Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 9=Start 10=End Internet Subnet Line Line Opt Address Mask Description Type 10.5.69.211 255.255.255.192 AS1LAN *TRLAN

Note: If the TCP/IP interface for the LAN adapter is inactive, you must start the interface by using option 9 on the Work with TCP/IP Interfaces display (Figure 118 on page 129). Then, press F5 to refresh the display and verify that the interface has started.

Figure 119. Change Local Domain and Host Names and Name Server IP Address Display
Note: If you are using the host table in the AS/400 system to resolve any host name to complement the internal DNS, the Host name search priority must be *LOCAL. Specifying *LOCAL in this parameter causes the host table to be searched first, and then the internal DNS server is queried.

6.2.2 Verify the AS/400 Mail Configuration
The following checklist shows mail-related configuration options you need to verify. We assume they are already configured in your environment. To route mail for Internet users to the firewall, you must configure the SMTP attributes in the AS/400 system to point to the firewall as the mail router. Enter the name of the firewall in the Mail router field. This tells the SMTP server where to forward mail that it cannot deliver itself. You must enter *YES in the Firewall field. This tells the SMTP server that it is located behind a firewall.
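If you prefer to make the change without prompting, both values can be set in a single command. A sketch using this scenario's mail router name (prompt the command with F4, as described next, to confirm the exact keywords on your release):

   CHGSMTPA MAILROUTER('firewall.private.mycompany.com') FIREWALL(*YES)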
On an AS/400 command line, type: CHGSMTPA Press F4. Enter the correct values as shown in Figure 120 and press Enter. Change TCP/IP Domain (CHGTCPDMN) Type choices, press Enter. Host name . . . . . . . . . . . as1 Domain name . . . . . . . . . . private.mycompany.com Host name search priority . . . *REMOTE *REMOTE, *LOCAL, *SAME Internet address . . . . . . . 10.5.69.211 In this scenario, we assume that the secure network’s DNS server runs on the same AS/400 system where the firewall Integrated PC Server is installed. We call this name server the internal name server or internal DNS. Note Split DNS: Hiding Your Internal DNS Behind a Firewall 131 Figure 120. Simple Mail Transfer Protocol Attributes Start the SMTP server: STRTCPSVR SERVER(*SMTP) 6.2.2.1 Add Mail Users to the System Distribution Directory Add an entry in the system distribution directory for each mail user. Use the Work with Directory Entry (WRKDIRE) command and option 1, Add. Alternatively, you can use the Add Directory Entry (ADDDIRE) command. The following displays show only the relevant parameters (use option 2, Change of WRKDIRE, only to display the parameters you want to see). Figure 121. Directory Entry for Pop User - General Information To get to the next display, page down four times. Change SMTP Attributes (CHGSMTPA) Type choices, press Enter. Mail router . . . . . . . . . . 'firewall.private.company.com' Coded character set identifier 00819 1-65533, *SAME, *DFT Mapping tables: Outgoing EBCDIC/ASCII table . *CCSID Name, *SAME, *CCSID, *DFT Library . . . . . . . . . . Name, *LIBL, *CURLIB Incoming ASCII/EBCDIC table . *CCSID Name, *SAME, *CCSID, *DFT Library . . . . . . . . . . Name, *LIBL, *CURLIB Firewall . . . . . . . . . . . . *YES *YES, *NO, *SAME Change Directory Entry User ID/Address . . . . : USER1 AS1 Type changes, press Enter. Description . . . . . . Pop user System name/Group . . . AS1 F4 for list User profile . . . . . USER1 F4 for list Network user ID . . . . USER1 AS1 More... 132 AS/400 TCP/IP DNS and DHCP Support Figure 122. Mail Service Level = System Message Storage - Preferred Address = SMTP Name Press F19 to configure the SMTP name for the user. Figure 123. User’s SMTP Name 4. Start the POP3 Server and Mail Server Framework: STRTCPSVR SERVER(*POP) STRMSF Change Directory Entry User ID/Address . . . . : USER1 AS1 Type changes, press Enter. Mail service level . . 2 1=User index 2=System message store 4=Lotus Domino 9=Other mail service For choice 9=Other mail service: Field name . . . . F4 for list Preferred address . . . 3 1=User ID/Address 2=O/R name 3=SMTP name 9=Other preferred address Address type . . . . F4 for list For choice 9=Other preferred address: Field name . . . . F4 for list More... Change Name for SMTP System: AS1 User ID/Address . . . . . : USER1 AS1 Type choices, press Enter. SMTP user ID . . . . . . user1 SMTP domain . . . . . . . as1.private.mycompany.com SMTP route . . . . . . . Split DNS: Hiding Your Internal DNS Behind a Firewall 133 6.2.3 Firewall Installation and Configuration Table 1 on page 133 provides a summary of the values used during the firewall installation in our test environment. At the end of the installation, a summary of the information that you provided is shown in the Complete the Firewall Installation page (Figure 124). Review the information; then click the Install button to finish. Table 1. 
Firewall Installation Worksheet (the value at the end of each item is the one used in our test environment):
• Integrated PC Server - if you have more than one Integrated PC Server, you need to know which one is the one where you want to install the firewall (for example, CC01). You can use the WRKHDWRSC command to find this information. Value: CC07
• Firewall Name - create a new unique name for your firewall. This name is also used to create a network server description object (for example, FRW01). Value: firewall
• Type of LAN - Ethernet, 4 Mbps token-ring, or 16 Mbps token-ring. Port 1: 16M, TRN; Port 2: 16M, TRN
• Adapter Address - create a new unique address for each port. This address must not already be used on your LAN (for example, 400000000000 or 020000000000). Port 1: 400000000001; Port 2: 400000000002
• Port IP address * (for example, 10.1.2.3). Port 1: 10.5.69.208; Port 2: 8.9.10.11
• Port Subnet Mask * (for example, 255.255.255.0). Port 1: 255.255.255.192; Port 2: 255.255.255.0
• IP address of your router * (for example, 10.2.3.1). Value: 8.9.10.1
* If you are connecting to the Internet, you may need to consult with your Internet service provider for this value.

Figure 124. Firewall Installation Summary Page

Table 2 provides a summary of the values used during the firewall configuration in our test environment.
Table 2. Configuration Worksheet (the value at the end of each item is the one used in our test environment):
• Secure Mail Server Name - if you have a secure mail server, enter the name here. For example, if the mail server's host name is mailsvr and it is part of the domain mynetwork.mycompany.com, then enter: mailsvr.mynetwork.mycompany.com. Value: as1.private.mycompany.com
• Secure Port - if your Integrated PC Server has two ports, you need to know which one is attached to your secure network. Value: port 1
• Non-Secure Domain Name * - this is the domain that is outside of the firewall and accessible by outsiders. If your secure domain name is mynetwork.mycompany.com, you probably should name your non-secure domain mycompany.com. Value: mycompany.com
• Non-Secure Domain Name Server IP Addresses * (for example, 208.222.150.7). Value: 7.10.10.240
• Non-Secure Hosts * - list the names and IP addresses of up to four non-secure hosts. These are systems that are placed outside of the firewall. For example, you may want to place a WWW server machine outside of the firewall. Value: WWW - 8.9.10.2
• Proxy Server - decide which services you want to configure. Value: HTTP, HTTPS
• SOCKS Server - decide which services you want to configure. Value: HTTP, HTTPS
* If you are connecting to the Internet, you may need to consult with your Internet service provider for this value.

At the end of the configuration, a summary of the information that you provided is shown in the Review Configuration page (Figure 125, Figure 126, and Figure 127 on page 136). Review the information; then click on OK to finish.
Figure 125. Firewall Review Configuration (1 of 3)
Figure 126. Firewall Review Configuration (2 of 3)
Figure 127. Firewall Review Configuration (3 of 3)

After you install and configure the firewall, the network server description that contains the firewall configuration points to the name server configured in the AS/400 (using the CHGTCPDMN command). This is the internal DNS server. The firewall as a TCP/IP host belongs to your internal network (domain private.mycompany.com). Figure 128 shows the internal and external name servers configured in the firewall. The internal DNS server IP address matches the name server Internet address in the AS/400 system where the firewall is installed.
The external DNS server is usually the ISP DNS server IP address specified during the firewall configuration.

Figure 128. Firewall DNS Server Configuration
Display Network Server Desc AS1 02/09/98 09:05:29 Network server description . . . . : FIREWALL Option . . . . . . . . . . . . . . : *TCPIP TCP/IP local host name . . . . . . : *NWSD TCP/IP local domain name . . . . . : *SYS TCP/IP name server system . . . . : *SYS Change TCP/IP Domain (CHGTCPDMN) Type choices, press Enter. Host name . . . . . . . . . . . 'as1' Domain name . . . . . . . . . . 'private.mycompany.com' Host name search priority . . . *REMOTE *REMOTE, *LOCAL, *SAME Internet address . . . . . . . '10.5.69.211' Firewall configuration Network Server Description AS/400 TCP/IP Configuration ISP DNS 10.5.69.211 7.10.10.240

When the proxy server in the firewall receives a URL from a browser, it queries the internal DNS server to resolve the name. Usually, it is an Internet host not known by the internal name server. The internal DNS server is configured to forward the queries that it cannot resolve to the firewall DNS server. At that point, the firewall DNS queries the ISP DNS. When inbound mail for users in the mycompany.com domain reaches the firewall mail relay, the resolver queries the internal DNS server, on behalf of SENDMAIL (the mail relay program in the firewall), to resolve the IP address of the secure mail server specified in the firewall configuration.

6.2.3.1 Firewall DNS Filters
The firewall basic configuration adds filters to prevent direct queries and responses to and from the internal DNS and the Internet DNS. All queries and responses must go through the DNS in the firewall (routing is local). Figure 129 shows the DNS filters created by the basic configuration in the firewall.

Figure 129. Firewall DNS Filters
DNS queries and responses are most often contained within UDP packets. Zone transfers are over TCP. Notice that the filters allow both protocols.

6.2.4 Updating the Firewall Configuration to Use the Internal DNS
If you already have IBM Firewall for AS/400 configured to work with no internal DNS, you can now change the configuration to take advantage of the V4R2 DNS support. If this is the case, you probably have configured the firewall as explained in Chapter 4 of the ITSO redbook, AS/400 Internet Security: IBM Firewall for AS/400, SG24-2162. You need to change:
• The TCP/IP name server parameter in the firewall network server description to point to the internal DNS. Use the following steps:
1. Verify the domain name server Internet address configured in the AS/400 system where the firewall Integrated PC Server is installed. See Figure 119 on page 130.
2. End the firewall application: ENDNWSAPP NWSAPP(*FIREWALL) NWS(FIREWALL)
###############################################################
### Both-side settings
###############################################################
permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 udp eq 53 eq 53 both local both f=y l=n t=0 # Permit servers to query & reply to each other.
permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 udp eq 53 ge 1024 both local both f=y l=n t=0 # Permit nameserver to reply to clients.
permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 udp ge 1024 eq 53 both local both f=y l=n t=0 # Permit clients to query nameserver.
############################################################### ### Non-Secure side settings ############################################################### permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp eq 53 eq 53 non-secure local both f=y l=n t=0 # Permit external & firewall dns to query & reply to each other. permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp/ack eq 53 eq 53 non-secure local both f=y l=n t=0 # Permit reply. permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp ge 1024 eq 53 non-secure local inbound f=y l=n t=0 # Permit external client queries to firewall dns. permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp/ack eq 53 ge 1024 non-secure local outbound f=y l=n t=0 # Permit reply. ############################################################### ### Secure side settings ############################################################### permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp eq 53 eq 53 secure local inbound f=y l=n t=0 # Permit internal dns to query firewall dns. permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp/ack eq 53 eq 53 secure local outbound f=y l=n t=0 # Permit reply. permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp ge 1024 eq 53 secure local both f=y l=n t=0 # Permit internal client queries to firewall dns. permit 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 tcp/ack eq 53 ge 1024 secure local both f=y l=n t=0 # Permit reply. Split DNS: Hiding Your Internal DNS Behind a Firewall 139 3. Vary off the firewall network server description: VRYCFG CFGOBJ(FIREWALL) CFGTYPE(*NWS) STATUS(*OFF) 4. Change the firewall network server description to reset the TCP/IP name server system parameter to *SYS: CHGNWSD NWSD(FIREWALL) TCPNAMSVR(*SYS) 5. Vary on the firewall network server description: VRYCFG CFGOBJ(FIREWALL) CFGTYPE(*NWS) STATUS(*ON This updates the firewall configuration to take the new value for the name server. 6. Start the firewall application: STRNWSAPP NWSAPP(*FIREWALL) NWS(FIREWALL) • The firewall DNS configuration. Before V4R2, you did not have a DNS server in the secure network that the mail relay function in the firewall could query to find the secure mail server to deliver inbound mail. Therefore, you had to configure the secure mail server in the firewall DNS so that it could resolve the IP address of the secure mail server. To do that, you configured the firewall DNS using the Advanced Domain Name Server settings. Now (V4R2) that you have an internal DNS server, you can delete those changes to use the internal DNS server to locate the secure mail server. Complete the following steps: 1. Go to firewall Configuration. 2. Click on DNS/Mail. 3. Verify the values for the Secure Domain Name Server and Secure Mail Server. Click on OK and click on Done to quit. This removes the changes that you made using the Advanced Domain Name Server configuration option of the firewall. 4. Go to firewall Administration. 5. At the Administration menu, click on Status. 6. Restart the DNS and Mail firewall functions shown in Figure 130 and click on OK. 140 AS/400 TCP/IP DNS and DHCP Support Figure 130. Restarting DNS and MAil Functions in the Firewall 6.2.5 Configuring Forwarders in the Internal DNS If you designate the firewall name server in your internal DNS as forwarders, all off-site queries are sent to the forwarders. The DNS in the firewall builds a rich cache of information. For a given query in a remote domain, there is a probability that the firewall DNS can answer the query from its cache. 
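In the internal name server's BOOT file, the result of the steps that follow is essentially one forwarders entry pointing at the firewall's secure-port address, plus a directive that keeps the server from contacting anything but its forwarders for off-site queries. A sketch in BIND-style boot file terms (the Operations Navigator configuration maintains the actual file; the exact spelling of the restriction directive, for example slave versus options forward-only, depends on the release):

   forwarders  10.5.69.208
   slave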
One advantage of using only forwarders for off-site queries is having the large cache of the forwarder server available to all the systems using it. To configure the forwarders directive to send unresolved queries to the firewall DNS, use the following steps: 1. Go to the DNS configuration for as1.private.mycompany.com through Operations Navigator. 2. Right-click on DNS Server - as1.private.mycompany.com and select Properties. 3. Click on the Forwarders tab. 4. Click on Add to add the IP address of the firewall secure port shown in Figure 131. Split DNS: Hiding Your Internal DNS Behind a Firewall 141 Figure 131. Adding the Firewall Secure Port IP Address to the Forwarders List 5. Click on Contact only forwarders for off-site queries. This field specifies whether you want to use the DNS server as a slave server to the forwarder servers. This means that, if the DNS server cannot respond to a query for an address based on its authoritative data or its cache, you want the DNS server to forward queries based only on your list of forwarder servers. The DNS server does not forward queries to other domain servers or root servers. The DNS server forwards queries to only those in the Forwarder IP address list shown in Figure 131. 6. Click on OK and close the DNS server configuration. For completeness, we include the configuration of the DNS server running in AS1 during our tests. Figure 132. DNS Server Configuration - as1.private.mycompany.com 142 AS/400 TCP/IP DNS and DHCP Support Figure 133. Mail Exchanger Configuration 6.2.6 Client Configuration The clients used in this scenario must have the internal DNS server specified in their DNS server configuration for name resolution. Figure 134 shows the DNS configuration for a Windows 95 client (PC1 in our scenario). Figure 134. DNS Server Configuration in Windows 95 Split DNS: Hiding Your Internal DNS Behind a Firewall 143 The browser proxy and SOCKS configuration must point to the firewall secure port as shown in Figure 135. Figure 135. Netscape Browser Proxy and SOCKS Configuration The POP client must point to the secure SMTP mail server for outgoing mail and POP3 server for incoming mail. Figure 136 and Figure 137 show the Netscape browser mail preferences used in our scenario. Figure 136. POP3 Client Mail Servers Configuration Note: The POP3 User Name must match the user ID specified in Figure 121 on page 131. 144 AS/400 TCP/IP DNS and DHCP Support Figure 137. POP3 Client Identity Configuration 6.3 Sharing a LAN Adapter Between the AS/400 and Integrated PC Server The Integrated PC Server (IPCS) requires two LAN connections for firewall functions. One LAN adapter is connected to the internal secure network and the other to the unsecure network (for example, the Internet). Although we recommend that the AS/400 system on which the firewall is installed have a LAN adapter of its own for connection to the internal secure network, this is not possible on all AS/400 models. Fortunately, the Integrated PC Server (IPCS) provides the ability to share its LAN adapters with the AS/400 system on which it is installed. Only the LAN adapter connected to the internal (secure) network should be shared. The LAN adapter connected to the unsecure network should not be shared because it can bypass firewall functions. In this section, we explain how to implement Scenario 1 in this chapter when the AS/400 system and the Integrated PC Server must share the same LAN adapter. 
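As a quick reference for the sections that follow, these are the addresses involved in the shared-adapter setup (the letters are the labels used in Figure 138 and the configuration displays later in this section):

   AS/400 *INTERNAL port (F) . . . . . . . . . .  192.168.7.1
   Firewall *INTERNAL port (E) . . . . . . . . .  192.168.7.2
   AS/400 interface on the shared LAN (G) . . . . 10.5.69.211
   Firewall secure port on the shared LAN (B) . . 10.5.69.208

All traffic between the DNS, SMTP, and mail relay functions on the two hosts uses the pair of *INTERNAL addresses; the two 10.5.69 addresses serve the rest of the secure network.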
For complete configuration information of the AS/400 system and firewall in this situation, refer to the AS/400 firewall home page (http://www.as400.ibm.com/tstudio/firewall/fwindex.htm —>Resources —>Tech Tips) or the redbook AS/400 Internet Security: IBM Firewall for AS/400 , SG24-2162. Communication between the firewall application running on the Integrated PC Server and applications running on the AS/400 system that houses the Integrated PC Server, can only flow between the *INTERNAL ports. In other words, both hosts (the AS/400 and the Integrated PC Server) cannot talk to each other through the IP interfaces configured over the shared LAN adapter. Note Split DNS: Hiding Your Internal DNS Behind a Firewall 145 When configuring the AS/400 system and the firewall in this situation, you must keep in mind that all communication between both hosts must flow through the *INTERNAL ports. Figure 138 shows that the AS/400 system, which houses the Integrated PC Server where the firewall is installed, and the Integrated PC Server share the same LAN adapter. The AS/400 interface 10.5.69.211 (labeled G in Figure 138) and the Integrated PC Server secure port IP interface 10.5.69.208 (labeled B in Figure 138) are configured over the same LAN adapter, which is the secure port of the firewall. Figure 138. AS1 and Firewall Sharing LAN Adapter 6.3.1 AS/400 System TCP/IP Configuration The following sections summarize the TCP/IP configuration in the AS/400 system that houses the firewall Integrated PC Server. 6.3.1.1 TCP/IP Interface Configuration Configure the AS/400 system IP interface for communication with the internal or secure network on the same line description as the one used by the firewall secure port. This configuration shows how both hosts share the same LAN adapter. Figure 139 shows the results of CFGTCP option 1, Work with TCP/IP interfaces. Router ISP DNS Router internet ms.com mycompany.com ms.com DSN WWW WWW private.mycompany.com H G B C A D Z Y V X T S W E F Router I J 10.5.62.0 .200 .1 10.5.69.0 .1 .211 192.168.7.0 .1 .2 .208 .11 .2 8.9.10.0 .1 DNS Server IPCS Firewall as1.private .mycompany.com 146 AS/400 TCP/IP DNS and DHCP Support Figure 139. AS/400 External IP Interface Configured Over FIREWALL01 Line Description 6.3.1.2 AS/400 System Host and Secure Domain Names The internal DNS server runs on the AS/400 system in this scenario. When the firewall resolver queries the internal DNS (for example, to locate the MX record for the secure mail server), it should use the AS/400 system *INTERNAL port IP address, 192.168.7.1 in this scenario. The AS/400 resolver can also use the *INTERNAL port IP address to query the internal DNS server. Configure the AS/400 system *INTERNAL port IP address in the Internet address field of the Change TCP/IP Domain (CHGTCPDMN) command. The firewall installation program uses this value by default as the internal DNS server IP address when it creates the firewall network server description (NWSD). Configure the Host name search priority field in the CHGTCPDMN command as *LOCAL. Later, you will configure a TCP/IP host table enrty on the AS/400 system with the firewall name and *INTERNAL port IP address. Search priority *LOCAL causes SMTP to find this host table entry. Figure 140 shows the configuration values in the CHGTCPDMN command (or CFGTCP option 12). Figure 140. 
Internal DNS IP address is AS/400 System’s *INTERNAL Port - Search Priority *LOCAL 6.3.1.3 AS/400 System TCP/IP Host Table Entries For the AS/400 system to resolve the mail router name (firewall.private.mycompany.com) to the firewall *INTERNAL port IP address, you must configure an entry for the firewall on the AS/400 TCP/IP host table. Figure 141 shows the TCP/IP host table configuration (CFGTCP option 10). Work with TCP/IP Interfaces System: AS1 Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 9=Start 10=End Internet Subnet Line Line Opt Address Mask Description Type 10.5.69.211 255.255.255.0 FIREWALL01 *TRLAN 192.168.7.1 255.255.255.0 FIREWALL00 *TRLAN Change TCP/IP Domain (CHGTCPDMN) Type choices, press Enter. Host name . . . . . . . . . . . as1 Domain name . . . . . . . . . . private.mycompany.com Host name search priority . . . *LOCAL *REMOTE, *LOCAL, *SAME Internet address . . . . . . . 192.168.7.1 Split DNS: Hiding Your Internal DNS Behind a Firewall 147 Figure 141. Firewall Configuration on AS/400 TCP/IP Host Table 6.3.1.4 AS/400 System SMTP Attributes Configuration The SMTP attributes configuration is the same as in the situation where the LAN adapter is not shared by the AS/400 system and the firewall Integrated PC Server. Figure 120 on page 131 shows the SMTP attributes configuration on the AS/400 system. 6.3.2 Firewall Configuration The procedure to install and configure the firewall is the same as the one described in Section 6.2.3, “Firewall Installation and Configuration” on page 133. The only difference is that, when the firewall and the AS/400 system share the secure port’s LAN adapter, the Secure Domain Name Server in the firewall configuration must be the AS/400 *INTERNAL port IP address. The installation program uses the value specified in the domain name server configuration of the AS/400 system, as we explained in Section 6.3.1.2, “AS/400 System Host and Secure Domain Names” on page 146. Figure 142 shows the firewall DNS/Mail settings in this environment. Work with TCP/IP Host Table Entries System: AS1 Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 7=Rename Internet Host Opt Address Name 192.168.7.2 FIREWALL FIREWALL.PRIVATE.MYCOMPANY.COM 148 AS/400 TCP/IP DNS and DHCP Support Figure 142. Firewall DNS/Mail Settings - Secure DNS Server is AS/400 *INTERNAL Port IP Address 6.3.3 Internal DNS Server Configuration The DNS server configuration in this environment must include: • A forwarder directive pointing to the firewall *INTERNAL port IP address (E in Figure 138 on page 145). Remember that the DNS server application running on the same AS/400 system where the firewall is installed and the firewall Integrated PC Server can only communicate through the *INTERNAL ports. See Figure 143. Figure 143. Adding the Firewall *INTERNAL Port IP Address to the Forwarders List • Two A (address) records for the AS/400 system. One A record has the IP address of the AS/400 system external interface configured over the shared Split DNS: Hiding Your Internal DNS Behind a Firewall 149 LAN (G in Figure 138 on page 145), for communication with hosts in the secure network. The other A record has the IP address of the AS/400 system *INTERNAL port (F in Figure 138 on page 145) for communication with the firewall. See Figure 144. Figure 144. Configuring AS1 External and *INTERNAL Ports IP Addresses • Two A (address) records for the firewall. 
One A record has the IP address of the firewall secure port (B in Figure 138 on page 145) for communication with hosts in the secure network. The other A record has the IP address of the firewall *INTERNAL port (E in Figure 138 on page 145) for communication with the AS/400 system. See Figure 145. Figure 145. Configuring Firewall External and *INTERNAL Ports IP Addresses 150 AS/400 TCP/IP DNS and DHCP Support In this environment, the DNS server running on as1.private.mycompany.com is also primary for the reverse mapping 7.168.192.in-addr.arpa. domain. Figure 146. Internal DNS Server Configuration in AS1 There must be an MX record for the secure mail server configured in the firewall. Figure 147 shows the mail exchanger configuration. When a host in the secure network (IP address 10.5.0.0) queries the internal DNS server for the firewall’s IP address, the query comes over the external IP interface, and the DNS server returns the closer IP address to that host, 10.5.69.208. When the firewall queries the internal DNS server for AS1’s IP address, the query comes through the *INTERNAL port and the DNS returns the IP address of the AS/400 *INTERNAL port. Tip Split DNS: Hiding Your Internal DNS Behind a Firewall 151 Figure 147. Mail Exchanger Configuration - Secure Mail Server Figure 148 shows the content of the forward mapping file for the private.mycompany.com domain. Figure 148. Content of private.mycompany.com.DB file Figure 148 shows the content of the boot file for the AS1 DNS server. 152 AS/400 TCP/IP DNS and DHCP Support Figure 149. Boot File in AS1 DNS Server 153 6.4 Scenario 2: Multiple Mail Servers Behind the Firewall In this scenario, we are building on what we discussed in “Scenario 1: Configuring Your DNS to Forward Queries to a Firewall” on page 125. The private network now has three mail servers: ASM, AS1, and AS2, which is also the secure mail server. All inbound mail from the Internet is relayed by the firewall to the secure mail server (AS2). The forwarding function in AS2 forwards the mail for the users to the corresponding internal mail server (AS1, ASM, or delivers it locally for AS2 users). For an overview of the mail concepts that you need as background to this scenario, refer to Appendix A.1, “Basic Mail Configuration” on page 431). Internal mail for internal users is delivered to the corresponding internal mail server. Outbound mail sent from the internal users to Internet users is relayed by the firewall to the corresponding Internet mail server. Figure 150 provides an overview of how outbound mail is forwarded from the internal mail servers to the firewall configured as mail router in each system in the secure network. The figure shows how inbound mail received by the firewall mail relay function is passed to the secure mail server (AS2) and forwarded to the internal mail servers based on the mail recipient. Figure 150. Multiple Internal Mail Servers Behind the Internet Firewall 6.4.1 Scenario Objectives In this scenario, our objectives are to: 1. Show how to configure the internal DNS to route internal mail to the appropriate mail exchanger in the secure network (AS1, AS2, or ASM). Firewall mycompany.com Router WWW Public Server Internet Secure Mail Server Firewall Mail Relay Internal Mail Server Internal Mail Server mail router mail router DNS Server (secure network) mycompany.com (public network) ASM AS1 AS2 inbound mai © Copyright IBM Corp. 1998 154 2. 
Show how to implement the forwarding function in the secure mail server (AS2) so that inbound mail from the Internet is delivered to the appropriate internal mail server based on the recipient’s user ID. 6.4.2 Scenario Network Configuration Figure 151 shows the testing environment that we used for this scenario. Figure 151. Scenario 2 - Network Topology The main characteristics of this scenario are: 1. The firewall relays all inbound mail from the Internet destined to mycompany.com users to the secure mail server as2.mycompany.com. 2. For inbound mail, the firewall changes the recipient’s domain user@mycompany.com to user@as2.mycompany.com. For outbound mail, the firewall changes the originator’s name from user@asx.mycompany.com to user@mycompany.com. Note: ASx represents the originator’s mail server’s system name. 3. The secure domain name is the same as the public domain name (mycompany.com). 4. All internal mail servers route mail for Internet domains to the firewall. 6.4.3 Scenario Advantages The advantages of this scenario are that: • All mail servers in the internal network are protected by a single firewall. • Internet inbound mail for all internal users (regardless of the internal mail server they are on) is addressed to the user@public_domain; the firewall forwards all inbound mail to the secure mail server. mycompany.com ISP DNS internet Router ms.com ms.com DSN WWW Router 10.5.62.0 .200 .1 .1 Router G B C D E F 192.168.7.0 .3 .4 .208 .11 8.9.10.0 OS/400 TCP/IP IPCS Firewall Network configuratiom Mail Server AS1 WWW AS2 H Mail Server DNS A 10.5.69.0 .201 .222 .212 .211 PC1 PC2 I .2 .1 155 • Using the mail forwarding function in the secure mail server, we forward the mail to the user at the appropriate internal mail server. 6.4.4 Scenario Disadvantages The main disadvantages of this scenario are: • To forward mail to users at internal mail servers using the mail forwarding function, you must configure a system distribution directory entry for every user in the secure mail server. • The firewall does not hide all of the internal network information (mail server host name and domain name) for users in the CC: list. This causes problems if the recipient of the mail in the Internet uses the Reply All function to respond. We show a circumvention to this problem in Section 6.5.5, “Considerations for Exchanging Mail with Internet Users” on page 167. 6.5 Task Summary To implement this scenario, you need to perform the following tasks: 1. Verify the AS/400 TCP/IP configuration. 2. Verify the AS/400 mail configuration. Mail for domains other than mycompany.com must be routed to the firewall mail relay. 3. Verify the firewall configuration. 4. Configure the internal DNS server to forward mail to the appropriate internal mail server based on the recipient’s domain. 5. Configure the internal DNS to forward name resolution for hosts outside mycompany.com to the firewall DNS. 6.5.1 Verify the AS/400 TCP/IP Configuration In this section, we merely summarize the TCP/IP configuration used in our test environment. Table 3 summarizes the TCP/IP configuration values used in our test network. Table 3. Scenario 2 - TCP/IP Configuration Summary Note: The letters in bold between brackets refer to the ports shown in Figure 151 on page 154. TCP/IP Configuration AS1 AS2 ASM IP address (CFGTCP op. 1) 10.5.69.222 (A) 10.5.69.211 (G) 10.5.69.212 (H) Host Name (CFGTCP op. 12) AS1 AS2 ASM Domain Name (CFGTCP op. 12) mycompany.com mycompany.com mycompany.com Host name search priority (CFGTCP op. 
12) *REMOTE *REMOTE *REMOTE DNS Internet address (CFGTCP op. 12) 10.5.69.222 (A) 10.5.69.222 (A) 10.5.69.222 (A) © Copyright IBM Corp. 1998 156 6.5.2 Verify the AS/400 Mail Configuration This section provides a summary of the mail configuration required in each internal mail server. Refer to Appendix A.1, “Basic Mail Configuration” on page 431 for background information. Table 4 shows a summary of the mail configuration used in our test network. Table 4. Scenario 2 - Mail Configuration Summary 6.5.2.1 Implementing Mail Forwarding in the Secure Mail Server As explained in Section 6.3.2, “Scenario Network Configuration” on page 154, the firewall relays all inbound mail from the Internet destined to mycompany.com’s users to the secure mail server as2.mycompany.com. For inbound mail, the firewall changes the recipient’s domain user@mycompany.com to user@as2.mycompany.com. For outbound mail, the firewall changes the originator’s name from user@asx.mycompany.com to user@mycompany.com, where asx is the internal mail server for the user. The secure mail server (AS2 in our scenario) acts as a mail hub receiving all inbound mail from the firewall. We need to implement a mail forwarding function on AS2 to forward mail to the corresponding internal mail server based on the recipient’s User ID. Refer to Appendix A.2, “Mail Forwarding” on page 433 for a general description of the mail forwarding function. Figure 152 shows the system distribution directory entries on each mail server. There must be a system distribution directory entry for every user in the secure mail server (AS2). The entries for non-local users (users on ASM and AS1) must include the user-defined field forwarding pointing to the corresponding local user and internal mail server. Mail Configuration AS1 AS2 ASM Mail router (CHGSMTPA) firewall.mycompany.com firewall.mycompany.com firewall.mycompany.com Firewall (CHGSMTPA) *YES *YES *YES UserID/Address (ADDDIRE) AS1USR/AS1 AS2USR/AS2 ASMUSR/ASM System name / Group (ADDDIRE) AS1 AS2 ASM User profile (ADDDIRE) AS1USR AS2USR ASMUSR Mail service level (WRKDIRE) 2 - System message store 2 - System message store 2 - System message store Preferred address (WRKDIRE) 3 - SMTP name 3 - SMTP name 3 - SMTP name SMTPAUSRID (WRKDIRE + F19) as1usr as2usr asmusr SMTPDMN (WRKDIRE + F19) as1.mycompany.com as2.mycompany.com asm.mycompany.com 157 Figure 152. Mail Forwarding Using User-Defined Fields in the System Distribution Directory To summarize, to implement the mail forwarding function in the mail hub (secure mail server in our scenario), you must: 1. Add two user-defined fields to the system distribution directory on AS2: 1. Create two user-defined fields in the system distribution directory using the Change System Directory Attributes (CHGSYSDIRA) command. 2. Enter the CHGSYSDIRA command and press F4. 3. Page down until the user-defined field parameters are displayed. 4. Fill in the information as shown in Figure 153. Internal Mail Server2 Internal Mail Server3 UserID System SMTPname SMTPdomain Forwarding AS2USR AS2 as2usr as2.mycompany.com AS1USR INTERNET as1usr as2.mycompany.com as1usr@as1.mycompany.com ASMUSR INTERNET asmusr as2.mycompany.com asmusr@asm.mycompany.com UserID System SMTPname SMTPdomain Forwarding asmusr ASM asmusr asm.mycompany.com Internal Mail Server1 DNS Firewall Mail Relay SDD SDD AS2 ASM AS1 UserID System SMTPname SMTPdomain Forwarding AS1USR AS1 as1usr as1.mycompany.com SDD asmusr@mycompany.com asmusr@as2.mycompany.com © Copyright IBM Corp. 1998 158 Figure 153. 
Adding User-Defined Fields to the System Distribution Directory 2. Add a directory entry for each user in the network on AS2 to forward mail to the user’s mail server: 1. From an AS/400 command entry display, enter the command: WRKDIRE Press Enter. 2. Select option 1, Add. 3. Enter the following information. Notice that ASMUSR and INTERNET are values that we chose arbitrarily; they do not have to match any other configuration value. 4. Page down until the display in Figure 154 is shown. Fill in the information as indicated in Figure 154. Change System Dir Attributes (CHGSYSDIRA) Type choices, press Enter. User-defined fields: Field name . . . . . . . . . . FORWARDING Character value, *SAME Product ID . . . . . . . . . . *NONE Character value, *NONE Function . . . . . . . . . . . > *ADD *ADD, *RMV, *CHG, *KEEP Field type . . . . . . . . . . *ADDRESS *DATA, *MSFSRVLVL, *ADDRESS Maximum field length . . . . . 256 1-512 Field name . . . . . . . . . . FWDSRVLVL Character value Product ID . . . . . . . . . . *NONE Character value, *NONE Function . . . . . . . . . . . > *ADD *ADD, *RMV, *CHG, *KEEP Field type . . . . . . . . . . *MSFSRVLVL *DATA, *MSFSRVLVL, *ADDRESS Maximum field length . . . . . 001 1-512 More... F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display Add Directory Entry Type choices, press Enter. User ID/Address . . . . ASMUSR AS2 Description . . . . . . Forward Mail to asmusr@asm.mycompany.com System name/Group . . . INTERNET F4 for list User profile . . . . . F4 for list Network user ID . . . . 159 Figure 154. Adding Directory Entry to Forward SMTP/MIME Mail Note: Address type MIME is equivalent to ATMIME. If the ATMIME option does not show on your system, select MIME. 5. Press F19 to enter the SMTP user ID and SMTP domain in the incoming mail to the mail hub. This must match the user ID and domain in the piece of mail relayed by the firewall to the secure mail server. Figure 155. Specify SMTP User ID and SMTP Domain as Received by the Secure Mail Server Press Enter. 6. Press F20 to specify the forwarding information shown in Figure 156. Add Directory Entry Type choices, press Enter. Mail service level . . 9 1=User index 2=System message store 4=Lotus Domino 9=Other mail service For choice 9=Other mail service: Field name . . . . FWDSRVLVL F4 for list Preferred address . . . 9 1=User ID/Address 2=O/R name 3=SMTP name 9=Other preferred address Address type . . . . ATMIME F4 for list For choice 9=Other preferred address: Field name . . . . FORWARDING F4 for list Specify User-Defined Fields Type choices, press Enter. SMTPAUSRID SMTP asmusr SMTPDMN SMTP as2.mycompany.com © Copyright IBM Corp. 1998 160 Figure 156. Specifying Mail Forwarding Information Press Enter to add the directory entry to the system distribution directory. 6.5.3 Verify the Firewall Installation and Configuration For information about Firewall installation and configuration, refer to Section 6.2.3, “Firewall Installation and Configuration” on page 133. After you install and configure the firewall, the network server description that contains the firewall configuration will point to the internal DNS. The firewall as a TCP/IP host belongs to your internal network (domain mycompany.com). Figure 157 shows the internal and external name servers configured in the firewall. The internal DNS IP address matches the name server Internet address configured in the AS/400 system where the firewall is installed. 
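If you prefer to verify these settings from a 5250 session instead of from the figures, the two places to look are the network server description that holds the firewall configuration and the AS/400 TCP/IP domain information. A minimal sketch, assuming the network server description is named FIREWALL as in our scenario:

   DSPNWSD NWSD(FIREWALL)   Check that TCP/IP name server system shows *SYS, so the
                            firewall picks up the name server configured on the AS/400 system.
   CFGTCP                   Take option 12 (Change TCP/IP domain information) and check the
                            name server Internet address (10.5.69.222 in this scenario).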
Notice that in this scenario, the internal name server is running on AS1 and the secure mail server is on the AS/400 system where the firewall is installed (AS2). The external DNS is usually the ISP DNS IP address specified during the firewall configuration. Figure 157. Firewall DNS and Secure Mail Server Configuration Specify User-Defined Fields Type choices, press Enter. FORWARDING asmusr@asm.mycompany.com FWDSRVLVL D isplay N etwork S erver Desc A S2 02/09/98 09:05:29 N etwork server description . . . . : FIRE W ALL Option . . . . . . . . . . . . . . : *TC PIP TCP /IP local host nam e . . . . . . : *NW SD TCP /IP local dom ain nam e . . . . . : *S YS TCP/IP nam e server system . . . . : *SYS Change TCP /IP Dom ain (C HG TC PDM N) Type choices, press Enter. H ost nam e . . . . . . . . . . . 'as2' D om ain nam e . . . . . . . . . . 'm ycom pany.com ' H ost nam e search priority . . . *R EM O TE *RE M O TE , *LO CA L, *SA M E Internet address . . . . . . . '10.5.69.222' Firewall configuration Network Server Description AS/400 TCP/IP Configuration ISP DNS 7 1 0 1 0 240 161 6.5.4 Internal DNS Configuration In our scenario, the primary server for mycompany.com in the secure network runs on AS1. The main aspects of the primary DNS configuration are: 1. Configuring all hosts in mycompany.com for name to address resolution. 2. Configuring the forwarders directive to forward name to address resolutions for hosts outside the mycompany.com domain to the firewall DNS. 3. Configuring the mail exchangers for the internal network. 4. Configuring Hosts in the mycompany.com Domain To verify the DNS configuration in our test environment: 1. Use Operations Navigator to get to the DNS Configuration for DNS Server - As1.mycompany.com. 2. Double-click on Primary Domains. 3. Right-click mycompany.com; the configured hosts names and IP addresses in the forward resolution file are shown in Figure 158. Figure 158. Content of Primary DNS for mycompany.com in AS1 If the company’s internal and public domain names are the same (as in this scenario, the domain name is mycompany.com both internal and public), you must configure address records for the public hosts in front of the firewall in the internal DNS server for the internal name server to resolve the public hosts names. If you do not add A records for the public hosts in the internal DNS server configuration when an internal client queries, for example, WWW.mycompany.com, the query receives a negative response. The internal DNS server looks at its own data since it is authoritative for mycompany.com and, if it does not find the WWW host in its own database, it does not forward the query to the firewall but returns a negative response instead. Tip © Copyright IBM Corp. 1998 162 In a similar fashion, you can display the content of the reverse mapping files 62.5.10.in-addr.arpa. and 69.5.10.in-addr.arpa. 6.5.4.1 Configuring Forwarders Pointing to the Firewall As explained in Section 6.2.5, “Configuring Forwarders in the Internal DNS” on page 140, the forwarders directive directs off-site queries to the IP address specified. In our scenario, we want to forward queries for hosts outside the mycompany.com domain to the firewall. To add or verify your forwarders configuration, use the following steps: 1. Use Operations Navigator to get to the DNS Configuration for DNS Server - As1.mycompany.com. 2. Right-click on DNS Server-As1.mycompany.com and select Properties. 3. Click on the Forwarders tab. Enter the firewall’s secure port IP address and . 
verify that the box Contact only forwarders for off-site queries is checked as shown in Figure 159. Figure 159. Adding the Firewall Secure Port IP Address to the Forwarders List 6.5.4.2 Configuring the Secondary DNS Server For backup and workload balancing purposes, we now configure AS2 as a secondary DNS for the mycompany.com domain. To configure AS2 as the secondary DNS on AS2, access the DNS configuration through Operations Navigator and use the following steps: 1. In the DNS server Configuration window, right-click on Secondary Domains (Figure 160). 163 Figure 160. Configuring AS2 as Secondary DNS for mycompany.com 2. Select New Secondary Domain. 3. Specify the domain name and IP address of the primary name server (Figure 161). Figure 161. Primary Domain Name and Name Server IP Address Repeat steps 1 through 3 to configure AS2 as secondary server for the 69.5.10.in-addr.arpa and 62.5.10.in-add.arpa domains. 4. Right-click on DNS server-As2.mycompany.com and select Properties. 5. Click on the Forwarders tab. Enter the IP address of the firewall secure port: 10.5.69.208 and verify that the box Contact only forwarders for off-site queries is checked. Figure 162 shows the AS2 DNS server boot file for this scenario. © Copyright IBM Corp. 1998 164 Figure 162. AS2 Mycomapany.com DNS Boot File /QIBM/UserData/OS400/DNS/BOOT 6.5.4.3 Configuring the Mail Exchangers in the Internal Network Mail sent by users in the secure network to other users in the secure network is routed to the appropriate mail server by the internal DNS server. See Appendix A.3, “Processing Inbound Mail” on page 437 and Appendix A.4, “Processing Outbound Mail” on page 438 for background information on this topic. In this scenario, we have three mail servers in the secure network: as1.mycompany.com , as2.mycompany.com, and asm.mycompany.com. The internal DNS server must route mail destined, for example, for asmusr@asm.mycompany.com to the ASM mail server. To configure the mail exchangers in the As1.mycompany.com DNS server, use the following steps: 1. Use Operations Navigator to get to the DNS Configuration for DNS Server - As1.mycompany.com. 2. Click + next to Primary Domains. 3. Double-click mycompany.com. 4. Select the host as1.mycompany.com on the right window and right click on it. 5. Select Properties. 6. Select the Mail tab. 7. Click on Add. 8. Enter AS1 in the Host name field. Click OK. See Figure 163 on page 165. This adds an MX record for AS1.mycompany.com. 165 Figure 163. Adding an MX Record for as1.mycompany.com Repeat steps 4 through 8 to add MX records for asm.mycompany.com and as2.mycompany.com. Figure 164 shows the DNS boot file for this scenario. Notice the forwarders directive and the forward-only option. Figure 164. Mycomapany.com DNS Boot File /QIBM/UserData/OS400/DNS/BOOT on AS1 Figure 165 shows the mycompany.com.db file for this scenario. 166 AS/400 TCP/IP DNS and DHCP Support Figure 165. mycompany.com.DB in /QIBM/UserData/OS400/DNS on AS1 Figure 166 shows the partial content of the QUERYLOG file. Notice the MX queries followed by A queries. Figure 166. QIBM/UserData/OS400/DNS/QUERYLOG 167 6.5.5 Considerations for Exchanging Mail with Internet Users As explained in Section 6.4.2, “Scenario Network Configuration” on page 154, for outbound mail, the firewall replaces the sender’s internal domain in the From: field by the public domain. For example, if user as1usr@as1.mycompany.com sends mail to an Internet user, the domain in the From: field is changed by the firewall to as1usr@mycompany.com. 
However, the firewall does not change the domain for users in the CC: list. If as1usr copies as2usr@as2.mycompany.com, the user in the Internet receives as2usr’s address unchanged. The Internet users cannot use the Reply all function to respond because the domain as2.mycompany.com is a domain not known in the Internet. Figure 167 illustrates this problem. Figure 167. Sending Mail to External Users and CC: Internet Users Figure 168 shows a piece of mail as it is received by the Internet user (msakai@ms.com). Notice the address in the CC: field (as2usr@as2.mycompany.com). Firewall From: as1usr@as1.mycompany.com To: msakai@ms.com CC: as2usr@as2.mycompany.com From: msakai@ms.com To: as1usr@mycompany.com as2usr@as2.mycompany.com From: as1usr@mycompany.com To: msakai@ms.com CC: as2usr@as2.mycompany.com From: as1usr@as1.mycompany.com CC: 168 AS/400 TCP/IP DNS and DHCP Support Figure 168. Mail Received by the Internet User Figure 169 shows the mail generated by the Reply All function. Notice the as2usr address. Figure 169. Reply All Function Using Internal Mail Address 6.5.6 Solving the CC: Problem One possible work around for the problem explained in Section 6.5.5, “Considerations for Exchanging Mail with Internet Users” on page 167 is to send mail internally to user@public_domain. If the user in the CC: list is as2usr@mycompany.com, there is no need to alter the domain, and the Reply All function from the Internet back to the original network works with no problems. To implement this solution, use the following steps: 169 1. Add mycompany.com as a local host alias in each internal mail server’s host table (Figure 170). Figure 170. Configuring mycompany.com Local Host Alias 2. Change the host name search priority to *LOCAL (Figure 171). Figure 171. Changing the Host Name Search Priority to *LOCAL 3. On each internal mail server, each local user must have two system distribution directory entries with the following fields (Table 5): Table 5. System Distribution Directory Entries for Local Users in AS1 UserID/System SMTP Name SMTP Domain Forwarding AS1USR/AS1 as1usr as1.mycompany.com AS1USR/INTERNAL as1usr mycompany.com as1usr@as1.mycompany.com Add TCP/IP Host Table Entry (ADDTCPHTE) Type choices, press Enter. Internet address . . . . . . . . > '10.5.69.222' Host names: Name . . . . . . . . . . . . . mycompany.com + for more values Text 'description' . . . . . . . Alias for local host Change TCP/IP Domain (CHGTCPDMN) Type choices, press Enter. Host name . . . . . . . . . . . 'AS1' Domain name . . . . . . . . . . 'mycompany.com' Host name search priority . . . *LOCAL *REMOTE, *LOCAL, *SAME Internet address . . . . . . . '10.5.69.222' 170 AS/400 TCP/IP DNS and DHCP Support 4. On each internal mail server, each internal remote user must have a system distribution directory entry with the following fields (Table 6): Table 6. System Distribution Directory Entry for Remote Internal Users in AS1 5. At the mail hub (secure mail server), each user in the secure domain must have a system distribution directory entry with the following fields (Table 7): Table 7. System Distribution Directory Entry for All mycompany.comUsers in AS2 (Secure Mail) Table 8 summarizes the TCP/IP configuration for the internal mail servers in this scenario. Table 8. Solving the CC: Problem - TCP/IP Configuration Summary Table 9 shows mycompany.com as an alias for the local host in the internal mail servers. Table 9. 
mycompany.com Local Host Alias Figure 172 summarizes the system distribution directory configuration in each internal mail server. UserID/System SMTP Name SMTP Domain Forwarding AS2USR/INTERNAL as2usr mycompany.com as2usr@as2.mycompany.com UserID/System SMTP Name SMTP Domain Forwarding AS1USR/AS1 as1usr as2.mycompany.com as1usr@as1.mycompany.com TCP/IP Configuration AS1 AS2 ASM IP address (CFGTCP op. 1) 10.5.69.222 10.5.69.211 10.5.69.212 Host Name (CFGTCP op. 12) AS1 AS2 ASM Domain Name (CFGTCP op. 12) mycompany.com mycompany.com mycompany.com Host name search priority (CFGTCP op. 12) *LOCAL *LOCAL *LOCAL DNS Internet address (CFGTCP op. 12) 10.5.69.222 10.5.69.222 10.5.69.222 TCP/IP Configuration AS1 AS2 ASM IP interface address (ADDTCPHTE) 10.5.69.222 10.5.69.211 10.5.69.212 Host Name (ADDTCPHTE) mycompany.com mycompany.com mycompany.com 171 Figure 172. Solving the CC: Problem - System Distribution Directory Configuration 1. The firewall changes inbound mail from the Internet to asmusr@mycompany.com to asmusr@as2.mycompany.com. The SMTP server in AS2 decides that as2.mycompany.com is the local system and searches the local distribution directory for a user with the same SMTP name and SMTP domain name. The ASMUSR/AS2 directory entry is a match and the mail is forwarded to the user in the forwarding field (asmusr@asm.mycompany.com). 2. At asm.mycompany.com, there is a directory entry for a local user (ASMUSR/ASM) that matches the incoming SMTP name and SMTP domain with the forwarding field blank. The mail is delivered to that user in ASM. 3. When internal mail is sent from AS1 to asmusr@mycompany.com, the SMTP server in AS1 decides that mycompany.com is the local system per alias configuration. It searches the local system distribution directory for an SMTP name and SMTP domain match and it finds the ASMUSR/INTERNAL with the forwarding field to forward mail to asmusr@asm.mycompany.com. UserID/System SMTP name SMTP domain Fowardimg AS2USR/AS2 as2usr as2.mycomapny.com AS2USR/INTERNAL as2usr mycompany.com as2usr@as2.mycompany.com AS1USR/AS2 as1usr as2.mycompany.com as1usr@as1.mycompany.com AS1USR/INTERNAL as1usr mycompany.com as1usr@as1.mycompany.com ASMUSR/AS2 asmusr as2.mycompany.com asmusr@asm.mycompany.com ASMUSR/INTERNAL asmusr mycompany.com asmusr@asm.mycompany.com UserID/System SMTP name SMTP domain Fowardimg AS2USR/INTERNAL as2usr mycomapny.com as2usr@as2.mycompany.com AS1USR/AS1 as1usr as1.mycompany.com AS1USR/INTERNAL as1usr mycompany.com as1usr@as1.mycompany.com ASMUSR/INTERNAL asmusr mycompany.com asmusr@asm.mycompany.com UserID/System SMTP name SMTP domain Fowardimg AS2USR/INTERNAL as2usr mycomapny.com as2usr@as2.mycompany.com AS1USR/INTERNAL as1usr mycompany.com as1usr@as1.mycompany.com ASMUSR/ASM asmusr asm.mycompany.com ASMUSR/INTERNAL asmusr mycompany.com asmusr@asm.mycompany.com AS2-SDD AS1-SDD ASM-SDD 1 2 3 172 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 173 Chapter 7. Providing DNS Services on the Internet This chapter describes how you can configure a DNS server authoritative for multiple primary/secondary zones. We also explain how to configure the DNS server so it can forward queries directly to the Internet root name servers. 7.1 Scenario Overview In this scenario, we are configuring a DNS server that is authoritative for two unrelated domains and secondary to the firewall DNS that was configured in Section 6.2.3, “Firewall Installation and Configuration” on page 133. 
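To give a feel for where this chapter ends up, the configuration built in the following sections boils down to a boot file on the ISP name server with primary directives for the customers' domains, secondary directives for mycompany.com, and a cache directive for the Internet root servers. The sketch below is illustrative only; it uses names and addresses introduced later in the scenario, and the exact file names and layout that Operations Navigator generates may differ:

   ; Boot file on the ISP name server (illustrative)
   directory /QIBM/UserData/OS400/DNS
   cache     .                       CACHE
   primary   inc.com                 inc.com.DB
   primary   6.5.11.in-addr.arpa     6.5.11.in-addr.arpa.DB
   primary   msu.edu                 msu.edu.DB
   primary   6.5.12.in-addr.arpa     6.5.12.in-addr.arpa.DB
   primary   0.0.127.in-addr.arpa    0.0.127.in-addr.arpa.DB
   secondary mycompany.com           8.9.10.11  mycompany.com.DB
   secondary 10.9.8.in-addr.arpa     8.9.10.11  10.9.8.in-addr.arpa.DB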
In this scenario, the AS/400 system we are configuring is an Internet service provider (ISP) DNS server. It provides DNS server services for a fee to its customers. In this scenario, we assume the domain names and IP addresses are registered with the InterNIC. Figure 173 outlines four domains: isp.net, inc.com, msu.edu, and mycompany.com. The ISP DNS server ASISP, which is located in the domain isp.net, is configured to be authoritative for inc.com and msu.edu. The firewall DNS within mycompany.com is authoritative for the public domain mycompany.com discussed in Chapter 6, “Split DNS: Hiding Your Internal DNS Behind a Firewall” on page 125. ASISP is configured to be secondary (that is, backup) to the Firewall DNS running on the Integrated PC Server installed on AS1. The DNS server running on ASISP2 is configured as the secondary DNS server to the primary name server ASIPS to back up the primary domains inc.com, msu.edu, and their respective in-addr.arpa domains. Figure 173. Scenario Network Diagram Internet Firewall Public Web Server WWW AS1 ASISP2 ASISP isp.net msu.edu mycompany.com inc.com 174 AS/400 TCP/IP DNS and DHCP Support 7.1.1 Scenario Objectives In this scenario, our objectives are to: 1. Configure a primary DNS server to be authoritative over two customers’ domain name spaces that are unrelated to one another. This includes configuring the forward mapping file and reverse (in-addr.arpa) mapping file for each customer. 2. Configure the same DNS server to be secondary to the mycompany.com’s firewall DNS server that was configured in Chapter 6, “Split DNS: Hiding Your Internal DNS Behind a Firewall” on page 125. 3. Configure the DNS server’s root servers to be the Internet root name servers. 4. Discuss configuring the ISP’s secondary DNS server to back up the primary domain files residing on the primary DNS server. 5. Briefly discuss the client configuration. 7.1.2 Scenario Advantages Many companies are starting to provide Web sites to advertise or sell goods or services. If the Web site allows Internet access, its domain name and Internet address must be contained in a primary domain that some DNS server is authoritative for. Some companies prefer to hire an Internet Service Provider (ISP) to provide the DNS name services they need rather than provide required systems and skills in-house. This scenario is an example of how and ISP can configure an AS/400 system to provide Internet DNS services to the ISP’s customers. This scenario discusses how to configure the root servers to be the name servers authoritative for the top-level domains. These Internet root name servers are crucial to name and address resolution on the Internet. 7.1.3 Scenario Disadvantages This scenario configuration of Internet root name servers is not necessary when an DNS server is authoritative for domains that are internal (that is, the domain is private). Earlier chapters discuss DNS server configuration scenarios for internal networks. 7.1.4 Scenario Network Configuration This chapter’s scenario focuses on domains with registered InterNIC names and IP addresses. The hosts that are configured in the DNS server are all on the Internet including the DNS server. The network in Figure 174 shows the ISP DNS servers ASISP and ASISP2 connected to the 7.10.10.0 network with a subnet mask of 255.255.255.0. ASISP is the primary DNS server in this scenario. Figure 151 on page 154 shows this ISP DNS server with an IP address of "Y", which is 7.10.10.240. 
ASISP2 is configured to be a secondary name server to back up the primary domain files residing on ASISP. The domain inc.com is contained on the network 11.5.6.0. with a subnet mask of 255.255.255.0. The domain msu.edu is located on network 12.5.6.0 with a subnet mask of 255.255.255.0. ASISP is configured to be primary for both of these domains. Providing DNS Services on the Internet 175 In Section 6.2.3, “Firewall Installation and Configuration” on page 133, the firewall configuration lists the non-secure domain name as mycompany.com. This is the public domain name for the hosts residing on the 8.9.10.0 network. See Figure 117 on page 127. The DNS running on firewall.mycompany.com at IP address of 8.9.10.11 is authoritative for this public domain of mycompany.com. This chapter’s scenario shows how to configure the ASISP name server to be secondary to the firewall DNS. Figure 174. Detailed Network Diagram 7.2 Task Summary The tasks required to complete this scenario do not include the initial TCP configuration on the AS/400 ASISP or ASISP2. This scenario assumes that the TCP configuration on both AS/400 systems in the network is complete and TCP connectivity has been verified. The summary of tasks for this scenario are as follows: 1. Create the primary domain files for inc.com on the ASISP name server. 2. Create the primary domain files for msu.edu on the ASISP name server. 3. Configure the root servers to be the Internet root servers on the ASISP name server. 4. Configure the secondary domain files for mycompany.com on ASISP to back up the DNS server running on firewall.mycompany.com. Internet Firewall WWW AS1 ASISP2 ASISP isp.net msu.edu mycompany.com inc.com 8.9.10.0 .11 .2 c1 c3 c2 M1 M3 M2 10.5.69.192 .208 .21 .22 .23 7.10.10.0 .241 .240 12.5.6.0 .12 .11 .13 11.5.6.0 176 AS/400 TCP/IP DNS and DHCP Support 5. Configure the secondary domain files on the secondary name server ASISP2 to back up the inc.com primary domain files on ASISP and the msu.edu primary domain files on ASISP. 6. Configure the root servers to be the Internet root servers on the secondary name server ASISP2. 7. Configure the clients in msu.edu and inc.com to use ASISP or ASISP2 as their DNS server. 7.2.1 Planning the ASISP Name Server Configuration Before configuring ASISP DNS server, you must decide its zone of authority, secondary DNS server, and register domains with the InterNIC. Zone of Authority and DNS Configuration Planning The process of setting up a DNS server should start with the careful planning of the zones of authority that the name server is configured for. The DNS servers ASISP and ASISP2 are located in the domain isp.net. We made a decision in the planning phase that the name servers ASISP and ASISP2 are not authoritative for the domain that they are located in. These two DNS servers are only authoritative for customer’s domains. Another DNS server in the isp.net is configured to be authoritative for the ISP’s own domain space isp.net. Figure 175 shows part of the Internet DNS name space tree. The DNS servers ASISP and ASISP2 are physically located in the isp.net domain. The zones they are authoritative for are also shown in the figure: mycompany.com, msu.edu and inc.com. The root name servers that ASISP and ASISP2 are configured with are the Internet root name servers, which can be thought of as being located at the top level Internet domain name space. 
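In resource-record terms, "knowing who their child name servers are" means that each parent zone carries NS (delegation) records for its child zones. Once the registrations described later in this section are processed, the com and edu name servers hold entries conceptually like the following (illustrative only; these records live on the registry's servers, not on ASISP):

   ; held by the com name servers
   inc.com.    IN  NS  asisp.isp.net.
   inc.com.    IN  NS  asisp2.isp.net.
   ; held by the edu name servers
   msu.edu.    IN  NS  asisp.isp.net.
   msu.edu.    IN  NS  asisp2.isp.net.

A query for c1.inc.com is then walked down the tree: a root server refers the resolver to the com name servers, the com name servers refer it to asisp.isp.net (or asisp2.isp.net), and ASISP answers authoritatively.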
These Internet name servers know where to go to resolve queries near the top of the tree such as the com node, the edu node, and so on, and how to work down the respective nodes by knowing who their child name servers are. Figure 175. Part of the DNS Name Space Tree com gov edu . . . mycompany.com isp.net org net msu.edu inc.com Zones of Authority " " Providing DNS Services on the Internet 177 Planning for the Secondary Name Server The primary domain files for inc.com and msu.edu are located on ASISP. The planning phase also includes deciding how these files should be backed up. We have decided that ASISP2 AS/400 is a secondary DNS server to the ASISP primary domain name server. For the sake of simplicity in describing the scenario, we have placed both ASISP and ASISP2 on the same physical network. However, in real life, we recommend that a secondary name server should be placed in a physically separate location from the primary name server and on a different network if at all possible. The reason for this is to eliminate as many single points of failure as possible. InterNIC Registration The domains and IP addresses we are using in this scenario should be registered with the InterNIC. The current form used to register the name of the domains such as isp.net, inc.com, msu.edu, and mycompany.com can be located at: http://rs.internic.net/rs-internic.html The ASISP DNS server is primary for the in-addr.arpa domains of 11.5.6.in-addr.arpa and 12.5.6.in-addr.arpa. These reverse domains also need to be registered with the InterNIC. The URL previously listed is also the location to get the current form used to register the in-addr.arpa domains. Part of the InterNIC registration includes listing the fully-qualified domain names and IP addresses of the primary and secondary DNS servers. Also, the DNS servers should be up and running and answering queries at the time the registration forms are submitted. The primary name server for the 10.9.8.in-addr.arpa file that is associated with mycompany.com is firewall.mycompany.com. We assume that the DNS administrator for mycompany.com has registered the 10.9.8.in-addr.arpa file with the InterNIC as well as the domain name mycompany.com. 7.2.2 Create the inc.com Primary Domain Files on ASISP If this is the first time the Operations Navigator DNS configuration is used on ASISP, the DNS configuration takes the user into the DNS configuration Wizard. For this scenario, the only thing the Wizard should be used for is to configure the localhost host with the IP address of 127.0.0.1. See Section 3.2.2, “Creating the Primary Name Server on As1” on page 29 for details on the Wizard windows. There are two primary domains that need to be created on ASISP for inc.com: inc.com. and 6.5.11.in-addr.arpa. To create inc.com, go into Operations Navigator DNS configuration on ASISP and right-click on Primary Domains. Click on New Primary Domain. In the next window that Operations Navigator presents, we must override the default domain name. Type in inc.com. (do not forget the trailing period after com). Although we are configuring the inc.com domain, the administrator for this domain is probably located in the domain that the AS/400 ASISP is located in, isp.net. Therefore, the default for Administrator’s e-mail may be correct: 178 AS/400 TCP/IP DNS and DHCP Support postmaster.ASISP.isp.net. Also, enable the Create and delete reverse mappings by default check box, which is located on the same window as the domain name. 
This causes the 6.5.11.in-addr.arpa primary domain file to be created automatically when the first new host is added to the inc.com primary domain. Click on OK. The inc.com primary domain file has been created but it contains no hosts. 1. Right-click on inc.com and select New Host. 2. Click on Add and type in the first host name: c1 and its IP address: 11.5.6.21. 3. Click OK twice. Notice that the reverse mapping domain of 6.5.11.in-addr.arpa has been automatically created. It also contains the c1 host. Repeat the procedure to add c2 and c3 to the inc.com domain the same way c1 was added. Every forward mapping primary domain file should include the host name of localhost with the IP address of 127.0.0.1. If the localhost host was not added with the Wizard, we need to add one last new host of localhost with IP address of 127.0.0.1 to the inc.com domain. Figure 176 shows the contents of inc.com primary domain file. Notice that the reverse mapping files of 0.0.127.in-addr.arpa and 6.5.11.in-addr.arpa are also listed as primary domains. Figure 176. Contents of inc.com Primary Domain File on ASISP Figure 176 also shows that the two primary domains inc.com and 6.5.11.in-addr.arpa are disabled by the hashing behind the icons to the left of The default e-mail address of the DNS administrator is the address of: postmaster.ASISP.ips.net. If we use this address, the AS/400 ASISP needs to have a user profile and a POP3 user for postmaster. Tip Providing DNS Services on the Internet 179 each domain file. Even if the name server is updated or restarted, the DNS server cannot use these files until they are enabled. To enable inc.com and 6.5.11.in-addr.arpa, right-click on each primary domain file and click on enable. Use the update server smart icon to refresh the DNS configuration with the new primary domain files. 7.2.3 Create the msu.edu Primary Domain Files ASISP For ASISP DNS server to support name and address resolution for the customer with the domain msu.edu, we need to create two new primary domains: msu.edu and 6.5.12.in-addr.arpa. The preceding two primary domain files should be created in the same way explained for the two primary domain files for inc.com in Section 7.2.2, “Create the inc.com Primary Domain Files on ASISP” on page 177. We then need to add new hosts to the msu.edu primary domain file: m1, m2, m3, and localhost The preceding hosts are added automatically in the 6.5.12.in-addr.arpa file if we enable the Create and delete reverse mappings by default option when we create the primary domain msu.edu. Figure 177 shows the contents of the msu.edu primary domain file. The figure shows both msu.edu and 6.5.12.in-addr.arpa domains disabled. Figure 177. Contents of msu.edu Primary Domain File on ASISP We now need to enable msu.edu and 6.5.12.in-addr.arpa primary domain files and click on the update server smart icon when we are ready to "go live" with the new DNS configuration. 7.2.4 Configure the Root Servers on ASISP The root name server is a configuration parameter that affects the entire DNS server configuration. This configuration is contained in the Properties of the name server itself. 180 AS/400 TCP/IP DNS and DHCP Support 1. Right-click on the DNS server-Asisp.isp.net and click on Properties. 2. Click on the Root Servers tab and click Load Defaults. The result is shown in Figure 178. The AS/400 DNS server support is shipped with an IFS file containing the Internet root name server list for the DNS administrator’s convenience. 
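That shipped list uses standard root hints (cache file) syntax: one NS record for the root zone per root server, plus an A record giving that server's address. A truncated, illustrative excerpt (the actual file lists all of the root servers, and the addresses should be kept current as described in the tip later in this section):

   ;       formerly ns.internic.net
   .                       3600000  IN  NS  A.ROOT-SERVERS.NET.
   A.ROOT-SERVERS.NET.     3600000      A   198.41.0.4
   ;       entries for B.ROOT-SERVERS.NET. through M.ROOT-SERVERS.NET. follow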
By clicking on the Load Defaults box, these Internet root name servers are placed in the /QIBM/UserData/OS400/DNS/CACHE file once the DNS configuration is saved or the server is updated. Figure 178. Load Default Root Servers on ASISP At this point click, on OK. If the name server is already started, click on the update server smart icon to refresh the configuration. If the name server is stopped, close the DNS window to save the configuration. The contents of /QIBM/UserData/OS400/DNS/CACHE is displayed in Figure 179. This figure displays the default Internet root name server list that was current at the time this redbook was written. This list is shipped in a file named ROOT.FILE located in the IFS directory: /QIBM/ProdData/OS400/DNS. The ROOT.FILE list is refreshed (if necessary) at every OS/400 release. Therefore, between releases, it is important for the DNS administrator to ensure that this list remains current. See the tip at the end of this section for more details. The /QIBM/UserData/OS400/CACHE file does not contain the query responses that the DNS server has cached but contains the information about the root name servers. Do not let the name of this file mislead you. Note Providing DNS Services on the Internet 181 Figure 179. Contents of CACHE File when Load Defaults Option is Taken 7.2.5 Create the Secondary Domain Files for mycompany.com on ASISP One of the objectives for the DNS configuration on ASISP is to be a backup to the firewall DNS server running on AS/400 AS1. The Firewall DNS is authoritative (that is, primary) for the public mycompany.com domain located in the 8.9.10.0 network. This is the public or non-secure side of the firewall. This configuration is described in Section 6.2.3, “Firewall Installation and Configuration” on page 133. By using the default Internet root name server list provided with the OS/400 V4R2M0 DNS option, you assume the default list is current. You can verify the list is current by downloading a new list from the Internet. To do this use, anonymous ftp to get the file named.root from the subdirectory of domain. This file is located on host ftp.rs.internic.net at IP address of 198.41.0.5. The AS/400 system shipped default Internet root name server list is stored in /QIBM/ProdData/OS400/DNS/ROOT.FILE. Tip 182 AS/400 TCP/IP DNS and DHCP Support To back up the firewall’s DNS server, we need to configure ASISP to be a secondary name server to the primary name server firewall at 8.9.10.11. To do this, follow these steps: On ASISP’s Operations Navigator DNS configuration: 1. Right-click on Secondary Domains. 2. Click on New Secondary Domain. 3. On the next window override the default domain by typing: mycompany.com. (Do not forget the trailing period after com.) 4. Click on Add. Type in the IP address of the master name server, which, in this case, is the non-secure port of the firewall: 8.9.10.11. Make sure the save copies of the master server data are enabled (there should be a check in the small box). Figure 180 shows how the DNS configuration of the secondary domain file of mycompany.com should look before clicking on OK. Figure 180. Creating a Secondary Domain File for mycompany.com. on ASISP 5. Click on OK. We are only half finished with providing a backup for the firewall DNS server. We still need to create a secondary domain file on ASISP for the 10.9 8.in-addr.arpa domain. 6. Right-click again on Secondary Domains. 7. Click on New Secondary Domain and override the default domain by typing:10.9.8.in-addr.arpa. 8. Click on Add. 
Type in the IP address of the master name server firewall: 8.9.10.11. 9. Click on OK. Providing DNS Services on the Internet 183 10.Click on update server smart icon to refresh the DNS configuration (or if the DNS server is stopped, close the DNS configuration window to save the configuration). Figure 181 shows the two secondary domains we have just created: 10.9.8.in-addr.arpa and mycompany.com. Figure 181. The Secondary Domains on ASISP to Backup Firewall.mycompany.com 7.2.6 Create the Secondary Domain Files on ASISP2 The ASISP2 DNS server is the secondary name server to the ASISP primary name server. Thus, we need to create four new secondary domains on ASISP2, which are: • inc.com. • 6.5.11.in-addr.arpa. • msu.edu. • 6.5.12.in-addr.arpa. In each case, the master server IP address needs to be the IP address of the primary name server ASISP, which is 7.10.10.240. Figure 182 shows the four secondary domains residing on the secondary name server ASISP2. We are almost finished configuring the secondary name server ASISP2. The last step is outlined in the next section. For a thorough example of configuring a secondary domain server, please see Section 3.2.6 on page 57. NOTE 184 AS/400 TCP/IP DNS and DHCP Support Figure 182. Secondary Domains on Secondary Name Server ASISP2 7.2.7 Configure the Root Servers on ASISP2 The ASISP2 DNS server does do more than back up the primary domain files that reside on ASISP. If the ASISP DNS server does not respond, the secondary name server ASISP2 needs to handle all of the queries that ASISP normally handles. That includes more than just the queries for information that the name server is authoritative for. ASISP2 also needs to know where to go when it does not have the answer to a query. ASISP2 needs to be configured to go to the same place that ASISP goes when it cannot answer a query: the Internet root name servers. Thus, the configuration steps outlined in Section 7.2.4 on page 179 also need to be repeated on ASISP2 to load the default Internet root server list. Lastly, when finished, the update server smart icon needs to be clicked to refresh the DNS configuration on ASISP2. Or if the DNS server is currently stopped, the DNS configuration window needs to be closed to save the new configuration. 7.2.8 Configure the Clients Once the DNS servers ASISP and ASISP2 are configured and started, and the domain names, in-addr.arpa files, and IP addresses are registered with InterNIC, the clients located in msu.edu and inc.com should be configured to use either ASISP or ASISP2 DNS servers. Both name servers are considered to be authoritative for msu.edu and inc.com. Half of the clients can be configured with ASISP’s IP address for its DNS server and the other half of the clients can be configured with ASISP2’s IP address for its DNS server. This balances the workload between the two name servers. Many clients can be configured with more than one IP address for its DNS server. If this is the case, both IP addresses should be listed, half the clients with ASISP’s IP address listed first and the other half with ASISP2’s IP address listed first. This also balances the workload between the two name servers. At this point, the primary name server ASISP should have its configuration updated to include an NS record stating that ASISP2 is secondary to the primary domain files. This should be done on ASISP with the Secondary Name Server tab of the Properties for each primary domain file. 
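The visible effect of that step is an extra NS record in each primary domain's own data (the counterpart, inside the zone itself, of the delegation records sketched earlier in this chapter), so that zone transfers and query responses list both name servers. For inc.com, the records amount to roughly:

   ; in inc.com.DB on ASISP (illustrative)
   inc.com.    IN  NS  asisp.isp.net.
   inc.com.    IN  NS  asisp2.isp.net.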
Details about this configuration step and why we recommend it is in Section 3.2.6.2 on page 58. Tip © Copyright IBM Corp. 1998 185 Chapter 8. DNS Server Tips, Tools, and Problem Determination This chapter describes tips to prevent DNS server problems, performance considerations, and how to identify problem symptoms, and use the appropriate diagnostic tools, logs, or traces to debug problems. 8.1 Tips and Tools This section contains some tips to prevent problems and tips for performance. It also describes the various tools available to a DNS administrator to troubleshoot DNS problems and how to use those tools. 8.1.1 Tips for Preventing Problems IFS File Ownership When the Operations Navigator GUI creates DNS files in the IFS directory /QIBM/UserData/OS400/DNS, the file is created with the owner value set to the AS/400 user profile that the user used to connect. If this user profile is deleted, all objects owned by this user profile are also deleted. Restarting the DNS Server As a DNS server queries other name servers for information it is not authoritative for, it stores the answers in its cache. That way if another client asks for the same query, the name server can supply the response from its cache instead of querying the authoritative name server again. Completely stopping and starting the DNS server clears the cache. Therefore, to keep a DNS server’s cache rich with information, avoid stopping and starting the name server needlessly. If a configuration change is made, the smart icon "update server" should be used to update the configuration without stopping and starting the name server. The "update server" smart icon does not clear the name server’s cache. The Postmaster Electronic Mail Address As we create a new primary domain, the Operations Navigator DNS configuration displays a default administrator’s e-mail address of: postmaster... The preceding e-mail address is the address of the person responsible for the data in this zone. If we keep the default, the preceding e-mail address is inserted in the SOA record of the primary domain file. The AS/400 system does not automatically create a user profile or a POP3 system directory entry of postmaster. Thus, the DNS administrator needs to remember to manually create a POP3 directory entry of postmaster and add an associated SMTP system alias table entry of: To prevent accidentally deleting the DNS files (if the user’s user profile is deleted), change the ownership of the files to a system-supplied user profile such as QTCP. Tip 186 AS/400 TCP/IP DNS and DHCP Support SMTP UserId = postmaster SMTP Domain Name = .. Manually editing the DB files in DNS IFS directory: Every primary domain file configured with Operations Navigator’s DNS configuration results in the creation of a file with a .DB extension in the IFS directory /QIBM/UserData/OS400/DNS. We recommend that you do not manually edit these files. If changes need to be made, use the Operations Navigator DNS configuration displays. The reasons for this recommendation are as follows: • Every DB file contains an SOA record. This record contains the serial number that secondary name servers check to make sure their secondary domain files are at the same level as the primary domain files. Therefore, when changes are made to the primary domain files, the Operations Navigator DNS configuration automatically increments this serial number. If the DB files are edited manually, it is up to the DNS administrator to remember to manually increment the serial number in the SOA record. 
• When the Operations Navigator DNS configuration is used to add (or delete) hosts in the forward mapping primary domain files, Operations Navigator automatically adds (or deletes) the host in the corresponding reverse mapping (in-addr.arpa) file if the Create and delete reverse mappings by default option is checked. This saves the DNS administrator configuration time and prevents inadvertently omitting hosts from the in-addr.arpa file. This convenience is lost when primary domain files are manually edited: a DNS administrator must remember to add or delete a host from the in-addr.arpa file as well as the forward mapping file.
• When a change is made in the DB files, the DNS server needs to be stopped and started, or restarted, to pick up the change. The AS/400 STRTCPSVR SERVER(*DNS) RESTART(*DNS) command stops and starts the DNS server, which causes the cache to be cleared; we do not recommend doing this. The recommended method to pick up configuration changes is to use the Operations Navigator DNS configuration "update server" smart icon, which refreshes the configuration without clearing the cache. Since Operations Navigator should be used to pick up the configuration changes, the DNS administrator should make the changes with this GUI in the first place.
The LOCALHOST Host
Every primary forward mapping domain file created for a DNS server should include the host localhost with an IP address of 127.0.0.1. Consequently, every DNS server should also have a reverse mapping domain 0.0.127.in-addr.arpa.
8.1.2 Tips for Performance
Use this list of tips if you are concerned about performance:
1. The SOA record on a primary domain identifies the frequency of zone transfers. See page 79 in the second edition of DNS and BIND by Albitz & Liu, or RFC 1537 and RFC 1912, for more information.
2. The default TTL (time to live) value in the SOA record for a primary domain identifies how long a resource record stays in the cache of another server. A longer value creates less network traffic.
Operations Navigator GUI default = 1 day
RFC 1537 recommends 4 days.
RFC 1912 considers 1 to 5 days typical, recommends 3 days, and considers this timer value the most important.
DNS and BIND by Albitz & Liu, second edition, considers 3 hours aggressive, 1/2 day reasonable, and 8-24 hours possible.
Temporary changes to this value can be made when major updates are planned. The TTL value on individual resource records that are referred to, but do not change often, can be configured for a longer time. The individual TTL value on a resource record overrides the TTL value in the SOA record. In this case, long times (1 to 2 weeks) are reasonable for the MX, A, and PTR records of mail hosts, the NS records of a zone, and the A record of a name server.
3. The refresh timer in the SOA record is the time a secondary waits before checking whether an update is needed:
Operations Navigator GUI default = 3 hours
RFC 1537 recommends 24 hours.
RFC 1912 considers 20 minutes to 2 hours short and 2 to 12 hours long.
4. The retry timer in the SOA record is the time a secondary waits before re-attempting a refresh if the refresh query fails.
Operations Navigator GUI default = 1 hour
RFC 1537 recommends 2 hours.
RFC 1912 just says to use a fraction of the refresh timer.
5. The expire timer is the time after which a secondary stops using its data to answer queries and must complete a zone transfer to continue answering queries.
Operations Navigator GUI default = 7 days
RFC 1537 recommends 30 days.
RFC 1912 suggests 2 to 4 weeks, longer than a major outage.
6. The round robin function of the DNS server performs a simple form of load sharing; however, this is not load balancing. For details, see pages 211 and 212 in the second edition of DNS and BIND by Albitz & Liu. The round robin function is reasonable for terminal servers, FTP servers, or Web servers. The recommendation is to reduce the TTL for these hosts' resource records so they do not stay in cache.
7. Running with a debug level greater than 3 significantly increases the startup time of your DNS server.
8. We recommend that you locate your name server on the network with the most traffic.
• Create additional name servers to increase performance. The creation of additional secondary or cache-only servers can reduce the load on the primary and first secondary name servers.
• RFC 1912 recommends not configuring secondaries to get their zone transfers from another secondary.
9. The DNS Server Statistics information can be used to determine how your DNS server is performing.
• The Stats information can also be used to determine exactly where all received queries are coming from. The statistics are reported as totals (the global numbers) and by requesting address.
• You can make a calculation using some of the data in the Statistics dump to determine how busy the name server is. An example of this calculation is in a Tip at the end of Section 8.1.6, “Dump Server Statistics” on page 194.
10. Page 272 in the second edition of DNS and BIND by Albitz & Liu warns that a bad connection or a network outage may masquerade as poor DNS server performance. Use the debug tool and ping to check whether there are addresses that never respond.
11. AS/400 considerations:
• Some DNS reference materials refer to STATS information that is logged automatically every hour and placed in a job log or the name server’s equivalent of an AS/400 job log. This is not the case with the AS/400 system’s DNS server. To view STATS information, you must use the Operations Navigator GUI to manually dump the STATS information. For more information and a sample STATS output, see Section 8.1.6, “Dump Server Statistics” on page 194.
• The run priority for the DNS QTOBDNS job in V4R2 defaults to 50.
8.1.3 Tools for Problem Determination
In this section, several tools are documented for troubleshooting DNS problems. However, some of these tools should only be used on rare occasions and some only if instructed to do so by AS/400 Software and Service Support. The most important debug methods, in order of usefulness, are as follows:
1. Operations Navigator DNS Configuration displays:
The Operations Navigator DNS Configuration displays should be used to check for completeness and for mistyped domain names, host names, and IP addresses. From Operations Navigator, the DNS administrator can make sure the server has been updated after a DNS configuration change has been made, ensure newly created domains are enabled, and verify the server is started.
2. The AS/400 job logs:
After configuring a name server for the first time or after any major configuration changes, always review the QTOBDNS job log for errors after the name server has been started. Many DNS configuration errors cause errors to be posted in this job log. Thus, this job log is a critical tool when debugging a DNS problem.
3.
Nslookup interactive tool: Nslookup is a way to pose queries to the name server and view its responses interactively. This tool can be useful of you suspect a query is not being resolved the way you think it should be. It is also useful as an informal testing mechanism after a name server is first configured. 4. Querylog file in AS/400 IFS: This is a log of the queries the name server has received. It can be useful to verify that the query from a client actually made it to the name server (that is, DNS Server Tips, Tools, and Problem Determination 189 the TCP connectivity exists and the client is indeed sending the query as you expect). Querylog is sometimes useful when debugging mail delivery problems. With the exception of Operations Navigator DNS configuration, the preceding tools are discussed in more detail in the following sub-sections as well as some additional tools that have less importance when troubleshooting a DNS problem. 8.1.4 AS/400 Job Logs 8.1.4.1 The Active QTOBDNS Job Log With so much time spent on Operations Navigator to configure the AS/400 DNS server, it is easy to overlook one of the most informative logs for DNS problem determination: the AS/400 job log of the DNS job: QTOBDNS. If the DNS server is started, the QTOBDNS job is active and running under the QSYSWRK subsystem. You can locate it with the following AS/400 command: WRKACTJOB SBS(QSYSWRK) Once QSYSWRK’s active jobs are displayed, page down until you find the job named QTOBDNS. Choose option 5 in front of this job to work with the job and on the next display, choose option 10 to display the job log. When the job log is shown, press F10 to display the detailed messages. The bottom of the job log is displayed. You may need to page up to find error messages logged at the time you had a problem. If you find an error message in the job log that needs investigating, you can see more details by placing the cursor on the message itself and pressing F1 for help. When you finish configuring the AS/400 DNS server and start the DNS server for the first time, we highly recommend that you review the QTOBDNS job log for errors that are logged when the DNS server starts. For example, spelling errors in the names of primary domain files and spelling errors in the domain names of hosts cause error messages to be posted to the QTOBDNS job log. Tip When you finish configuring a secondary name server and start it for the first time, we highly recommend that you review both QTOBDNS job logs: the QTOBDNS job log on the secondary name server and the QTOBDNS job log on the primary name server. Remember that if a secondary name server’s configuration has the Backup Primary Domain Files checked off, the secondary name server is booted using the backup files and then attempts to do the zone transfer from the primary name server. In this case, the zone transfer can fail but the secondary name server is started. This is actually good from the standpoint of availablility, but the point is that the DNS administrator never knows the zone transfer failed without reviewing the QTOBDNS job log on the secondary name server. Tip 190 AS/400 TCP/IP DNS and DHCP Support 8.1.4.2 The Inactive QTOBDNS Job Log Sometimes it is necessary to review the job log of QTOBDNS after the DNS has been stopped and the job QTOBDNS has ended. Or, even more importantly, the DNS server is having such a severe problem that the QTOBDNS job starts and ends before you have a chance to review the active QTOBDNS job log. 
To find the job log of a job that is no longer active, you need to find the job’s spooled output. It helps to know the user that job runs under. The DNS jobs run using the QTCP user profile. Therefore, use the following command: WRKSPLF QTCP If the job has recently ended, the job’s spooled file may be near the bottom of the resulting list. Use F18 to go to the bottom of the Work With Spooled Files List. The name of the job is listed under the User Data column. 8.1.4.3 The QTOBXFER Job The QTOBXFER job is a job that starts and is active on the secondary name server for the duration of a zone transfer. This job typically starts and ends quickly and to review its job log, you need to review the job’s spooled files because the job has usually ended. The job runs using QTCP user profile; therefore, use the WRKSPLF QTCP command and F18 to go to the bottom of the list to locate the spooled file containing the ended job’s job log. If the secondary name server is configured to have three secondary domains as it was in Chapter 3.2.6, “Creating a Secondary DNS Server” on page 57, three QTOBXFER jobs will start. There is always one QTOBXFER job for each domain file that is being zone transferred. 8.1.4.4 The QTOBXMIT Job The QTOBXMIT job is a job that starts and is active on the primary name server for the duration of a zone transfer. This job typically starts and ends quickly and to review its job log, you need to review the job’s spooled files. See previous sections for how to display a job’s spooled files and locate a job log of an ended job. 8.1.5 NSLOOKUP The AS/400 Name Server Lookup program (nslookup) is a program that allows you to interactively simulate a client to query the DNS server and view the If a zone transfer fails, look in the QTOBXFER job log. Also, always review the QTOBDNS job logs on both the secondary and the primary name servers when troubleshooting. Tip If a zone transfer fails, look in the QTOBXMIT job log. Also, always review the two QTOBDNS job logs, one on the secondary name server and one on the primary name server. Tip DNS Server Tips, Tools, and Problem Determination 191 responses. You can use the nslookup program interactively by entering the following AS/400 command: CALL PGM(QDNS/QTOBLKUP) After entering the command, the AS/400 display should look similar to Figure 183. Figure 183. Initial NSLOOKUP Display If you get an error message initially instead of the Default Server and Address as in Figure 183, then check “Problem Symptom 2:” on page 208. With nslookup, you can query your DNS server and make sure it is giving out the responses you expect it to. When you enter nslookup, the type of query that nslookup defaults to is the A record query, which is: type in a host name and the server will respond by giving the IP address of that host. Several examples of nslookup queries are listed in Chapter 5.5.8, “Verifying DNS with Name Server Lookup” on page 111. Some of the other types of queries that nslookup accepts are: SET TYPE=MX After the set type=mx command is entered, any text entered subsequently causes the name server to be queried for MX records for the host or the domain you typed in. For example, if a wildcard MX record of: *.mycompany.com IN MX AS1.mycompany.com. was entered in the primary domain file of mycompany.com. (see Chapter 3.2.3, “Configuring AS1 as a Mail Server” on page 44 for an example on how to do this). This allows a query of .mycompany.com. to be responded to with the information that AS1.mycompany.com. 
is the mail server, as long as the host queried does not have an A record configured for itself. (The initial nslookup display shown in Figure 183 simply lists the default server, as1.mycompany.com, and its address, 10.5.69.222, followed by an empty > prompt.) Figure 184 shows an MX query for host asx and the name server's response. The type had already been set to MX before the query was entered. Figure 184. MX Query Using Nslookup Note: When viewing the results of nslookup, be aware that the text to the right of a > is what the user typed in. The text that does not have a > to the left of it is text that nslookup displayed. After a query is typed by the user, nslookup always first lists the name server and the name server IP address that is giving the response; then nslookup lists the name server's response below. SET TYPE=PTR The preceding command allows subsequent commands to query PTR records. In other words, enter an IP address and get a response supplying the host name. LS -D (for example, ls -d mycompany.com) The preceding command queries the name server for all the information it knows about the domain file listed. In the example command, this domain is mycompany.com. Figure 185 on page 193 shows the nslookup query and the name server's response. > > asx.mycompany.com. Server: as1.mycompany.com Address: 10.5.69.222 asx.mycompany.com preference = 0, mail exchanger = as1.mycompany.com mycompany.com nameserver = as1.mycompany.com as1.mycompany.com internet address = 10.5.69.222 > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window DNS Server Tips, Tools, and Problem Determination 193 Figure 185. Nslookup Result of ls -d mycompany.com Command Figure 186. Two Queries for Otherserver Figure 186 shows the nslookup results of two queries for OTHERSERVER.OTHERDOMAIN.mycompany.com. If you remember the details from Chapter 5.5 on page 96, the child name server OTHERHOST is authoritative for the hosts located in OTHERDOMAIN. The name server in AS1 is authoritative for mycompany.com. Thus, the first time nslookup queries the name server AS1 for OTHERSERVER, the name server AS1 must query the child server OTHERHOST on behalf of nslookup and then return OTHERHOST's response to nslookup. Because the answer was really from the OTHERHOST name server, the response back to nslookup is considered an "authoritative" response. The AS1 name server then caches this answer so the second time we use nslookup to submit the same > > ls -d mycompany.com. [as1.mycompany.com] mycompany.com. SOA as1.mycompany.com postmaster.as1.mycompany.com. (888531153 10800 3600 604800 86400) mycompany.com. NS as1.mycompany.com mycompany.com. NS as5.mycompany.com as5 A 10.5.69.221 otherdomain NS otherhost.otherdomain.mycompany.com otherhost.otherdomain A 10.1.1.2 p23gb74 A 10.5.62.187 * MX 0 as1.mycompany.com as1 A 10.5.69.222 p23thkp1 A 10.5.69.204 as2 A 10.5.69.211 mycompany.com. SOA as1.mycompany.com postmaster.as1.mycompany.com. (888531153 10800 3600 604800 86400) > > > otherserver.otherdomain.mycompany.com. Server: as1.mycompany.com Address: 10.5.69.222 Name: otherserver.otherdomain.mycompany.com Addresses: 10.5.69.207, 10.1.1.7 > > otherserver.otherdomain.mycompany.com.
Server: as1.mycompany.com Address: 10.5.69.222 Non-authoritative answer: Name: otherserver.otherdomain.mycompany.com Addresses: 10.5.69.207, 10.1.1.7 > ===> F3=Exit F4=End of File F6=Print F9=Retrieve F17=Top F18=Bottom F19=Left F20=Right F21=User Window 194 AS/400 TCP/IP DNS and DHCP Support query to the AS1 name server, it returns the response directly from its cache. Thus, the second response listed in Figure 186 is considered non-authoritative. Any time a response is labeled non-authoritative, it is a response that came out of a name server’s cache. For more information on how to use nslookup, see the DNS chapter in the TCP/IP Configuration and Reference, SC41-5420-01. 8.1.6 Dump Server Statistics Dumping the name server statistics can tell you how busy your name server is and can help a DNS administrator balance the workload between primary and secondary name servers. The first time the name server statistics is dumped, a file named STATISTICS is created in the /QIBM/UserData/Os400/DNS directory. Subsequent requests to dump the server statistics causes additional information to be added to the same file. When the server statistics is dumped, a pop-up window displays the server statistics in the Operations Navigator DNS configuration window. However, the dump file is easier to view when you use Operations Navigator File Systems to view the /QIBM/UserData/OS400/DNS/STATISTICS file using a program such as Netscape. To dump the name server statistics, follow these steps: 1. Use Operations Navigator to go into the DNS configuration. 2. Click on View. 3. Click on Server Statistics (see Figure 189 on page 201; the same pull-down menu contains the option to dump the active server database). 4. After a short wait, a pop-up window is shown containing the server statistics. 5. If you have a program on your client such as Netscape, you can use Operations Navigator File Systems to open the STATISTICS file with Netscape and view it with a much larger window. Again, the STATISTICS file is located in the /QIBM/UserData/OS400/DNS directory. The following example shows a Statistics dump taken from a STATISTICS file. Please note that the name server the dump was taken from was a test name server and, therefore, not very busy. +++ Statistics Dump +++ (888832902) Mon Mar 2 10:01:42 1998 241630 time since boot (secs) 241630 time since reset (secs) 6 Unknown query types 387 A queries 3 NS queries 1 CNAME queries 59 SOA queries 301 PTR queries 49 MX queries DNS Server Tips, Tools, and Problem Determination 195 2 AXFR queries 41 ANY queries ++ Name Server Statistics ++ (Legend) RQ RR RIQ RNXD RFwdQ RFwdR RDupQ RDupR RFail RFErr RErr RTCP RAXFR RLame ROpts SSysQ SAns SFwdQ SFwdR SDupQ SFail SFErr SErr RNotNsQ SNaAns SNXD (Global) 849 2 0 2 2 2 0 0 0 0 0 4 2 0 0 0 843 2 2 0 0 0 0 463 550 27 [10.5.62.187] 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 8 6 0 [10.5.69.208] 432 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 432 0 0 0 0 0 0 432 257 0 [10.5.69.217] 61 0 0 0 0 0 0 0 0 0 0 4 2 0 0 0 57 0 0 0 0 0 0 4 0 0 [10.5.69.221] 331 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 331 0 0 0 0 0 0 2 282 26 [10.5.69.222] 17 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 15 0 2 0 0 0 0 17 5 1 [10.1.1.2] 0 2 0 2 0 2 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 -- Name Server Statistics -- --- Statistics Dump --- (888832902) Mon Mar 2 10:01:42 1998 We have defined some of the more useful statistics in the previous dump: • 241630 seconds have elapsed since the name server was started. • 241630 seconds have elapsed since the name server was last updated. 
• The name server received six queries of which it did not recognize the type. 196 AS/400 TCP/IP DNS and DHCP Support • The name server has received 387 A record queries since the name server started. This is usually the most common type of query: an address lookup based on a host name. • The name server has received three NS queries. A name server can do an NS query when it is trying to look up a name server for a root domain, or an application such as NSLOOKUP can do an NS query. • The name server has received one CNAME query. A CNAME query is a query that is trying to find out about aliases for a host. • The server has received 59 SOA record queries. A secondary name server queries the SOA record of the primary name server to verify its domain files are at the latest level. • The name server has received 301 PTR queries. These are reverse lookup queries; the client has the IP address and is querying for the host name. • The name server has received 49 MX record queries. This query type is used for mail. • The name server received two AXFR queries. A secondary name server sends an AXFR query to initiate a zone transfer. • The name server received 41 ANY queries. This type of query is used to gather any type of information the name server has for a particular host name. This includes A records, CNAME records, and MX records. Sendmail uses this type of query. The IBM Firewall for AS/400 uses sendmail. The next section of the dump is the legend, which is the key that needs to be used to identify the string of numbers listed below the legend. For example, the first number under the global heading is 849, which according to the first term in the legend, is RQ. The second number under global is 2, which is RR, the second term in the legend, and so on. This particular statistics dump was taken on AS1 with a configuration similar to the one described in Chapter 5.5. Thus, AS1 is a parent server, primary for mycompany.com. Here we identify the non-zero global numbers: • 849 - RQ: The name server has received 849 queries total since it was started. • Two - RR: This is the number of responses the name server has received. These are responses for queries this name server has sent. • Two - RNXD: This number is how many "no such domain" answers this name server received. • Two - RFwdQ: The number of queries this name server received that needed additional processing before they could be answered. • Two - RFwdR: The number of responses this name server received that answered the original query and were passed back to the application that made the query. • Four - RTCP: The name server has received four queries over TCP instead of UDP. • Two - RAXFR: There were two zone transfers initiated by another secondary name server to this name server. • 843 - SAns: The count of responses sent by this server. • Two - SFwdQ: The number of queries that were sent by this name server to another name server when this name server did not have the answer in its domain data or its cache. DNS Server Tips, Tools, and Problem Determination 197 • Two - SFwdR: The number of responses from another name server that were forwarded to some other name server or client. • 463 - RNotNsQ: The count of queries that were not from other name servers. • 550 - SNaAns: The number of non-authoritative responses (that is, from a name server's cache) sent by this name server. • 27 - SNXD: The number of "no such domain" responses sent by this name server.
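Because the legend and the rows of counters are printed separately, pairing them up by hand is tedious. The following short Python sketch is not part of the AS/400 DNS support; it simply assumes the field order shown in the sample dump above, pairs each legend term with its Global value, and computes the average query rate described in the Tip that follows:

# stats_pair.py - pair the statistics legend with the Global counters
# Assumption: the legend and the Global line use the order shown in the
# sample STATISTICS dump above; adjust the two strings for your own dump.
legend = ("RQ RR RIQ RNXD RFwdQ RFwdR RDupQ RDupR RFail RFErr RErr RTCP "
          "RAXFR RLame ROpts SSysQ SAns SFwdQ SFwdR SDupQ SFail SFErr "
          "SErr RNotNsQ SNaAns SNXD").split()
global_line = "849 2 0 2 2 2 0 0 0 0 0 4 2 0 0 0 843 2 2 0 0 0 0 463 550 27".split()
uptime_seconds = 241630      # "time since boot (secs)" from the dump header

stats = dict(zip(legend, (int(n) for n in global_line)))
for name, value in stats.items():
    if value:                # print only the non-zero counters
        print(f"{name:8} {value}")

print(f"queries per second: {stats['RQ'] / uptime_seconds:.4f}")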
There are similar strings of numbers below the Global numbers that break out the same statistics for individual clients identified by IP address. For more information on the Statistics dump and a complete definition of the legend, see pages 140-149 in the second edition of DNS and BIND by Albitz & Liu. 8.1.7 Run Debug For troubleshooting purposes, you may be instructed by AS/400 Software Service and Support to run your name server in debug mode. We recommend that you run with the debug level of zero unless instructed to run with a higher level by AS/400 Software Service and Support. The default level for debug is zero, which saves no debug information. To increase the debug level, use the following steps: • Use Operations Navigator to go into the DNS configuration. • Right-click on the DNS server. • Click on Properties. • Use the Up arrow to increase the Debug level to a value from 1 to 11. A debug level of 3 is a good starting place for troubleshooting. The higher the debug level, the more information is recorded. See Figure 187. • Stop and start the name server to pick up the debug level change. • Create the failure or DNS problem you are attempting to troubleshoot. • A debug level greater than 0 creates a file named RUNDEBUG in the /QIBM/UserData/OS400/DNS directory. • Review this file for help in troubleshooting or deliver the file to AS/400 Software Support per their instructions. • After capturing the problem with the appropriate debug level, go back into the DNS configuration, turn the debug level back down to 0, and stop and start the DNS server to pick up the configuration change. To determine how busy your server is, a helpful calculation is to take the Global RQ (the number of queries the server has received) and divide it by the number of seconds the server has been active. This gives an average number of queries per second. In the previous sample statistics dump, the calculation is 849/241630 queries per second, which is roughly 0.21 queries per minute, or about 12.6 queries per hour (not a very busy name server). We recommend that you make the preceding calculation for your name server and review the AS/400 free disk space before changing the name server's debug level or enabling logging on your name server. Those two name server tools are explained in the next sections. Tip 198 AS/400 TCP/IP DNS and DHCP Support • Once the problem has been debugged and the RUNDEBUG file is no longer needed, delete it to free up the AS/400 disk space. Be Careful: A busy name server with debug level set to a level higher than 0 can cause the RUNDEBUG file to become large quickly. Before using a debug level of greater than 0, check the status of AS/400 disk space to make sure the system has space for a large file and check the activity of the name server with a Statistics dump. Figure 187. Debug Level Setting Change The following excerpt is from the RUNDEBUG file on the AS1 name server. An A record query was sent to AS1 from a client with IP address 10.1.1.2. The query was for: AS2.mycompany.com. As you can see, one A record query caused 16 lines to be logged in the RUNDEBUG file. This is with only a debug level of 3. The response to the A record query was IP address 10.5.69.211. Notice that a level of 3 did not cause the IP address in the response to be logged in the RUNDEBUG file. If the response the name server gives needs to be reviewed, you can use the nslookup interactive tool (the recommended method) or a higher level of RUNDEBUG can be used.
Excerpt from RUNDEBUG as a Result of One A Record Query datagram from [10.1.1.2].53, fd 7, len 35; now Fri Feb 27 11:08:56 1998 req: nlookup(as2.mycompany.com) id 50914 type=1 class=1 req: found 'as2.mycompany.com' as 'as2.mycompany.com' (cname=0) wanted(SPP:0000 :1aefQTOBDNS QTCP 023250 :39040:33:22, 1, 1) [IN A] finddata: added 1 class 1 type 1 RRs req: foundname=1, count=1, founddata=1, cname=0 DNS Server Tips, Tools, and Problem Determination 199 sort_response(1) findns: SOA found req: leaving (as2.mycompany.com, rcode 0) free_nsp: as1.mycompany.com rcnt 1 findns: 1 NS's added for 'mycompany' free_nsp: as1.mycompany.com rcnt 1 doaddinfo() addcount = 1 do additional "as1.mycompany.com" (from "mycompany.com") found it ns_req: answer -> [10.1.1.2].53 fd=7 id=50914 size=145 Local For more information on reading the RUNDEBUG file, see Pages 237-256 in the second edition of DNS and BIND by Albitz & Liu. Important: We recommend that you do not run your DNS server with a debug level greater than zero unless instructed to do so by AS/400 Software Service and Support. Depending on how busy your name server is, any debug level greater than zero can cause the RUNDEBUG file to become large. Also, after the file has been used to troubleshoot the problem, we recommend deleting the file to free up the AS/400 disk space. 8.1.8 DNS Server QUERYLOG It is possible to log all queries to the DNS server in a file named QUERYLOG, which is located in the /QIBM/UserData/OS400/DNS directory. If a DNS server is busy, the QUERYLOG can become quite large very fast; therefore, we recommend that you only turn the logging on after you have monitored and reviewed the summary DNS server statistics and determined that your AS/400 system has enough file space if the QUERYLOG should become large. To turn on logging, use the following steps: 1. Right-click on DNS Server-. 2. Click on Properties. 3. Click on the Options tab. 4. Place a check in the small box labeled: log all queries received by name server. See Figure 188. 5. Stop and start the name server for the change to take effect. 6. After you finish troubleshooting your problem, go back into DNS server properties, disable logging, stop and start your DNS server to pick up the configuration change, and delete the QUERYLOG file after it is no longer needed to free up disk space. 200 AS/400 TCP/IP DNS and DHCP Support Figure 188. Enable Query Logging The file QUERYLOG can be reviewed using Operations Navigator file systems and a program such as Netscape. The following example shows a portion of a QUERYLOG file: Excerpt from QUERYLOG XX /10.5.69.222/OTHERDOMAIN.MYCOMPANY.COM.mycompany.com/A: Tue Feb 24 15:49:16 1998 XX /10.5.69.222/OTHERDOMAIN.MYCOMPANY.COM/A: Tue Feb 24 15:49:16 1998 XX /10.5.69.222/otherdomain.mycompany.com/MX: Tue Feb 24 15:49:18 1998 XX /10.5.69.222/otherhost.otherdomain.mycompany.com/MX: Tue Feb 24 15:49:19 1998 XX /10.5.69.222/otherhost.otherdomain.mycompany.com/A: Tue Feb 24 15:49:19 1998 The preceding query log was from the AS1 parent name server running on an IP address of 10.5.69.222. The configuration was that of 5.6, “Mail Between Otherdomain.mycompany.com and Mycompany.com” on page 116. Notice that the queries were coming from 10.5.69.222 (the IP address listed to the left in each line of QUERYLOG is the source of the query), which is the AS/400 system the name server running on. This means an application on the AS/400 system is querying the name server. In this case, the application was SMTP and Mail Framework. 
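Because QUERYLOG records only the queries received (not the responses) and can grow quickly, a quick summary by source address and query type is often all that is needed. The following Python sketch is not part of the AS/400 DNS support; it assumes the file has been copied to a workstation (for example, with Operations Navigator File Systems) and that each line uses the format shown in the excerpt above:

# querylog_summary.py - summarize a DNS QUERYLOG file by client and query type
# Assumed line format (from the excerpt above):
#   XX /10.5.69.222/otherdomain.mycompany.com/MX: Tue Feb 24 15:49:18 1998
import re
from collections import Counter

pattern = re.compile(r"^XX /([^/]+)/([^/]+)/(\w+):")

clients = Counter()
query_types = Counter()
with open("QUERYLOG") as log:        # local copy of the file; path is an assumption
    for line in log:
        match = pattern.match(line)
        if match:
            source, name, qtype = match.groups()
            clients[source] += 1
            query_types[qtype] += 1

print("Queries per client:", dict(clients))
print("Queries per type:  ", dict(query_types))

Run against the preceding excerpt, such a summary would show all five queries coming from 10.5.69.222, which leads into the mail scenario discussed next.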
Mail was arriving on AS1 (as a PC client’s SMTP outgoing mail server) but the mail was destined for a POP mailbox on the OTHERHOST AS/400 system. Thus, the SMTP and Mail Framework applications on AS1 were querying the name server in an attempt to determine where to send the mail. By understanding the configuration steps in 5.6, “Mail Between Otherdomain.mycompany.com and Mycompany.com” on page 116, you can DNS Server Tips, Tools, and Problem Determination 201 guess that of the five queries previously listed, only the last one received a positive response from the AS1 name server. The tool QUERYLOG is of minor importance except in one area since the log does not contain the responses that the name server is giving out, only the queries the name server is receiving. The one area in which QUERYLOG has greater importance in troubleshooting is in mail problem determination. When SMTP and Mail Framework applications are attempting to deliver mail, they also make queries to the name server. QUERYLOG can be used to identify which queries SMTP and Mail Framework are sending to the name server. The IP address of the client sending the query is the IP address of the AS/400 system itself when SMTP and Mail Framework are the applications doing the querying. The queries listed in QUERYLOG can then be jotted down and nslookup can then be used with those same queries to determine what name server responses the SMTP and Mail Framework applications are receiving. Additional information on debugging mail is contained in Section 8.1.10, “Tips on Debugging Mail on an AS/400 System” on page 202. 8.1.9 DNS server Dump Database It is possible to dump the DNS server database for troubleshooting purposes if AS/400 Software Support instructs you to do so. The resulting file contains the complete configuration of the DNS server as well as the contents of the name server’s cache. Usually, the contents of the name server’s cache is not the cause of a problem and the configuration itself can be easily reviewed using Operations Navigator; thus, the Dump Database is a troubleshooting tool of relatively minor importance. Within Operations Navigator DNS configuration, there is a smart icon labeled Database. Clicking on this icon causes the name server database to be dumped. An alternative method to dump the database is to: • From Operations Navigator DNS Configuration, click on View. • Click on Active Server Database. See Figure 189. It takes a few seconds to finish dumping the database. Figure 189. Dumping the Active Server Database When the active name server database is dumped, a file named DUMPDB is created in the /QIBM/UserData/OS400/DNS directory. The following few lines are an excerpt from the DUMPDB file on OTHERHOST.OTHERDOMAIN.mycompany.com. A few queries were posed to OTHERHOST, which caused it to query the parent server AS1 at IP address of 10.5.69.222. OTHERHOST cached the responses from AS1. The following portion of 202 AS/400 TCP/IP DNS and DHCP Support DUMPDB is the contents of OTHERHOST’s cache. Cached information can be identified by the credibility tag: Cr= Thus Cr=auth [10.5.69.222] on the right-hand side of the line indicates that the information is a cached authoritative response from an IP address of 10.5.69.222. any 86357 IN MX 0 as1.mycompany.com. 
;Cr=auth [10.5.69.222] p23thkp1 86332 IN A 10.5.69.204 ;Cr=auth [10.5.69.222] as1 86320 IN A 10.5.69.222 ;NT=42 Cr=auth [10.5.69.222] For more information on reading the Database Dump, please see Pages 260-263 in the second edition of DNS and BIND by Albitz & Liu. 8.1.10 Tips on Debugging Mail on an AS/400 System When mail is not being delivered as expected, a DNS/Mail administrator is faced with one of the most challenging troubleshooting areas in TCP/IP. 8.1.10.1 The Starting and Ending Place The first step in debugging mail is always knowing exactly what the users are using to address the mail to. If possible, visit the users at the client location and watch them type in the "Mail To" value: . Watch for mis-typing. Make sure the user is using the @ symbol and not using the word at. The second step is to find the SMTP User ID and the SMTP Domain name in the AS/400 system alias table on the AS/400 system for the POP client the mail should be delivered to. These two pieces of information are the starting and ending place for mail. Mail delivery starts by using the "Mail To:" information and ends by delivering the mail to the POP mailbox on the AS/400 system associated with the SMTP User ID and the SMTP Domain name. What the user types to the right of the @ sign in the "Mail To" should match the SMTP Domain name in the AS/400 SMTP system alias table for the POP3 user who should be receiving the mail with one exception: when aliases are used. For example, in Chapter 3.2.3, “Configuring AS1 as a Mail Server” on page 44, the scenario mail was addressed to: user@mycompany.com However, the AS/400 SMTP system alias table listed this user’s SMTP Domain name as AS1.mycompany.com. This discrepancy is OK and mail is successfully delivered because AS1’s local host table listed mycompany.com as an alias to AS1.mycompany.com and the Search First parameter in CFGTCP opt 12 is set to *LOCAL. 8.1.10.2 The POP3 Directory Entry The POP3 directory entry can be a source of confusion for an AS/400 administrator configuring POP3 for the first time. What makes a directory entry a POP3 directory entry? DNS Server Tips, Tools, and Problem Determination 203 The answer is: two parameters in the directory entry determine if the entry is a POP3 directory entry. They are: • Mail Service Level = 2 (System message store) • Preferred address = 3 (SMTP name) For an complete example of configuring a POP3 directory entry, see Chapter 3.2.3.1, “Configuring a POP3 User on AS1” on page 45. TIP: The POP directory entry needs to be configured on the AS/400 system that is the final resting place for the mail (until the user "Get’s the Mail"’). This is the AS/400 system that the POP3 client has its Incoming POP Server configured as. It is the AS/400 system where the POP3 client "GETS" mail. There is another kind of directory entry that can be used to forward mail. It is a different type of directory entry than the POP directory entry. It is explained in Appendix A.2, “Mail Forwarding” on page 433. TCP/IP Configuration Verify the SMTP client sending the mail and the POP client receiving the mail have TCP/IP connectivity to their respective servers. Also verify that each client can successfully ping their server by IP address. If the PING is not successful, you need to debug a TCP/IP connectivity problem before proceeding to debugging a mail problem: • Make sure the appropriate AS/400 line descriptions are active. • Verify the associated IP interface has been started on the AS/400 system. 
• Verify the TCP/IP route exists if the client is on another subnet from the SMTP, POP, or DNS server. If the mail client is configured to use a host.domain name rather than an IP address for the SMTP Outgoing Mail Server or Incoming Mail Server, then verify that a PING to the host name is successful. If PING by IP address works but PING by host name fails, you need to debug a DNS problem before proceeding to debugging a mail problem. 8.1.10.3 DNS Server Verify the DNS server is started and an active QTOBDNS job exists in the QSYSWRK subsystem. Check its job log for errors. Verify the IP interfaces that the DNS server should be bound to are started, including the Internet address listed on the same AS/400 system's CFGTCP opt 12. If changes or corrections have been made to the DNS server, make sure the DNS server has been updated to pick up those changes. Use nslookup to verify the DNS server is responding with the answers you expect. For example, is the DNS server resolving the SMTP domain name used to the right of the @ symbol in the "Mail To" address? If not, this can be a problem unless an alias is used in the AS/400 local host table and Search First = *LOCAL is used (this alias technique was explained in Chapter 3.2.3.4, "Verifying the TCP/IP and SMTP Configuration on AS1" on page 50). 8.1.10.4 SMTP and POP Servers Verify that the SMTP and POP servers are active. If active, their corresponding jobs are listed as active jobs in the QSYSWRK subsystem. Use the following command: 204 AS/400 TCP/IP DNS and DHCP Support WRKACTJOB SBS(QSYSWRK) Page down. If the SMTP server is active, you should find four SMTP jobs named: QTSMTPBRCL QTSMTPBRSR QTSMTPCLNT QTSMTPSRVR If the POP server is active, locate one or more jobs with names such as: QTPOP00622 QTPOP00635 QTPOP00681 where the last five digits in the POP job name can be any number. Also, even one QTPOPxxxxx job active indicates the POP server is active. If the preceding jobs do not exist under the QSYSWRK subsystem, then start these servers with the following commands: STRTCPSVR SERVER(*SMTP) STRTCPSVR SERVER(*POP) If you issue the previous commands and still cannot find the associated active jobs in the QSYSWRK subsystem, it is possible that these jobs are starting but ending before you can locate them. First, check for any errors in the job log of the job that issued the STRTCPSVR commands. If your own interactive job was used to issue the commands, review your own job log with the following command: DSPJOBLOG Press Enter followed by F10, and then page up to look for error messages. Also, if the SMTP or POP jobs are ending with an error, review their spooled job logs for error messages. These jobs run using the QTCP user profile; thus, to find the spooled job logs of the inactive jobs, use the following command: WRKSPLF QTCP Press Enter followed by F18 to go to the bottom of the list. The job name is usually displayed in the User Data field in the Work With Spooled Files display. If the SMTP and POP jobs are active and mail is still not being delivered, always check the SMTP and POP active jobs' job logs for any error messages. Any error messages in these job logs can give you clues as to what is going wrong. TIP: If changes to the AS/400 TCP/IP domain or host table have been made with the CFGTCP command, opt 12 or opt 10, the SMTP server needs to be ended and started again to pick up the changes.
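In addition to checking the jobs on the AS/400 system, a quick test from a client workstation is to confirm that the SMTP, POP3, and DNS servers accept TCP connections on their well-known ports. The following Python sketch is only an informal check and is not part of the AS/400 support; the host name is an example from the scenarios in this book, and DNS normally answers on UDP port 53 as well, which this simple TCP test does not exercise:

# server_check.py - confirm the mail-related servers accept TCP connections
import socket

SERVER = "as1.mycompany.com"          # example host; substitute your own server
PORTS = {"SMTP": 25, "POP3": 110, "DNS (TCP)": 53}

for name, port in PORTS.items():
    try:
        with socket.create_connection((SERVER, port), timeout=5):
            print(f"{name:10} port {port:3}: connection accepted")
    except OSError as err:
        print(f"{name:10} port {port:3}: FAILED ({err})")

A failed connection points back to the TCP/IP, DNS, or server-startup checks described earlier rather than to a mail configuration problem.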
DNS Server Tips, Tools, and Problem Determination 205 8.1.10.5 QMSF Job For mail to be successfully delivered on an AS/400 system, at least one QMSF job needs to be active under the QSYSWRK subsystem. This job should autostart when the QSYSWRK subsystem goes active. However, certain errors can cause the QMSF job to end; thus, if mail is not being delivered, one of the first things to check is to verify that QMSF is active. To do so, issue the following command: WRKACTJOB SBS(QSYSWRK) QMSF should be listed as an active job. If it is not listed, you can start the QMSF job by issuing the following command: STRMSF If you issue the STRMSF command and still cannot find QMSF as an active job under QSYSWRK, the job may be starting but ending right away with an error. If this is the case, the ended job’s job log should be reviewed for error messages. The QMSF job runs using the QMSF user profile; thus, to find the spooled file for the QMSF job log, issue the following command: WRKSPLF QMSF Use F18 to go to the bottom of the list. Many of these QMSF job log spooled files may be listed. Use the F11 key to display the date and time stamps of these jobs to help locate the one you are looking for. If the QMSF job is active and mail is still not being delivered, check the active QMSF job log for errors. 8.1.10.6 The IBM Firewall for the AS/400 If the IBM Firewall is involved in the network configuration and the mail should be flowing across the firewall, verify that the firewall is active with the following command: WRKCFGSTS *NWS If it is not active, you may vary it on with option 1 from the WRKCFGSTS display. Verify that the secure mail server is configured correctly on the firewall. If you have made changes to the AS/400 TCP/IP domain information using CFGTCP opt 12 and the Firewall’s network server description is configured to use this information, you must vary off and vary back on the firewall network server description to pick up the changes. To review how an IBM Firewall for AS/400 should be configured when an internal DNS server exists in the secure network, please refer to 6, “Split DNS: Hiding Your Internal DNS Behind a Firewall” on page 125. If mail inbound from the Internet is not making it to the secure mail server, you can check the mail queue on the firewall. If the mail makes it to the firewall but the firewall cannot relay it, the mail is left on the firewall in the mail queue. To check the mail queue, check: K:\firewall\mqueue\ 206 AS/400 TCP/IP DNS and DHCP Support If the mail is still on the firewall’s mail queue, its control file may contain useful information. The control file is the file that begins with a q (for example, qfRAA002.11). The associated data file begins with a d such as dfRAA002.11. You may also want to check the mail log located in: E:\mptn\etc\mail.log And you also may want to check the error file, which is a file that only exists if there is a mail problem. The error file is located in: E:\mptn\etc\sendmail.err For additional firewall problem determination including mail, please see the redbook AS/400 Internet Security: IBM Firewall for AS/400, SG24-2162-00. 8.1.10.7 The POP Mailbox on the AS/400 System When POP3 mail is successfully delivered on the AS/400 system, it is located in a "POP mailbox" on the AS/400 system until the POP3 user issues the "GET MAIL" command from the POP3 client. It is possible to review the contents of an AS/400 IFS directory to determine if a POP3 user has any mail distributions in the POP3 mailbox. 
This is useful when debugging a mail problem because an administrator does not have to continue to use the POP3 client and issue "GET MAIL" to see if mail is finally working, but rather can check for mail with one "green screen" command, which is: WRKLNK '/QTCPTMM/MAIL/JONEST2' where JONEST2 in the command is the system directory User ID of the POP3 client; this may be different from their SMTP User ID. The JONEST2 User ID was used in an example in Figure 36 on page 46. The SMTP user ID used in the same example was tim as shown in Figure 37 on page 47. If the POP3 mailbox exists, the previous command shows the following display: Figure 190. Locating a POP3 Mailbox on the AS/400 System Work with Object Links Directory . . . . : /QTCPTMM/MAIL Type options, press Enter. 3=Copy 4=Remove 5=Next level 7=Rename 8=Display attributes 11=Change current directory ... Opt Object link Type Attribute Text JONEST2 DIR Bottom Parameters or command DNS Server Tips, Tools, and Problem Determination 207 NOTE: If the previous command is issued and the error message "object not found" is issued to the user's job log, the POP3 mailbox does not exist. It is important to realize that the POP3 mailbox does not exist until the first distribution of mail is delivered to it. If the POP3 mailbox (in the form of the directory listed in Figure 190) is missing, it does not necessarily mean that the POP3 directory entry was misconfigured. It may just mean that mail has never been delivered to this mailbox yet. From the display in Figure 190, take option 5 to view the next level. The next level shows any mail distributions that exist in the POP3 mailbox. Figure 191 shows that two mail distributions are located in the JONEST2 POP3 mailbox. These distributions disappear after the POP3 user issues a "GET MAIL" from the POP3 client. You cannot read the contents of these mail distributions from an AS/400 "green screen". Figure 191. Mail Distributions Located in JONEST2 POP3 Mailbox on the AS/400 System Work with Object Links Directory . . . . : /QTCPTMM/MAIL/JONEST2 Type options, press Enter. 3=Copy 4=Remove 5=Next level 7=Rename 8=Display attributes 11=Change current directory ... Opt Object link Type Attribute Text JW122040.NOT STMF JW122735.NOT STMF Bottom 8.2 Problem Symptoms and Probable Causes As the authors of this redbook prepared their DNS configurations to match the scenarios explained in this book, they, of course, made common mistakes and, the same as any other DNS administrators, had to troubleshoot their problems. These problem symptoms are documented in this section to give a head start to other DNS administrators who may run into the same problems or mistakes when configuring their name servers. It is by no means a complete list of all problems that can occur when configuring name servers. Problem symptom 1: Secondary server fails to load data for zone. You receive a message in the secondary server's DNS job log (QTOBDNS in subsystem QSYSWRK), indicating that the server could not load data for the zone. Example: Could not retrieve serial number for zone 62.5.10.in_addr.arpa Secondary DNS server could not load data for zone 62.5.10.in_addr.arpa Could not retrieve serial number for zone 62.5.10.in_addr.arpa 208 AS/400 TCP/IP DNS and DHCP Support Probable cause 1: You have made a typing mistake in the domain name. In our example, the domain name for the reverse mapping files includes an underscore character _ instead of a dash -.
The correct name is 62.5.10.in-addr.arpa. Probable cause 2: The primary server is not active. A zone transfer cannot take place on a secondary name server if the primary server it is trying to load from is inactive. Probable cause 3: The primary server is active but a security configuration on the primary server is preventing the zone transfer. Check the Security tab in the Properties of the primary server's DNS server and review the Secondary Server Access List. By explicitly listing one secondary server, you implicitly prevent any other secondary server from completing a zone transfer. Also check the Security tab in the Properties of the primary domain file that you are trying to zone transfer. Review the Domain data access for both subnets and IP addresses. By listing a subnet or an IP address here, you implicitly deny access to all other subnets and IP addresses. Problem Symptom 2: NSLOOKUP fails with error message: *** Can't find server name for address 10.5.69.222: Non-existent host/domain *** Default servers are not available Probable cause 1: There is no PTR record in the reverse lookup file for the DNS server. From the previous error message, the 69.5.10.in-addr.arpa primary domain needs to be checked to ensure that the host (in this case, AS1) the DNS server is running on is listed in this file. If it is not listed, add the host to the 69.5.10.in-addr.arpa file and then click on the Update Server smart icon to refresh the name server with the configuration change. Retry the NSLOOKUP. Problem Symptom 3: NSLOOKUP fails to give a response to the query but displays the following text: > otherserver.otherdomain.mycompany.com. Server: as1.mycompany.com Address: 10.5.69.222 *** as1.mycompany.com can't find otherserver.otherdomain.mycompany.com.: No response from server Probable Cause 1: The query for OTHERSERVER posed to the DNS server AS1 is for information that AS1 is not authoritative for; thus, the AS1 name server must query the child server OTHERHOST, which should respond back to the parent server AS1, which should respond back to nslookup. In this case, the child DNS server OTHERHOST was not started. Thus, AS1 could not get the answer to the query. Recovery: Start the child DNS server, OTHERHOST. DNS Server Tips, Tools, and Problem Determination 209 Probable Cause 2: Once the child DNS server OTHERHOST was started, the same nslookup query posed to AS1 resulted in the same error message. The QTOBDNS job log on OTHERHOST was reviewed, and it showed that OTHERHOST was active and ready for queries. However, OTHERHOST, in this case, had more than one LAN adapter; thus, it had more than one IP address that the DNS server was listening on. AS1's DNS server configuration indicated that OTHERHOST's IP address was 10.1.1.2. When CFGTCP option 1 (check the TCP/IP interfaces) was issued on OTHERHOST, it showed that the 10.1.1.2 IP interface was Inactive. Closer scrutiny of OTHERHOST's QTOBDNS job log resulted in the discovery of error message: DNS00E9, Could not assign address to socket. Placing the cursor on this error message and pressing F1 for help shows the message detail, which specifies the address that is inactive: 10.1.1.2. Recovery: Start the 10.1.1.2 IP interface on the OTHERHOST child name server. Then stop and start the DNS server. Problem Symptom 4: The DNS server starts but the QTOBDNS job log contains the error message DNS00E9. See Figure 192. Figure 192.
QTOBDNS Job Log: Could Not Assign Address to Socket Error Probable Cause 1: The following error message appears: Could not assign address to socket This indicates that upon startup of the DNS server, the server discovered an IP interface that was inactive. This may or may not be a problem. Sometimes IP interfaces are configured and left inactive deliberately and, in that case, the error message could be normal. Recovery: Place the cursor on the error message and press F1 for Help. This shows the message details that indicate which IP interface was inactive. If you need the DNS server to respond to queries on this IP interface, the interface needs to be to started with the following command: CFGTCP Use option 1, position the cursor on the Inactive interface, and use option 9 to start the interface. Use the F5 key to refresh the display and, if necessary, F11 to display the status of the interface. If the interface continues to be inactive, check the status of the line with the following command: WRKCFGSTS *LIN >> CALL PGM(QDNS/QTOBDNS) PARM('-p' '53' '-d' '0' '-b' '/QIBM/UserData/OS400/ DNS/BOOT') DNS server starting. Could not assign address to socket. 210 AS/400 TCP/IP DNS and DHCP Support If the status of the line is FAILED, Varied Off, RCYPND, or RCYCNL, attempt to vary off the line and vary it back on. Once the line goes active (or vary on pending), go back to CFGTCP, option 1, and attempt to start the interface associated with that line description again. If you cannot get the line description to go to ACTIVE or VARY ON PND, you may have a hardware problem. Check for associated error messages in the AS/400 history log with the following command: DSPLOG Press F4 to prompt and enter the time range that you attempted to vary on the line. Problem Symptom 5: The DNS server starts but the QTOBDNS job log contains the error message: DNS000F: Host mycompany.com can only have CNAME data. Also, a secondary name server fails to zone transfer the primary domain file mycompany.com. The QTOBXFER job log on the secondary name server contains error message: DNS006B SOA zone information type, class, or time to live value not valid. The QTOBDNS job log on secondary name server contains error message: DNS00C6 Secondary DNS server could not load data for zone mycompany.com. Probable Cause 1: The key to this problem is the error message in the QTOBDNS job log on the primary name server. An alias of name of mycompany.com is being used somewhere in the mycompany.com.DB file on the left hand side of a resource record, which is not a CNAME resource record. The mycompany.com.DB file is located in the /QIBM/UserData/OS400/DNS directory. A review of the mycompany.com.DB file confirms this: mycompany.com is used on the left hand side of the NS record. Of course it is because mycompany.com is the name of the domain itself -- the NS record must be listed with mycompany.com to the left. The problem is not with the NS record. The problem is with the CNAME record. A domain name cannot be an alias name for a host. This CNAME record must be deleted and the server needs to be then updated (that is, configuration refreshed). Problem Symptom 6: The secondary name server’s QTOBDNS job log contains some messages that are confusing: If you start an interface after the DNS server is started, it will take some time before the DNS server answers queries on the newly started interface. If you want the name server to answer queries on this interface immediately, run the Update Server function. 
Tip DNS Server Tips, Tools, and Problem Determination 211 secondary zone mycompany.com (serial number 887939256) loaded successfully. But then a few messages later in the same job log, the following messages are logged: Ready to answer queries. Secondary DNS server could not load data for zone mycompany.com. Also, the primary server QTOBDNS job log contains the message: primary zone mycompany.com (serial number 888511902) loaded successfully. The preceding messages seem to indicate a secondary name server starting successfully from its backup files and failing when it tries to contact the primary master file to check serial numbers. Note that the two serial numbers of the two messages are not the same. The secondary name server’s job was to stay in synch with the primary name server and it does that by checking the serial number of the domain. Thus, why is the secondary name server running with a serial number for mycompany.com of 887939256, yet the primary name server is running with a serial number of 888511902? Probable Cause 1: The messages in the secondary name server indicate that there was a problem. The secondary name server could not complete a zone transfer from the primary name server. The cause was the problem explained in the probable cause to problem symptom 5. However, let’s discuss the messages further. When the secondary domain of mycompany.com was configured on the secondary name server, the box labeled Save Copies of Master Server Data was checked. This means that when the secondary name server completes its first zone transfer of the domain file of mycompany.com, it backs up this file on the AS/400 system that it is running on. The next time the secondary name server is started, it is booted with the backup copy of mycompany.com (in this case, serial number 887939256). Then the secondary name server checks the serial number of the primary domain file on the primary name server. If the two serial numbers are different (as in this case), the secondary name server attempts a zone transfer. In this case, the zone transfer failed; thus, the secondary name server is up and running and is capable of answering queries for domain mycompany.com but it is running on a downlevel version of the domain file of mycompany.com. Probable Cause 2: The zone transfer failed because the primary name server is not started. Probable Cause 3: The zone transfer failed because the primary name server is started but the IP interface on the primary name server is inactive. Recovery: check the configuration on the secondary name server to determine which primary name server IP address it is attempting to zone transfer from. On the primary name server, verify that this IP interface is active with the CFGTCP command followed by option 1. 212 AS/400 TCP/IP DNS and DHCP Support Probable Cause 4: The zone transfer failed because a security configuration on the primary name server is preventing the zone transfer from this particular secondary name server. See Chapter 3.2.7, “Primary Name Server Security Considerations” on page 63, for more information on how security is configured on the primary name server. Probable Cause 5: There is a typing mistake in the name of the secondary name server’s domain name that the zone transfer is trying to take place against. See Problem Symptom 1. Problem Symptom 7: When reviewing the mycompany.com. 
database primary forward mapping file using Operations Navigator File Systems and Netscape, one of the resource records (for example, A record) has a host name listed of: host1.mycompany.com.mycompany.com. The mycompany.com domain listed twice is incorrect. And when using Operations Navigator’s DNS configuration to display the mycompany.com primary domain file, the same problem is shown again. The host host1 is listed as: Figure 193. Misconfigured Host in mycompany.com Primary Domain File The mycompany.com domain is listed twice, which is incorrect. Where did the second mycompany.com come from? Probable Cause 1: When using Operations Navigator DNS configuration to add a new host to the primary forward mapping file, the domain name was typed in and the trailing period after the com was inadvertently left off. When the DNS configuration sees a domain without a period at the end, it "tries to help" by adding the domain to end of what was typed. Recovery: Use the Operations Navigator DNS configuration to delete the host host1 and then add a new host: host1.mycompany.com. This time, make sure the trailing period is typed in after com. Problem Symptom 8: The Operations Navigator DNS configuration displays were used to create new primary domains and the GUI displays them correctly. However, after starting the DNS server, the QTOBDNS job log does not contain any messages that confirm that these primary domain files were successfully loaded. But the job log does not contain any error messages related to loading these files either. DNS Server Tips, Tools, and Problem Determination 213 Probable Cause 1: If primary domain files are not being loaded by the DNS server, there is either a problem with them that should cause an error message to be posted in the QTOBDNS job log or these primary domain files are disabled. When creating primary domains with Operations Navigator DNS configuration, they are initially disabled by default. The DNS administrator must enable each primary domain when the configuration is finished. To enable a primary domain, right-click on the primary domain and click on enable. Then click on the smart icon "update server" to refresh the DNS server’s configuration. 8.3 For Additional Help With Problems AS/400 Specific DNS Problems or Questions AS/400 Software Service and Support in your respective country can help you with questions or problems with the AS/400 DNS server provided you have a Support Line contract and your question is not one of a consulting nature. DNS Questions Not Specific to the AS/400 Implementation There is a DNS news group which is accessible over the Internet that can provide answers to DNS questions not specific to the AS/400’s implementation of the DNS server. The newsgroup is: comp.protocols.dns.bind It can be located from the URL www.dns.net/dnsrd. (from here, click on newsgroups). Or from the URL www.dejanews.com, use the Find option to find comp.protocols.dns.bind. 214 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 215 Part 2. AS/400 DHCP Server Support The Dynamic Host Configuration Protocol (DHCP) provides configuration parameters to TCP/IP hosts. It is a client/server protocol that centrally controls and delivers configuration parameters to dynamically configured clients. Part 2 of this book provides an overview of DHCP concepts and describes how DHCP is implemented in the AS/400 system through case studies. 216 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 217 Chapter 9. 
DHCP Concepts and Overview Dynamic Host Configuration Protocol, or DHCP, is a client/server protocol that enables you to centrally locate and dynamically distribute configuration information, including IP addresses. This chapter provides an overview of DHCP concepts and components. The intention is to summarize concepts that you need to implement DHCP on the AS/400 system. For more information on this subject, refer to TCP/IP Configuration and Reference, SC41-5420-01. 9.1 BOOTP, the Predecessor of DHCP DHCP and its predecessor, Bootstrap Protocol (BOOTP), came about to fulfill the need of diskless workstations to acquire IP addresses and bootstrap information from a server in the network. BOOTP is an example of how you use the client/server paradigm to bootstrap a diskless workstation and to provide it with IP address configuration. The BOOTP server listens on well-known port 67, and diskless computers usually contain a start-up program in non-volatile storage, or ROM. Because all the workstations start from the same program, it is impossible to store IP addresses in that code. A diskless machine needs to know its IP address to participate in a TCP/IP network. It also needs to know the address of the file server machine where the bootstrap image is stored. The BOOTP client uses the special broadcast IP address of all ones (255.255.255.255) to obtain its IP address. It is responsible for retransmitting requests if the server does not respond. The mapping between the client hardware address and the IP address is kept in the BOOTP table, which is manually maintained by the administrator. BOOTP is the first step of a two-step bootstrap procedure. It does not provide the clients with a memory image. Instead, it provides the client only with the information that it needs to obtain an image. The client obtains this memory image after initiating a Trivial FTP (TFTP) request to the server, whose IP address it received from the BOOTP server. Up to V4R1, BOOTP and TFTP were the two protocols that supported the IBM Network Station on the AS/400 system. Figure 194 on page 218 shows the BOOTP flow between a client and a server. When the server receives a BOOTP request from a client, the server looks up the defined IP address based upon the client’s MAC address. It then replies with the client IP address and the name of the load file. The client initiates a TFTP request to the server for the load file. 218 AS/400 TCP/IP DNS and DHCP Support Figure 194. BOOTP Flow between Client and Server BOOTP uses a limited broadcast address for the BOOTP request. It requires the server in the same subnet as the client that requests configuration information. BOOTP forwarding is a mechanism for routers to forward a BOOTP request between subnets. The agents that forward the BOOTP packets between clients and servers on different subnets are called Relay Agents. DHCP adds the capability of automatically allocating reusable network addresses and distributing additional host configuration options. DHCP clients and servers use existing BOOTP Relay Agents. BOOTP clients can interact with DHCP servers and DHCP servers and BOOTP servers can coexist if configured properly. DHCP clients cannot interoperate with BOOTP servers. The AS/400 DHCP server support in V4R2 accommodates the already existing BOOTP server that was available in earlier releases of OS/400. This AS/400 DHCP server support also accommodates BOOTP clients. 
Additionally, it performs all of the functions specific to BOOTP as well as all of the added functionality that a DHCP server is assumed to carry. BOOTP and DHCP servers cannot run at the same time on the same system because both use the well-known ports 67 and 68. IETF RFCs 2131 and 2132 describe the DHCP protocols. 9.2 DHCP Overview DHCP provides a framework for passing configuration parameters to hosts on a TCP/IP network. The following three types of network components make up a DHCP network. • DHCP host clients. These hosts run the DHCP client programs. The DHCP clients work together with their server counterparts to obtain and implement configuration information to automatically access IP networks. Examples of DHCP clients are the IBM Network Station and the DHCP client support that is included in TCP/IP for Windows 95. The AS/400 system cannot be a DHCP client. • DHCP Servers. DHCP servers provide the addresses and configuration information to DHCP and BOOTP clients on the network. DHCP servers contain information about the network configuration and host operational parameters, as specified by the network administrator. The AS/400 system in V4R2 can be a DHCP server. • BOOTP/DHCP Relay Agent. Relay Agents (also called BOOTP helpers) are used in IP router products to forward information between DHCP clients and servers on different subnets. BOOTP/DHCP Relay Agents eliminate the need for a DHCP server on each subnet to service the broadcast requests from DHCP clients. The AS/400 system can be a BOOTP/DHCP Relay Agent. Figure 195 shows the different components in a DHCP network. Figure 195. Components in a DHCP Network (The figure shows three subnets with DHCP clients C1, C2, and C3, AS/400 DHCP servers, a router acting as a BOOTP/DHCP Relay Agent that always relays to both DHCP servers, and a BOOTP/DHCP relay agent R2 that always relays to AS5.) 9.3 How does DHCP Work? DHCP allows clients to obtain IP network configuration, including an IP address, from a central DHCP server. DHCP servers control whether the addresses they provide to clients are allocated permanently or leased for a specific period of time. When the server allocates a leased address, the client must periodically check with the server to re-validate the address and renew the lease. The DHCP client and server programs handle address allocation, leasing, and lease renewal. All of these processes are transparent to end users. To further explain how DHCP works, this section answers the following questions: • How is configuration information acquired? • How are leases renewed? • What happens when a client moves out of the network? • How are changes implemented in the network? • What are BOOTP/DHCP Relay Agents? 9.3.1 How is Configuration Information Acquired? DHCP allows DHCP clients to obtain an IP address and other configuration information through a request process to a DHCP server. DHCP clients use RFC-architected messages to accept and use the options served to them by the DHCP server. Figure 196 shows a high-level overview of the DHCP protocol cycle. Figure 196. DHCP Cycle Overview (The cycle shown is: 1. the client sends a DHCPDISCOVER, 2. a server makes a DHCPOFFER, 3. the client sends a DHCPREQUEST, and the server sends a DHCPACK to end the cycle.) For example: 1. The client broadcasts a message that contains its client ID and announces its presence.
The message also requests an IP address (DHCPDISCOVER message) and desired options, such as subnet mask, domain name server, domain name, and static route. See Figure 197.

Figure 197. 1 - DHCP Client Broadcasts DHCPDISCOVER on its Subnet

Note: If you configure routers on the network to forward DHCP and BOOTP messages (using BOOTP/DHCP Relay Agent capabilities), the broadcast message is forwarded to DHCP servers on the attached networks.

2. Each DHCP server that receives the client's DHCPDISCOVER message can send a DHCPOFFER message to the client offering an IP address. If the address has not been previously assigned, the DHCP server checks that the address is not already in use on the network before issuing an offer. The server checks the configuration file to see if it needs to assign a static or dynamic address to this client. In the case of a dynamic address, the server selects an address from the address pool, choosing the least recently used address. An address pool is a range of IP addresses that are leased to clients. In the case of a static address, the server uses a client statement from the DHCP server configuration file to assign a static address to the client. Upon making the offer, the AS/400 DHCP server reserves the offered address. See Figure 198.

Figure 198. All DHCP Servers in the Subnet Send DHCPOFFER

3. The client receives the offer messages and selects the server it wants to use. Upon receiving an offer, some DHCP clients have the capability to make note of how many requested options are included in the offer. Such a client continues to receive offers from DHCP servers for a period of time after the first offer is received and takes note of how many requested options are included in each offer. At the end of that time, the DHCP client compares all offers and selects the one that meets its criteria.

Note: Not all DHCP clients have the capability to wait and evaluate the offers that they receive. Many DHCP clients on the market today accept the first offer that arrives.

4. The client broadcasts a message indicating which server it selected and requesting the use of the IP address that server offers (DHCPREQUEST message). See Figure 199.

Figure 199. DHCP Client Accepts DHCPOFFER from Server 1

5. If a server receives a DHCPREQUEST message indicating that the client has accepted the server's offer, the server marks the address as leased. If the server receives a DHCPREQUEST message indicating that the client has accepted an offer from a different server, the server returns the address to the available pool. If no message is received within a specified time, the server also returns the address to the available pool. The selected server sends an acknowledgment that contains additional configuration information to the client (DHCPACK message). See Figure 200.

Figure 200. Selected Server Sends Acknowledgment with Additional Configuration to Client

6. The client determines whether the configuration information is valid. Upon accepting a valid lease, the client enters a BINDING state with the DHCP server and proceeds to use the IP address and options.
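At the wire level, each of these messages is a BOOTP-format packet with DHCP options appended. The following minimal Python sketch is provided only as an illustration of the message layout defined by RFC 2131 and RFC 2132; it is not part of the AS/400 DHCP server or of any IBM client code, and the MAC address, transaction ID, and function name are made-up example values. The option 55 list requests the default client-requested options mentioned in this chapter (subnet mask, domain name server, domain name, and static route).

# Minimal sketch of a DHCPDISCOVER message (RFC 2131/2132). Illustrative only:
# not AS/400 code; the MAC address and transaction ID are made-up example values.
# Binding port 68 usually requires administrator authority; on a real client this
# exchange is handled entirely by the DHCP client code.
import socket
import struct

def build_discover(mac, xid):
    msg = struct.pack("!BBBB", 1, 1, 6, 0)       # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
    msg += struct.pack("!I", xid)                # transaction ID chosen by the client
    msg += struct.pack("!HH", 0, 0x8000)         # secs=0, flags with the broadcast bit set
    msg += b"\x00" * 16                          # ciaddr, yiaddr, siaddr, giaddr (the RELAY AGENT field)
    msg += mac.ljust(16, b"\x00")                # chaddr, padded to 16 bytes
    msg += b"\x00" * 192                         # sname (64) and file (128) fields, unused here
    msg += b"\x63\x82\x53\x63"                   # DHCP magic cookie
    msg += bytes([53, 1, 1])                     # option 53: DHCP message type = DHCPDISCOVER
    msg += bytes([55, 4, 1, 6, 15, 33])          # option 55: request subnet mask, DNS, domain name, static route
    msg += bytes([255])                          # end option
    return msg

if __name__ == "__main__":
    packet = build_discover(bytes.fromhex("02004c4f4f50"), xid=0x2A2A2A2A)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(("", 68))                          # DHCP client port
    sock.sendto(packet, ("255.255.255.255", 67)) # DHCP servers listen on port 67

A server that answers replies with a DHCPOFFER, which uses the same layout with an option 53 value of 2 and the offered address in the yiaddr field; the DHCPREQUEST and DHCPACK messages that complete the cycle again reuse the layout with different option 53 values.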
To DHCP clients that request options, the DHCP server typically provides options that include subnet mask, domain name server, domain name, static route, class-identifier (which indicates a particular vendor), and user class. A DHCP client can request its own, unique set of options. For example, Windows NT 3.51 DHCP clients are required to request options. The default set of client-requested DHCP options that IBM provides includes subnet mask, domain name server, domain name, and static route.

9.3.2 How are Leases Renewed?

The DHCP client keeps track of how much time is remaining on the lease. At a specified time prior to the expiration of the lease (usually when half of the lease time has passed), the client sends a renewal request to the leasing server. This request contains its current address and configuration information. If the server responds with a DHCPACK, the DHCP client's lease is renewed. If the DHCP server explicitly refuses the request, the DHCP client continues to use the IP address until the lease time expires. At that time, the client initiates the address request process, including broadcasting the address request. If the server is unreachable, the client continues to use the assigned address until the lease expires (see Figure 201).

Figure 201. How are Leases Renewed?

9.3.3 What Happens when a Client Moves out of its Subnet?

DHCP provides a client host with the freedom to move from one subnet to another without having to know what IP configuration information it needs on the new subnet. As long as the subnets to which a host relocates have access to a DHCP server, a DHCP client automatically configures itself to access those subnets correctly. For DHCP clients to reconfigure and access a new subnet, the client host must be re-booted. When a host restarts on a new subnet, the DHCP client tries to renew its old lease with the DHCP server that originally allocated the address. The server refuses to renew the request because the address is not valid on the new subnet. The client then initiates the IP address request process to obtain a new IP address and access the network.

9.3.4 How are Changes Implemented in the Network?

With DHCP, you make changes at the server, re-initialize the server, and distribute the changes to all the appropriate clients. A DHCP client retains the DHCP option values that are assigned by the DHCP server for the duration of the lease. If you implement configuration changes at the server while a client is already up and running, the DHCP client does not process those changes until it either attempts to renew its lease or is restarted.

9.3.5 What are BOOTP/DHCP Relay Agents?

The function of a Relay Agent is to forward any BOOTP/DHCP requests that it receives on its subnet or from other subnets in the direction of the DHCP server. The mechanism of operation of a Relay Agent is as follows:

1. The Relay Agent knows the address of the DHCP server beforehand, and it knows where to forward the requests for that server. The Relay Agent can, therefore, be a router that receives and forwards requests.

2. The DHCP client creates a packet with a special field called RELAY AGENT. Initially, the client places all zeros in it. The Relay Agent recognizes that the
RELAY AGENT field is all zeros and puts its own IP address in this field. It then pushes the packet into the next subnet and increments the hop count.

3. The next Relay Agent, if any, sees that the RELAY AGENT field in the packet is not all zeros, forwards the packet to the next server, and increments the hop count by one. This process is repeated until the packet reaches the DHCP server.

4. The DHCP server sends the DHCPOFFER back to the first Relay Agent, and the Relay Agent forwards it to the originating client that broadcast the DHCPDISCOVER.

Once the client receives an IP address, the communication is direct between server and client. The AS/400 system on V4R2 can function as either a DHCP server or as a BOOTP/DHCP Relay Agent.

Chapter 10. AS/400 DHCP Server Implementation

This chapter describes the implementation of the AS/400 DHCP server.

10.1 DHCP Software Prerequisites

Native DHCP support on the AS/400 in V4R2 requires the following products:
• 5769-SS1 OS/400 V4R2 option 3
• 5763-XD1 V3R1M3 -- Client Access for Windows 95/NT

10.2 DHCP Installation

Installing DHCP support on your AS/400 V4R2 system involves installing 5769-SS1 OS/400 V4R2 option 3 on the AS/400 system and Client Access for Windows 95/NT (5763-XD1 V3R1M3) on your administrator's workstation. The installation program performs the following tasks:
• Creates the IFS subdirectory /QIBM/UserData/OS400/DHCP.
• Sets up the IFS files required for DHCP in the preceding directory. If any file already exists, it remains "as is".

After the installation, proceed with the DHCP server configuration using Operations Navigator. Figure 202 provides an overview of AS/400 DHCP server installation and configuration.

Tip: To reset an existing configuration and start over, perform the following steps:
1. Delete the IFS file dhcpsd.cfg in /QIBM/UserData/OS400/DHCP.
2. CALL QSYSDIR/QTODDINS from an AS/400 command entry display. This program creates a blank configuration file that the Operations Navigator GUI can edit.
You can also perform these steps if you suspect some DHCP files are corrupted. Reinstalling 5769-SS1 option 3 does not replace existing files.

Figure 202. AS/400 DHCP Server Support Installation and Configuration Overview

10.3 DHCP Server Jobs

The DHCP server jobs run in the QSYSWRK subsystem. They are as follows:

• QTODDHCPS. This is the DHCP server program that runs when the DHCP Mode attribute is *SERVER. The AS/400 system that runs this program functions as a regular, DHCP transaction-processing server. The DHCP server uses well-known ports 67 and 68. DHCP server messages are directed to the job log. Use the Work with Spooled Files (WRKSPLF) command for user QTCP to browse the DHCP server job log. This job starts with job description QTODDJDS.

• QTODDHCPR. This is the DHCP server job that runs when the DHCP Mode attribute is *RELAY. The AS/400 system running this job runs as a BOOTP/DHCP Relay Agent. The BOOTP/DHCP Relay Agent runs on well-known port 67. This job starts with the job description QTODDJDR.

10.4 DHCP Configuration Files

The files that DHCP requires are in the IFS directory /QIBM/UserData/OS400/DHCP. These files are as follows:

• dhcpsd.cfg. This is the configuration file that DHCP reads when it runs as a regular DHCP server (transaction-processing server).
• dhcprd.cfg. This is the configuration file that DHCP reads when it runs as a BOOTP/DHCP Relay Agent server.
• dhcps.ar.
DHCP server non-volatile address records: This file contains up-to-the-minute, actual address allocation from the address pools that the DHCP server administers when running in regular DHCP server mode.

• dhcps.cr. DHCP server non-volatile client records: This file contains up-to-the-minute data on the actual clients that this DHCP server is servicing when running in regular DHCP server mode.

• dhcps.ar1. DHCP server backup of non-volatile address records: The DHCP server takes an hourly backup of dhcps.ar, the non-volatile address record file.

• dhcps.cr1. DHCP server backup of non-volatile client records: The DHCP server takes an hourly backup of dhcps.cr, the non-volatile client records file.

• dhcp.attrib. DHCP attributes file: Stores the current value of the CHGDHCPA command parameters, with the exception of the AUTOSTART parameter.

10.4.1 Log Files

The following files in the IFS directory /QIBM/UserData/OS400/DHCP are used to log DHCP server activity. They are also used for problem determination:

• dhcpsd.log. DHCP uses this file as the default logging/tracing file when it runs as a regular DHCP server. You can enable logging through a configuration option in Operations Navigator, and you can configure this file to roll into multiple files based on the maximum size. To enable DHCP logging, select the Logging tab in the DHCP Server Properties. Specify the type of logging that you want to perform, depending on the types of things that you want to log. Typically, you perform either minimal logging or no logging at all. Figure 203 shows how to enable logging on the AS/400 DHCP server.

Figure 203. Configuring DHCP Server Logging -- DHCPSD.LOG

• dhcprd.log. DHCP uses this file as the default logging/tracing file when it runs as a BOOTP/DHCP Relay Agent. You can enable logging through a configuration option in Operations Navigator, and you can configure this file to roll into multiple files based on the maximum size.

Figure 204 provides an overview of the DHCP server jobs, files, and logs.

Figure 204. DHCP Server Jobs, Files, and Logs

Figure 205 provides an overview of the BOOTP/DHCP Relay Agent jobs, files, and logs.
Figure 205. BOOTP/DHCP Relay Agent Jobs, Files, and Logs

10.5 DHCP Server User Interface

This section describes the user interface that is available for the AS/400 DHCP server.

10.5.1 DHCP Server Configuration through Operations Navigator

You install and configure the AS/400 DHCP server through Operations Navigator, which provides the one and only configuration interface for the DHCP server. The Operations Navigator DHCP Configuration Wizard provides a simple process for quickly configuring and starting an initial DHCP server. To start the DHCP server configuration from Operations Navigator, select AS/400 system name->Network->Server->OS400. The window shown in Figure 206 is displayed.

Figure 206. DHCP Configuration Using Operations Navigator

To use Operations Navigator, you need to install Client Access/400 for Windows 95/NT V3R1M3 on your administrator's PC. Host servers must be started on your AS/400 system. Use the Start Host Server (STRHOSTSVR) command to start them.

10.5.2 Change DHCP Attributes Command (CHGDHCPA)

Use the Change DHCP Attributes (CHGDHCPA) command to set the AUTOSTART attribute, which determines whether or not the DHCP server starts automatically when TCP/IP is started using the STRTCP command. This attribute is ignored by the STRTCPSVR command. STRTCPSVR *DHCP starts the DHCP server regardless of the value of the AUTOSTART attribute. You can set this attribute from the Operations Navigator interface as well.

Use the CHGDHCPA command to set the MODE attribute that determines the DHCP server behavior. Set the MODE attribute to *SERVER if you want the DHCP server to automatically assign reusable IP addresses to DHCP clients in response to DHCP requests. Set the MODE attribute to *RELAY if you want the DHCP server to function only as a BOOTP/DHCP Relay Agent. A BOOTP/DHCP Relay Agent forwards BOOTP or DHCP packets from hosts to active BOOTP or DHCP servers and from the servers back to the hosts. It performs no BOOTP or DHCP server functions. The attributes file /QIBM/UserData/OS400/DHCP/dhcp.attrib is updated with the values that you specify in the CHGDHCPA command.

10.5.3 Start TCP Server *DHCP

Use the STRTCPSVR SERVER(*DHCP) command to start the DHCP server and the ENDTCPSVR SERVER(*DHCP) command to stop it. You can perform this function through Operations Navigator as well.

10.6 BOOTP-to-DHCP Migration Program

This migration program, QSYS/QTODDB2D, is called by Operations Navigator. If the program detects that there is a BOOTP table in the system, it gives you options through the Operations Navigator to migrate the BOOTP table to a DHCP server configuration. Figure 207 shows the BOOTP migration window that the DHCP configuration wizard presents.

Figure 207. Migrate BOOTP

10.7 DHCP Server Exit Programs

The AS/400 DHCP server assigns and releases TCP/IP addresses for client hosts in a network.
Exit points have been provided so that user-written programs are called from the running DHCP server. They allow for customer-supplied security validation of incoming client requests as well as for notification when an IP address is assigned or released. The exit programs and their functions are as follows:

• DHCP Address Binding Notification exit program. This program allows for notification each time the DHCP server assigns an IP address to a specific host.
• DHCP Address Release Notification exit program. This program allows for notification each time the DHCP server releases an IP address from its specific client host assignment binding.
• DHCP Request Packet Validation exit program. This program provides additional control for restricting which incoming DHCP and BOOTP message request packets from client hosts are processed and which are rejected by the DHCP server.

Figure 208. DHCP Server Exit Programs

Refer to System API Programming, SC41-5800, for information on how to use the DHCP exit programs.

10.8 DHCP Server Backup and Recovery Considerations

You need to back up the following files on a regular basis and as part of your normal backup procedures:

• Back up the following files if you are running a DHCP server:
  • /QIBM/UserData/OS400/DHCP/dhcpsd.cfg
  • /QIBM/UserData/OS400/DHCP/dhcps.ar
  • /QIBM/UserData/OS400/DHCP/dhcps.cr
  • /QIBM/UserData/OS400/DHCP/dhcps.ar1
  • /QIBM/UserData/OS400/DHCP/dhcps.cr1
• Back up the following file if you are running a BOOTP/DHCP Relay Agent:
  • /QIBM/UserData/OS400/DHCP/dhcprd.cfg
• Back up /QIBM/UserData/OS400/DHCP/dhcp.attrib to back up the general DHCP attributes.

Note: Shut down the servers before you take these backups. This avoids taking the backup while one or more files are in the middle of an update.

Optionally, you can save everything in the IFS directory /QIBM/UserData/OS400/DHCP/*.*. In this case, your backup includes other files that exist in this directory, such as log files. The other files are not required for recovery, but this might be an easier approach to avoid remembering to back up individual files.

Perform the previous backups using the SAV command. When the files are restored, they automatically retain the ownership, CCSID, and authorizations that are required. If you use the CPY command, the resulting copies might end up with ownership and authorizations based upon the user IDs that issue the copy command, as opposed to the original ones.

To recover DHCP files saved using the SAV command, use the following guidelines:

• Use the RST command to restore.
• If you restore all of the previous files in the three categories (server, relay, and attributes) and the problem requiring the restore did not actually affect all three, you can wipe out changes made to the others since the backup was taken. Carefully consider what it is that you truly need to restore.
• If you want to restore only the DHCP server, we recommend that you restore all of the files that are listed in that group.
There might be instances where you want to restore only the DHCP server configuration file but not the non-volatile state files, or vice versa. If you are restoring the non-volatile state files, you must restore them as a synchronous group, such as (/QIBM/UserData/OS400/DHCP/dhcps.ar and /QIBM/UserData/OS400/DHCP/dhcps.cr) or (/QIBM/UserData/OS400/DHCP/dhcps.ar1 and /QIBM/UserData/OS400/DHCP/dhcps.cr1).

Note: You must shut down the servers prior to restoring any file.

The following backups take place automatically during the normal operation of the DHCP server:

• After every transaction processed, the server stores its current state in the following non-volatile files: /QIBM/UserData/OS400/DHCP/dhcps.ar and /QIBM/UserData/OS400/DHCP/dhcps.cr. Hourly backups of these non-volatile state files are taken in /QIBM/UserData/OS400/DHCP/dhcps.ar1 and /QIBM/UserData/OS400/DHCP/dhcps.cr1.

The following run-time recoveries take place automatically:

• If the DHCP server is shut down intentionally or terminates abnormally, you need to start the server again. When it starts, it re-initializes itself to the state it was in just after it processed its last successful transaction by reading the /QIBM/UserData/OS400/DHCP/dhcps.ar and /QIBM/UserData/OS400/DHCP/dhcps.cr files.
• If the previous re-initialization fails due to the corruption of one or both of the primary non-volatile files, the DHCP server automatically deletes them. It then renames the hourly backup versions to the primary version file names and tries again. It sends messages to the log to signal this event.
• If both re-initialization attempts fail, you need to recover using your own backups.

Chapter 11. Start Here: Implementing DHCP in a Simple Network

This chapter shows how to implement a DHCP server on your AS/400 system. It takes you through the detailed steps of setting up your AS/400 system so that you can connect to your LAN and configure a DHCP server. It also describes how to configure both a Windows 95 client and the IBM Network Station as DHCP clients.

11.1 Scenario Overview

This scenario sets up the AS/400 system to act as a DHCP server in a simple TCP/IP network. It also installs two different DHCP clients that request TCP/IP addresses from the AS/400 DHCP server. Further, it demonstrates the DHCP protocol flow between the server and the client. This scenario assumes that there is no existing logical network. Therefore, it uses a simple TCP/IP addressing scheme. It also assumes that the local area network is physically complete (all systems and clients are cabled to the network and can attach).

11.1.1 Scenario Objectives

This scenario has the following three objectives:

1. To demonstrate the ease with which you can configure a simple TCP/IP network for DHCP using OS/400 support.
2. To demonstrate how to set up a Windows 95 client and an IBM Network Station to act as DHCP clients and have their TCP/IP addresses served to them from the AS/400 DHCP server.
3. To show the protocol flow between the server and the client. This flow is helpful for understanding how DHCP works and can be useful in problem determination.

Figure 209. DHCP Client and DHCP Server Protocol Flow

11.1.2 Scenario Advantages

This scenario has the advantage of being simple and showing the ease with which you can set up your AS/400 system to act as a DHCP server. The same simplicity also applies to the client setup.
11.1.3 Scenario Disadvantages

It is assumed in this scenario that this is a new network. Therefore, you are free to choose any possible TCP/IP addressing scheme. This scenario does not show the complexities that arise with an existing network and hardcoded TCP/IP addresses. It also does not discuss the possible migration from BOOTP or deal with complex subnetting issues.

You can consider the DHCP server in this example a single point of failure because it has no backup. Clients that have already queried the DHCP server for a network address remain connected if the DHCP server fails. New clients attempting to connect are unable to gain a TCP/IP address.

11.1.4 Scenario Network Configuration

The following figure depicts the logical topology for this scenario:

Figure 210. Simple Example Network

The following scenario characteristics influence the DHCP configuration:
• There is a single AS/400 DHCP server with a class A addressing scheme and a single subnet.
• The subnet mask allows the AS/400 DHCP server to service 253 clients. The AS/400 host address remains constant and is removed from the addressing pool.
• Routers and bridges do not exist within this network.

11.1.5 Network Addressing Scope Planning

The network 10.1.1.0 is used in this example. It is highly recommended to do a hierarchical partitioning of a network to ease administration. You can accomplish this as follows: 10.x.y.z, where x = site or region, y = department, z = hosts, and x + y = subnet.

The small example network (10.1.1.0 with a mask of 255.255.255.0) allows up to 254 hosts. If you need to expand the network to connect more hosts, change the mask to 255.255.254.0. This reduces the subnet addressing scope (x + y) by one bit, but it generates one bit more for the host addressing scope (z). This allows up to 510 hosts. This technique shows an easy way to increase the number of host addresses that are available to either the subnet or the network. It also lets the network grow without major changes.

11.2 Task Summary

To configure the DHCP server and clients in this scenario, perform the following steps:

1. Verify hardware, software, and configuration prerequisites.
2. Configure the AS/400 network interface.
3. Configure and start a TCP/IP interface.
4. Gather information to configure the DHCP server.
5. Configure the DHCP server.
6. Start the DHCP server.
7. Configure the Windows 95 DHCP client.
8. Configure the IBM Network Station client.

11.3 Verify Hardware, Software, and Configuration Prerequisites

Before you configure your AS/400 system to act as a DHCP server, you must ensure the following:

1. Hardware prerequisites:
   1. Ensure your AS/400 system has a LAN adapter installed and cabled to the network.
   2. Ensure that all the clients in your network have the correct network interface card. Make certain that you have installed all the drivers you need.
2. Software prerequisites:
   1. The DHCP support is part of 5769-SS1, base option 3, OS/400 -- Extended Base Directory Support.
   2. Ensure licensed program product 5769-SS1, option 12 (OS/400 -- Host Servers), is installed.
   3. For the administrator to configure DHCP on the AS/400 system, ensure that the AS/400 Operations Navigator is installed and configured on the administrator's PC.
   4. For PC clients that connect to the network, DHCP support is included in Windows 95.
   5. To use and connect an IBM Network Station, ensure that the IBM Network Station Manager for AS/400 code is installed. The licensed program number is 5648-B07. However, all references to this product on the AS/400 system and in online information (including the product installation command) refer to the product as 5733-A07.

   Note: With V4R2, the Client Access code for the client requires no license. Effectively, the base Client Access code is free.

3. Configuration prerequisites: You must add a line description for your AS/400 LAN interface. Usually, a line description already exists. If so, you can skip this step.

11.4 Configuration Overview

1. Configure TCP/IP and add at least one IP interface.
2. Configure the DHCP server support through Operations Navigator.
3. Change some DHCP attributes.
4. Configure the clients to use DHCP.

11.4.1 Configure TCP/IP Interface on the AS/400 System

To configure the TCP/IP interface, perform the following steps:

1. On an AS/400 command line, type the command GO CFGTCP and press Enter to display the Configure TCP/IP (CFGTCP) menu.
2. Select option 1 (Work with TCP/IP interfaces) to display the Work with TCP/IP Interfaces display (see Figure 211).

Figure 211. Work with TCP/IP Interfaces Display

3. Select option 1 to add a TCP/IP interface and specify the TCP/IP address of the host. Press Enter to continue.
4. Add the line description name and the subnet mask for the interface:

                         Add TCP/IP Interface (ADDTCPIFC)
   Type choices, press Enter.
    Internet address . . . . . . . . > '10.1.1.2'
    Line description . . . . . . . .   TRNLINE1        Name, *LOOPBACK
    Subnet mask  . . . . . . . . . .   255.255.255.0
    Associated local interface . . .   *NONE
    Type of service  . . . . . . . .   *NORMAL         *MINDELAY, *MAXTHRPUT...
    Maximum transmission unit  . . .   *LIND           576-16388, *LIND
    Autostart  . . . . . . . . . . .   *YES            *YES, *NO
    PVC logical channel identifier                     001-FFF
                 + for more values
    X.25 idle circuit timeout  . . .   60              1-600
    X.25 maximum virtual circuits  .   64              0-64
    X.25 DDN interface . . . . . . .   *NO             *YES, *NO
    TRLAN bit sequencing . . . . . .   *MSB            *MSB, *LSB
                                                                        Bottom
   F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
   F13=How to use this display   F24=More keys

5. Press Enter to create the TCP/IP interface.
6. Press F11 to view the status of the interface and verify that the status is active.

Note: If the TCP/IP interface is inactive, you must start the interface by using option 9 on the Work with TCP/IP Interfaces display (see Figure 211 on page 240). You must then press F5 to refresh the display and verify that the interface has started.

11.4.2 Gather Information to Configure the DHCP Server

To use the Operations Navigator DHCP configuration effectively, you need to know how you want to set up and manage your networks and subnets with DHCP. You also need to know what address range or ranges you want to use for leasing. Further, you must decide which system is the DHCP server, which ones are the BOOTP/DHCP Relay Agents, and which one performs DHCP backup functions. You must also know the IP addresses that must be reserved for special hosts such as routers, DNS servers, and firewalls. It is useful to refer to a network diagram that shows the subnet masks and IP addresses for your networks, routers, and clients while you are configuring DHCP. The starting point of this scenario is the network diagram shown in Figure 210 on page 238. The information shown in the following tables is based on the network picture and other network data.
Table 10 shows general information about AS1 as a TCP/IP host. Table 11 provides more specific information about AS1 as a DHCP server.

Table 10. Planning the DHCP Server -- AS1 TCP/IP Information
  Host Name:        As1
  Description:      DHCP server
  Domain Name:      mycompany.com
  IP Address:       10.1.1.2
  Mask:             255.255.254.0
  Line Description: TRNLINE1

Note: The Configuration Reference, shown in parentheses in the following tables, points to the place in the Operations Navigator DHCP server configuration where you can configure the particular parameter. You can specify many of these configuration options through the DHCP configuration wizard the first time you configure DHCP.

Table 11. Planning the DHCP Server AS1 -- DHCP Server Overview (question, answer, and configuration reference)
1. Is the BOOTP server already configured on your system? No. (DHCP configuration wizard)
2. Do you want to migrate the BOOTP configuration to DHCP? N/A. (File --> Migrate BOOTP)
3. What is the default lease time for this server? 24 hours. (Global --> Properties --> Leases)
4. Start the DHCP server when TCP/IP starts? Yes. (Server Properties --> General)
5. List the DHCP server IP interfaces that will be serving DHCP clients: 10.1.1.2. (See network diagram.)
6. List the subnets that will be administered by this DHCP server: 10.1.1.0. (See subnet planning table.)
7. Do you want to add a new subnet to be administered by this server? Yes. (Global --> New Subnet - Basic; Global --> New Subnet - Advanced; see subnet planning table.)
8. Do you want to log DHCP server activity? Yes. (Server Properties --> Logging)
9. Do you want the DHCP server to support any client from any subnet? Yes. (Server Properties --> Client Support)
10. Do you want the DHCP server to support BOOTP clients? No. (Server Properties --> Client Support)
11. Do you want the DHCP server to reject requests from specific clients (for example, for security reasons)? No. (Global --> Properties --> Exclude Client)
12. Can your DHCP clients (other than IBM Network Stations) identify the class they belong to? No.
13. If the answer to 12 is Yes, do you want to add a new class to serve the DHCP clients that belong to that class? N/A. (Global --> New Class)
Table 12 provides information about subnet 10.1.1.0, which is administered by the DHCP server AS1. Notice that AS1 administers 50% of the available IP addresses while the rest is assigned to AS2, the backup DHCP server.

Table 12. Planning the Subnet 10.1.1.0 Administered by AS1 (item, value, and configuration reference)
1. Subnet name: 10.1.1.0. (Subnet Properties --> General)
2. Subnet description: Our_Company. (Subnet Properties --> General)
3. Subnet address: 10.1.1.0. (Subnet Properties --> Address Pool)
4. Subnet mask: 255.255.255.0. (Subnet Properties --> Address Pool)
5. Address range: 10.1.1.1 to 10.1.1.254. (Subnet Properties --> Address Pool)
6. Lease time: Inherit from server (12 hours). (Subnet Properties --> Leases)
7. Exclusions (exclude hosts that require a particular IP address and are manually configured): Router x, reserved for future router, 10.1.1.1; AS1, DNS/DHCP server, 10.1.1.2. (Subnet Properties --> Address Pool)
8. Domain name server IP address to deliver to clients in this subnet: 10.1.1.2. (Subnet Properties --> Options --> Option 6, Domain name server)
9. Gateway IP address to deliver to clients in this subnet: N/A. (Subnet Properties --> Options --> Option 3, Router)
10. Options to offer to clients in this subnet: 01 - Subnet mask, 255.255.254.0; 06 - Domain name server, 10.1.1.2. (Subnet Properties --> Options)

11.4.3 Configure DHCP Server through Operations Navigator

If you are configuring DHCP on a system that does not have an existing configuration, Operations Navigator automatically starts the DHCP configuration wizard. This wizard helps you create a basic DHCP server configuration.

Tip: To reset an existing configuration and start over, perform the following steps:
1. Delete the IFS file dhcpsd.cfg in /QIBM/UserData/OS400/DHCP.
2. CALL QSYSDIR/QTODDINS from an AS/400 command entry display. This program creates a blank configuration file that the Operations Navigator GUI can edit.

To start the DHCP configuration wizard, perform the following steps:

1. Start Operations Navigator.
2. Click as1.mycompany.com to select the system name.

Figure 212. AS/400 Operations Navigator -- Selecting the System to Configure the DHCP Server

3. Double-click Network.
4. Double-click Server.
5. Double-click OS/400.
6. Double-click DHCP. This starts the DHCP configuration wizard.

Note: If you are not presented with the DHCP configuration wizard, it is likely that a DHCP configuration already exists. To start the wizard and replace the existing configuration, select File > New Configuration.

Figure 213. The DHCP Configuration Wizard

7. Click Next.
8. Select Yes to add a new subnet to the DHCP server.
9. Leave the Twinax IP workstation controller address box blank and click Next.
10. Define the range of addresses to use within the subnet.

Figure 214. Subnet Configuration

11. Define a lease time for the client to keep the address served. Click Next to use the default lease time of one day.
12. Specify the IP addresses of the hosts to be excluded. The DHCP server does not deliver these addresses to clients (see Figure 215).

Figure 215. Excluded IP Addresses in the Subnet

13. Click Next to not deliver the IP address of a gateway to clients. There is only one subnet in this scenario.
14. Answer Yes to the question "Would you like the DHCP server to deliver domain name server address to clients in this subnet?" Specify the DNS IP address (see Figure 216). Click Next.

Figure 216. Configuring the DNS IP Address to Deliver to Clients in this Subnet

15. Answer No to the question "Would you like the DHCP server to deliver domain names to clients in this subnet?" Click Next.
16. Select Support any clients on this subnet. Click Next.
17. Select Yes to start the DHCP server when TCP/IP starts and select No to start the DHCP server now. Click Next.
18. The DHCP configuration summary window shows all the options that you have selected to this point. Click Finish.
Figure 217. The DHCP Configuration Summary

19. Now the DHCP server configuration is displayed.

Figure 218. DHCP Server Configuration

The configuration of a simple network to use DHCP is complete. You have created one subnet from a class A IP address using the mask 255.255.255.0. This allows up to 254 IP addresses within the subnet pool to be served to clients.

1. From the DHCP server configuration display shown in Figure 218 on page 247, right-click Subnet 10.1.1.0 to open a context menu and select Properties.
2. Click the Options tab to add a subnet mask that is served to the clients.
3. Highlight option 1, subnet mask, from the Available options window and then click Add.
4. At the bottom of the display, specify the appropriate subnet mask for the clients to use in the Subnet mask window.

Figure 219. DHCP Server Options

Notice in Figure 219 that the domain name server option (option 6) is already configured. You specified the IP address of the domain name server when prompted by the configuration wizard.

5. Click OK.

Now we are going to assign a longer lease time for any token-ring attached IBM Network Station. A longer lease time can be useful for clients that are not mobile. A longer lease reduces the number of lease renewals the client must request, which in turn reduces some of the network traffic. One way to specify a longer lease time for the token-ring IBM Network Stations is to specify a lease time for the class they request:

1. From the DHCP server configuration display shown in Figure 218 on page 247, right-click Class IBMNSM 1.0.0 to open a context menu. This class is for token-ring-attached IBM Network Stations. The two other available classes are Class IBMNSM 2.0.0 for Ethernet-attached IBM Network Stations and IBMNSM 3.4.1 for Twinax-attached IBM Network Stations.
2. Select Properties.
3. Click the Leases tab.
4. Click Duration. From the pull-down menu, choose weeks and specify 1 to set the lease duration to one week.
5. Click OK.

The DHCP server can service requests from BOOTP clients. However, this is not the default and must be enabled. To enable the DHCP server to service BOOTP requests, perform these steps:

1. On the DHCP server configuration display shown in Figure 218 on page 247, right-click DHCP Server -- As1.mycompany.com to open a context menu and select Properties.
2. Click the Client support tab and click both BOOTP clients and Unlisted clients.
3. Click OK.

11.5 Configuring DHCP Clients

To use the DHCP server, clients must support DHCP and be appropriately configured. There are many DHCP clients available on the market, but the tests performed for this book used only the IBM Network Station and Windows 95. This section describes how to set up the IBM Network Station and the Windows 95 DHCP clients. Refer to your DHCP client documentation for information about your client's DHCP support.

11.5.1 Configuring DHCP on Windows 95 Clients

To enable DHCP on your Windows 95 workstation, perform the following steps:

1. Double-click My Computer on your desktop.
2. Double-click Control Panel.
3. Double-click Network.
4. Right-click TCP/IP to open a context menu and select Properties.
5. Click Obtain an IP address automatically.

Note: If a TCP/IP address already exists in the TCP/IP properties window, it is removed once you click OK. It can be advantageous to record the existing TCP/IP address and subnet mask before you change the setting.
Figure 220. Enabling DHCP on a Windows 95 Client

6. Click OK.
7. Click OK again and follow the Windows prompts to restart your computer.

At this point, your Windows 95 client broadcasts a DHCPDISCOVER message. To verify the current Windows 95 IP configuration, use the Windows program WINIPCFG.EXE. It displays a dialog similar to the one shown in Figure 221.

Figure 221. Windows 95 IP Configuration Information -- WINIPCFG

11.5.2 Configuring DHCP on the IBM Network Station

If the IBM Network Station is new and just out of the box, the default settings within the non-volatile RAM (NVRAM) are set to use DHCP. Once you have completed the previous steps and configured the DHCP server so that it is running on the local subnet, plug the IBM Network Station into your network (attaching a display, keyboard, and mouse) and turn it on. The IBM Network Station attempts to locate a DHCP server first. If a DHCP server does not respond, it attempts to find a BOOTP server.

Tip: Default settings for the IBM Network Station are to boot DHCP first and BOOTP second (factory settings are DHCP '1', BOOTP '2'). We recommend that the BOOTP boot be disabled (set to 'D'). In a DHCP environment, there is no good reason to boot using BOOTP if the client supports DHCP. If the IBM Network Station times out before the DHCP server can respond, the IBM Network Station switches to BOOTP mode. This is undesirable because BOOTP leases are permanent.

If the IBM Network Station has been used previously and you are unsure what has been entered into the NVRAM, perform the following steps to reset the NVRAM to the factory defaults:

1. Power on the IBM Network Station. The IBM logo is followed by a memory and keyboard check.
2. After seeing the message NS0500 Search for Host System, press the ESC key to stop the startup sequence. If prompted for an administrator password, enter it now. This is the password an administrator sets using the IBM Network Station Manager program.
3. Invoke the IBM Network Station Boot Monitor program by pressing the following key sequence:
   • For 101/102 keyboards: Press and hold Left Shift + Left Alt + Left Ctrl. Press F1.
   • For 5250/3270 keyboards: Press and hold Left Shift + Left Alt. Press F1.
4. Enter NV at the Boot Monitor prompt (>) to access the NVRAM utility.
5. Enter L to reset the NVRAM.
6. Enter S to save the defaults into NVRAM.
7. Specify Y to the question Are you sure? and press Enter.
8. Power the IBM Network Station off and then on again. It starts with the factory settings previously described.

To verify the IP configuration of the IBM Network Station, let it boot at least once so that the configuration values are stored in NVRAM. After one successful boot, you can verify the configuration values by performing the following steps:

1. Stop the boot process at the message NS0500 Search for Host System by pressing the ESC key. You now see the Setup Utility display.
2. Press F5, Set the Network Parameters. You now see the IBM Network Station Set Network Parameters display.
3. In the IP Addressed from field, use the right-arrow key to move the cursor and highlight NVRAM. This displays the configuration values that are stored in NVRAM from the last boot.
4. Press F12 to cancel.
11.6 Selecting the Bootstrap Host for the IBM Network Station

It is possible to have the IBM Network Station send a request to a DHCP server for network information and have that information returned. The returned information contains the name or IP address of a different host that is the server from which the IBM Network Station downloads its kernel and user configuration data (see Figure 222).

Figure 222. Obtaining Network Configuration from AS1 and Kernel from AS2

You can also configure the AS/400 DHCP server to provide the options that are necessary to instruct the IBM Network Station to load its kernel from a server other than the DHCP host. You can specify up to two systems from which to load the user configuration data. When you have configured the AS/400 DHCP server, the following three IBM Network Station default classes are built for you in the Operations Navigator DHCP configuration:

• IBMNSM 1.0.0. This class is for token-ring-attached IBM Network Stations.
• IBMNSM 2.0.0. This class is for Ethernet-attached IBM Network Stations.
• IBMNSM 3.4.1. This class is for Twinax-attached IBM Network Stations.

Important: The previous list is not a comprehensive list of the IBM Network Station classes. At the time of writing, the Operations Navigator DHCP configuration included the classes previously described as examples. You must check the IBM Network Station documentation to verify the class name for the model of IBM Network Station you are installing. You must also create the corresponding class. Refer to IBM Network Station Manager Installation and Use, SC41-0664-01 or later, for information on IBM Network Station class names and configuration.

Note: Verify that you have the most current service pack available for 5763-XD1 V3R1M3, after service pack SF46891, before you configure the IBM Network Station using the default classes in the Operations Navigator DHCP configuration.

You can change these classes to serve the IBM Network Stations under them. In this example, the DHCP server provides the network information to the IBM Network Station, which loads its kernel and configuration data from another AS/400 system. The assumption is made that the DHCP server already functions correctly. It is also assumed that the default classes exist and that the user has not deleted them.

To configure a bootstrap server for the IBM Network Station other than the DHCP server, perform the following steps:

1. Use the AS/400 Operations Navigator to open the DHCP server configuration window.
2. Right-click the class you want to change (such as IBMNSM 1.0.0 for token-ring attached IBM Network Stations). This opens a context menu.
3. Select Properties.
4. Click the Options tab.
5. Select the 1 tag in the Available options window and click Add to specify the class subnet mask (255.255.255.0 in the example).
6. Select the 66 tag in the Available options window and click Add to specify a Trivial File Transfer Protocol (TFTP) server name or IP address. You must specify the IP address of the TFTP server, 10.1.1.3.
7. Option 67 is preconfigured with the boot file path /QIBM/ProdData/NetworkStation/kernel. You do not need to specify this information; it is already configured for you.

Important: Defining a subnet mask at the class level is global: all IBM Network Stations, regardless of the subnet location, will get this subnet mask. We are assuming here that there is only one subnet in the network. In general, we recommend defining the subnet mask option at the subnet level.

For this configuration to work, you must add user-defined options to the DHCP server settings.
To do that, use templates by performing the following steps:

8. Click Templates.
9. Click New. To add user option 211, the protocol to use for loading the user configuration data, specify the data shown in Figure 223.

Figure 223. Option 211 (Configuration Protocol) Template

10. Click OK.
11. Repeat the steps for the user options 212, 213, and 214 as shown in the following figures:

Figure 224. Option 212 (Terminal Configuration Server) Template
Figure 225. Option 213 (Configuration File Path Name) Template
Figure 226. Option 214 (Protocol to Use to Load the Terminal Configuration Data) Template

The new tags defined by the templates appear in the Available options window. Now that you have the user-defined tags, you must add the corresponding values. For each of the defined tags, first click the tag number and then click Add to add the value. Refer to Figure 227 on page 256 through Figure 230 on page 258 to add values to the user-defined tags.

Figure 227. User-Defined Option 211 -- Protocol to Download Configuration Data
Figure 228. User-Defined Option 212 -- Terminal Configuration Server Name or IP Address
Figure 229. User-Defined Option 213 -- Configuration File Path
Figure 230. User-Defined Option 214 -- Protocol to Download Terminal Config (Option 212)

For information on installing and configuring the IBM Network Station, refer to IBM Network Station Manager Installation and Use, SC41-0664. Figure 231 shows the options that you selected for the IBM Network Station configuration. Figure 232 on page 259 shows the option values.

Figure 231. IBM Network Station Options Summary -- Tags
Figure 232. IBM Network Station Options Summary -- Values

11.7 Summary

This chapter demonstrated how to get started with DHCP in a simple network. First, we helped you to understand your network addressing scheme and collect information about servers, routers, and lease times. Table 11 on page 242 and Table 12 on page 243 helped you gather the information. Next, you learned how to run the Operations Navigator DHCP configuration wizard, which took you through a series of steps to configure the DHCP server. This chapter also explained how to configure two popular DHCP clients: Windows 95 and the IBM Network Station. Finally, we described how to make the IBM Network Station boot from a TFTP server that is different from the DHCP server. We also explained how to add options to classes and how to create user-defined options.

Chapter 12. Using Multiple DHCP Servers to Minimize Failures

Using multiple DHCP servers decreases the probability of having a DHCP-related network access failure, but it does not guarantee against it. The DHCP protocol does not implement a full backup mechanism such as the one available in DNS through the primary and secondary DNS zone transfer concept. To avoid a single point of failure, configure two or more DHCP servers to serve the same subnet. If one server fails, the other can continue to serve the subnet.
Each of the DHCP servers must be accessible either by direct attachment to the subnet or by using a BOOTP/DHCP Relay Agent. As you read the rest of this section to determine whether you want to run multiple DHCP servers, keep in mind that you cannot run more than one DHCP server on any individual system. Multiple DHCP servers require multiple systems.

Because two DHCP servers cannot serve the same addresses, the address pools that you have defined for a subnet must be unique across DHCP servers. Therefore, when you are using two or more DHCP servers to serve a particular subnet, you must divide the complete list of addresses for that subnet among the servers. If a DHCP server for a particular subnet fails, the other DHCP server might be unable to service all of the requests from new clients because its limited pool of available addresses can become exhausted.

You can bias which DHCP server exhausts its pool of addresses first. Some DHCP clients tend to select the DHCP server that offers more options. To bias service toward the DHCP server with 70% of the available addresses, offer fewer DHCP options from the server that holds 30% of the available addresses for the subnet.

Note: To bias the service, the client must wait to receive offers from more than one server. In our tests, the Windows 95 DHCP client always accepted the offer from the first server that responded, regardless of the number of options offered by the DHCP servers.

This chapter describes how to implement DHCP backup techniques under the following conditions:

• There is a constraint on the number of IP addresses available, which you must split across different DHCP servers. This is sometimes referred to as the 70/30 split technique.
• There is no constraint on the number of available IP addresses. Each server manages a large enough pool of addresses to satisfy DHCP requests from all of the DHCP clients in the network.

12.1 Scenario Overview

This scenario is based upon scenario number one (see Chapter 11, "Start Here: Implementing DHCP in a Simple Network" on page 237), in which an AS/400 system acts as a DHCP server in a simple, flat TCP/IP network. However, this scenario introduces a second DHCP server in an attempt to eliminate the single point of failure. This section does not discuss DHCP client configuration. Instead, it concentrates on providing techniques to eliminate possible DHCP server outages.

Figure 233. Network Overview Diagram

12.1.1 Scenario Objectives

This scenario has the following two objectives:

1. To provide partial DHCP support for clients connecting to the network in the event a DHCP server fails.
2. To show techniques for providing full DHCP client support when there is no TCP/IP addressing constraint.

12.1.2 Scenario Advantages

This scenario shows how to provide support for DHCP clients that connect to your network if one of the DHCP servers is offline. It discusses several techniques and shows you how to implement them.

12.1.3 Scenario Disadvantages

Some of the techniques discussed in this chapter depend on TCP/IP addressing, which is the limitation that all network administrators and designers face. If you are free to use any type of IP addressing scheme, then implementing a backup DHCP server to support every client in your network is quite achievable. Unfortunately, you might be unable to use the IP addressing scheme of your choice. There is a chance that you already have a functioning TCP/IP network as well.
In these instances, you have to make some sacrifices and decide which method provides you and your network with the best fallback DHCP support.

12.1.4 Scenario Network Configuration

The following figure depicts the logical network topology for this scenario:

Figure 234. Scenario Network Diagram Showing Primary and Backup DHCP Servers

The following scenario characteristics influence the DHCP configuration:

• There are two AS/400 systems acting as DHCP servers, where one is the primary server and the other is the backup server.
• There is only a single subnet with a mask that allows up to 254 addresses and a client base of 250.
• There are no Relay Agents, routers, or bridges in the example.

12.2 Dividing the Address Pool across Two DHCP Servers

This method allows you to take your existing TCP/IP address pool on the primary server and allot a percentage of the address pool to the backup DHCP server. This method allows only partial support of the DHCP clients during a failure. This is due to a limited or constrained IP address range. You can divide up your address pool however you want to suit your purpose, but remember that each address pool must be unique. You can try to bias the client toward the primary DHCP server, but this may not be possible if the client always accepts the offer from the first DHCP server that responds. If the primary server fails, the backup still has an unused pool of IP addresses that has not been exhausted.

12.2.1 Objectives

There are four objectives:

• Divide the current IP address pool between two DHCP servers.
• Bias one server, if possible, to appear more favorable to the DHCP client by serving or providing more options to the client.
• Use a small lease time to return IP addresses to the DHCP pool more quickly.
• Use the existing DHCP server as the primary server and the new DHCP server as the backup.

12.2.2 Advantages

Using this method to provide a DHCP server backup is straightforward. You do not need to change your TCP/IP addressing structure on the network. Existing clients that have been served network information can remain connected even though the DHCP server is offline, depending on the lease time given and the client implementation of the DHCP code.

12.2.3 Disadvantages

If you decide to split the address pool in a percentage manner, you may encounter the following limitations:

• If the primary DHCP server that contains 70% of the address pool fails before it exhausts its address pool (or even uses a high percentage of it), the backup DHCP server does not contain enough addresses to service the remaining clients.
• Decreasing the lease time is an attempt to reduce the length of time that the client stays on the network. Think of this as a time share, although it is dependent on the client implementation of DHCP. As such, the client may or may not relinquish the address.

12.3 Task Summary

To configure a second DHCP server as a backup and to divide the address pool in this scenario, perform the following steps:

1. Verify hardware, software, and configuration prerequisites.
2. Reduce the primary DHCP server IP address pool and exclude the backup DHCP server from the IP address range.
3. Add the remaining IP addresses to the backup server.
4. Start the primary and backup DHCP servers.

12.3.1 Verify Hardware, Software, and Configuration Prerequisites

To verify the prerequisites for the backup DHCP server, see 11.3, "Verify Hardware, Software, and Configuration Prerequisites" on page 239.

12.3.2 Reduce the Primary DHCP Server IP Address Pool

You must decide how you want to divide the IP address pool on your existing DHCP server. In this example, there is only one IP address range and no complex subnetting issues. You are dividing the address pool in a 70/30 manner between the primary and backup DHCP servers. You must also exclude the IP address of the backup DHCP server. To divide the existing IP address pool on the primary DHCP server, perform the following steps:
12.3.1 Verify Hardware, Software, and Configuration Prerequisites To verify the prerequisites for the backup DHCP server, see Chapter 11.3, “Verify Hardware, Software, and Configuration Prerequisites” on page 239. 12.3.2 Reduce the Primary DHCP Server IP Address Pool You must decide how you want to divide the IP address pool on your existing DHCP server. In this example, there is only one IP address range and no complex subnetting issues. You are dividing the address pool on a 70/30 basis between the primary and backup DHCP servers. You must also exclude the IP address of the backup DHCP server. To divide the existing IP address pool on the primary DHCP server, perform the following steps: 1. Start the AS/400 Operations Navigator. 2. Click As1.mycompany.com to select the system name. This is your existing DHCP server that becomes the primary DHCP server. 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. This starts the DHCP server configuration. 7. Right-click the subnet you want to divide. This opens a context menu. 8. Select Properties. 9. Click the Address Pool tab. 10.Exclude the backup DHCP server’s IP address from this subnet. Click Add and specify the IP address of the backup DHCP server. 11.Reduce the IP address pool range to 70% of its maximum by specifying a new End address as shown in Figure 235 on page 265. Figure 235. Reducing the IP Addressing Range 12.Click OK. Note: The first three IP addresses (10.1.1.1, 10.1.1.2, and 10.1.1.3) are reserved for a future router, the primary DHCP server, and the backup DHCP server. This leaves a possible 251 IP addresses from 10.1.1.4 through 10.1.1.254. Specify the upper limit of the range as 10.1.1.175 on the primary DHCP server. 12.3.3 Change the Number of Options on the Primary and Backup DHCP Servers Some DHCP clients favor a DHCPOFFER packet that contains more DHCP options than one from another DHCP server. To allow the primary DHCP server to exhaust its IP address range first, it is necessary to configure the primary with more options than the backup DHCP server. 12.3.4 Add the Remaining IP Addresses to the Backup Server You must now add an IP address range on the backup host to use during fall back. This address range is the remaining portion of the pool that you split on the primary DHCP server. To add an address pool that serves as the backup DHCP server, perform the following steps: Note: Configure the DHCP server by using the AS/400 Operations Navigator GUI. Operations Navigator automatically starts the DHCP configuration wizard, which helps you to create a basic DHCP server configuration. The wizard starts only the first time you configure DHCP on the AS/400 system. These steps are explained in more detail in “Configure DHCP Server through Operations Navigator” on page 243. To start the DHCP configuration wizard, perform the following steps: 1. Start the AS/400 Operations Navigator. 2. Click As5.mycompany.com to select the system name of your backup DHCP server. Note: In the testing environment, an attempt was made to get both a Windows 95 client and the IBM Network Station to favor a certain DHCP server. The attempt to accomplish this by sending more options and setting a longer lease time did not work. A Windows 95 client and the IBM Network Station did not appear to wait long for all incoming DHCPOFFERs to arrive. Both appeared to take the first offer that was sent to them.
In the latest level of code being developed for the next release of the IBM Network Station (April/May 98), the boot monitor has been enhanced to look for multiple DHCPOFFERs arriving. Note Using Multiple DHCP Servers to Minimize Failures 267 Figure 236. Selecting As5.mycompany.com Using the AS/400 Operations Navigator 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. This starts the DHCP configuration wizard. 7. Click Next. 8. Select Yes to add a new subnet to the DHCP server. 9. Leave the Twinax IP workstation controller address box blank and click Next. 10.Define the range of addresses to use within the subnet. Specify a range that includes the remaining 126 IP addresses. If the DHCP configuration wizard is not shown, it is likely that a DHCP configuration already exists. To start the wizard and replace the existing configuration, select File > New Configuration, or just add the new subnet to the existing configuration. Note 268 AS/400 TCP/IP DNS and DHCP Support Figure 237. Backup DHCP Server Subnet Range 11.Define a lease time for the client to keep the address served. Click Next to use the default lease time of one day. 12.Click Next to not deliver the IP address of the domain name server. There is no DNS server in this scenario. 13.Select No to the question for setting other options and click Next. 14.Select Yes to Start the DHCP server when TCP/IP starts? and select No to start the DHCP server now? Click Next. 15.The DHCP configuration summary window shows all of the options that you have selected so far. Click Finish. The lease duration is an important consideration that is discussed in more detail in Chapter 12.3.5, “Change the Lease Time on the Primary and Backup DHCP Servers” on page 269. Note Using Multiple DHCP Servers to Minimize Failures 269 Figure 238. DHCP Configuration Summary - AS5.mycompany.com 16.Now that the DHCP server configuration is displayed, add a subnet mask for the clients by right-clicking the new subnet, BackupSubnet, and opening a context menu. 17.Select Properties. 18.Click the Options tab to add a subnet mask that is served to the clients. 19.Highlight option 1, the subnet mask from the Available options window, and then click Add. 20.At the bottom of the display, specify 255.255.255.0 for the subnet mask that the clients use. 21.Click OK. 12.3.5 Change the Lease Time on the Primary and Backup DHCP Servers The lease time is the amount of time that the client is allowed to keep the IP address served using DHCP. The default lease time is one day, or 86400 seconds. Depending on the number of available IP addresses in your address pool, how many of those addresses are typically in use by DHCP clients, and how often the DHCP clients restart or change subnets, you might need to change the duration of the lease. If you have a large number of IP addresses available in the address pool and relatively few DHCP clients, you can increase the length of the lease duration. Increasing the length reduces the number of lease renewals across your network and slightly reduces the load on your DHCP server. If you have a constraint with the number of available IP addresses in your DHCP address pool and if most of those addresses are in use at any given time, it is desirable to reduce the lease duration. Additionally, it is also beneficial to reduce the lease time if your DHCP clients are mobile, changing from one subnet to 270 AS/400 TCP/IP DNS and DHCP Support another, and requiring a new IP address. 
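As a rough aid when choosing a lease duration, the sketch below estimates when clients come back to renew and how much renewal traffic a given client count generates. It is an illustration only: the 50% and roughly 85% renewal points are the typical client behavior described in this chapter, not values read from the AS/400 server.

def renewal_schedule(lease_seconds, clients):
    # Typical DHCP clients try to renew at about half of the lease time
    # and try again at roughly 85% of it if the first attempt gets no answer.
    t1 = lease_seconds * 0.50
    t2 = lease_seconds * 0.85
    renewals_per_hour = clients * 3600 / lease_seconds
    return t1, t2, renewals_per_hour

for lease in (86400, 43200, 3600):   # one day, 12 hours, one hour
    t1, t2, rate = renewal_schedule(lease, clients=250)
    print(f"lease {lease:>6}s: first renewal at {t1:>7.0f}s, "
          f"retry at {t2:>7.0f}s, about {rate:.0f} renewals per hour")

With the one-day default and 250 clients the renewal load is only about ten requests per hour; cutting the lease to one hour raises it to roughly 250 requests per hour, which is the trade-off discussed in this section.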
A reduction in the lease time returns the IP address to the pool more quickly. This allows it to be available for another client. The trade-off is an increased amount of network traffic requesting lease renewals, which the DHCP server needs to service. This trade-off is considered minor because it is usually more important to have the clients connect to the network. In fall-back scenarios such as this one, where the primary DHCP server can fail, you need to set the lease time to a smaller value. The clients attempt to renew their leases on the IP address when half of the lease time has expired. They make the attempt again when approximately 85% of the time has expired (provided the DHCP server has not extended the lease by then). 12.3.6 Start the Primary and Backup DHCP Servers If you use the method previously described, it is suggested that you start the primary DHCP server first and leave it running until it has exhausted nearly all of its IP range. Indeed, it is conceivable to leave the backup server offline until the primary server fails. This is because each DHCP client implementation behaves slightly differently. During the testing for this book, the primary DHCP server was not favored over the backup even when more options were provided to the client. To ensure that each client is always served an address from the primary server from initialization, start the primary server first. Once the client has contacted the primary DHCP server, it always attempts to return to that server to request a lease renewal. To start the DHCP servers, perform the following steps: 1. From the AS/400 Operations Navigator, right-click DHCP to open a context menu (see Figure 239 on page 270). 2. Select Start. 3. Repeat the process for the backup server. Figure 239. Starting the DHCP Server 12.4 Providing Full-DHCP Client Support This section describes how to provide full DHCP client support from two DHCP servers in a simple TCP/IP network that has no constraints on IP addresses. The configuration on each DHCP server has a large enough IP address range or pool to service 100% of the clients in the network. In this example, the network has 250 clients. Use a subnet mask over a class A private network to provide up to 510 IP addresses in the pool. 12.4.1 Objectives Using two DHCP servers, this scenario demonstrates a method that allows each server to service 100% of the client base, even if one of the DHCP servers is offline for any reason. 12.4.2 Advantages This method has the major advantage of allowing all of the clients to connect to the network during a DHCP failure. 12.4.3 Disadvantages This scenario assumes that you have an unlimited range of TCP/IP addresses. As such, many IP addresses are not in use at any given time. You can consider these IP addresses wasted. 12.4.4 Network Addressing Scope Planning This example uses the network 10.1.1.0. We recommend that you perform a hierarchical partitioning of a network to ease administration. To accomplish this, use an addressing scheme of the form 10.x.y.z, where x = site or region, y = department, z = hosts, and x + y = subnet. The small example network (10.1.1.0 with a mask of 255.255.255.0) allows up to 254 hosts. If you need to expand your network to connect more hosts, change the mask to 255.255.254.0. This reduces the subnet addressing scope (x + y) by one bit, but it creates one extra bit for the host-addressing scope (z). That generates up to 510 hosts.
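The host counts quoted in this section (254 and 510, and 1022 later on) follow directly from the number of host bits left by the subnet mask. The short Python sketch below, included here only as an illustration, reproduces them with the standard ipaddress module:

import ipaddress

# Usable host addresses = all addresses in the subnet minus the
# network address and the broadcast address.
for mask in ("255.255.255.0", "255.255.254.0", "255.255.252.0"):
    net = ipaddress.IPv4Network("10.1.0.0/" + mask)
    print("mask", mask, "->", net.num_addresses - 2, "usable host addresses")

With the 255.255.254.0 mask, the two DHCP servers in this scenario can each take 255 of the 510 usable addresses, which is the split configured in 12.4.7 and 12.4.8.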
This means that the network address changes to 10.1.0.0, and the host address range is from 0.0.0.1 through 0.0.1.254. This technique shows an easy way to increase the number of host addresses that are available to the subnet or the network. It also lets the network grow without too much change. You can use this technique to increase the number of supported clients by using the appropriate subnet mask. For example, a subnet mask of 255.255.252.0 gives you 1022 addresses with which you can support up to 511 clients on each DHCP server. If you use this method to allow support for up to 250 hosts on each DHCP server, they have a combined TCP/IP address range for 510 clients. 272 AS/400 TCP/IP DNS and DHCP Support 12.4.5 Task Summary The steps required to configure this scenario are as follows: 1. Verify hardware, software, and configuration prerequisites. 2. Enlarge the primary DHCP server IP address pool. 3. Add the remaining IP addresses to the backup server. 4. Start the primary and backup DHCP servers. 12.4.6 Verify Hardware, Software, and Configuration Prerequisites To verify the prerequisites for the backup DHCP server, see Chapter 11.3, “Verify Hardware, Software, and Configuration Prerequisites” on page 239. 12.4.7 Enlarge the Primary DHCP Server IP Address Pool In this example, use the IP address range from 10.1.0.1 through 10.1.1.254 with a mask of 255.255.254.0. This provides up to 510 host addresses on network 10.1.0.0. Refer to Chapter 12.4.4, “Network Addressing Scope Planning” on page 271. The host portion of the network address is divided in half, allowing the primary DHCP server to have 255 addresses and the backup server to have 255 addresses. In this example, you cannot exclude the IP address of the primary server, backup server, or router because they exist in the address range that you configure on the backup DHCP server. To create the new addressing pool on the primary DHCP server, perform the following steps: 1. Start the AS/400 Operations Navigator. 2. Click As1.mycompany.com to select the system name. This is the existing DHCP server that becomes the primary DHCP server. 3. Double-click on Network. 4. Double-click on Server. 5. Double-click on OS/400. 6. Double-click on DHCP. This starts the DHCP server configuration. 7. Right-click the subnet that you want to divide. This opens a context menu. Select Properties. 8. Click the Address Pool tab. Because you are changing the subnet range by altering the mask, you must also ensure that the TCP/IP interface for each physical connection to the same network also has had the mask changed to 255.255.254.0. Note Using Multiple DHCP Servers to Minimize Failures 273 9. Add the new TCP/IP address range. In this case, use the range from 10.1.0.1 through 10.1.0.254 with a mask of 255.255.254.0, shown in Figure 240 on page 273. Figure 240. As1.mycompany.com Subnet Configuration 10.Click OK. 12.4.8 Add the Remaining IP Addresses to the Backup DHCP Server You need to configure the remaining 50% of the IP address pool on the backup DHCP server. In this example, add the range from 10.1.1.0 through 10.1.1.254 with a subnet mask of 255.255.254.0. This provides the other 255 host addresses on network 10.1.0.0. Please refer to Chapter 12.4.4, “Network Addressing Scope Planning” on page 271. You must also exclude the IP address of the primary server, backup server, and future router because their addresses exist in this range. The steps to do this are contained here. 
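To make the 50/50 division used in 12.4.7 and 12.4.8 concrete before walking through the panels, the sketch below lays out the two halves of network 10.1.0.0 with mask 255.255.254.0 and shows that the fixed addresses of this scenario (the future router and the two servers at 10.1.1.1 through 10.1.1.3, as noted in 12.3.2) fall in the backup server's half. It is an illustration only; the variable names are not part of any AS/400 interface.

import ipaddress

network = ipaddress.IPv4Network("10.1.0.0/255.255.254.0")
# Split the /23 into its two /24 halves: the primary DHCP server manages
# the lower half (10.1.0.x), the backup server the upper half (10.1.1.x).
primary_half, backup_half = network.subnets(prefixlen_diff=1)

# Hosts in this scenario with fixed addresses that must never be leased.
reserved = {
    "10.1.1.1": "reserved for a future router",
    "10.1.1.2": "primary DHCP server (AS1)",
    "10.1.1.3": "backup DHCP server (AS5)",
}

print("Primary pool:", primary_half)   # 10.1.0.0/24
print("Backup pool :", backup_half)    # 10.1.1.0/24
for addr, role in reserved.items():
    half = "backup" if ipaddress.IPv4Address(addr) in backup_half else "primary"
    print("exclude", addr, "(" + role + ") from the", half, "pool")

This is why 12.4.7 adds no exclusions on the primary server, while 12.4.8 excludes the primary server, the backup server, and the future router from the range configured on the backup server.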
To create the new addressing pool on the backup DHCP server, perform the following steps: 1. Start the AS/400 Operations Navigator. 2. Click As5.mycompany.com to select the system name. This is your existing backup DHCP server. 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. This starts the DHCP server configuration. 274 AS/400 TCP/IP DNS and DHCP Support 7. Right-click on the subnet that you want to divide. This opens a context menu. Select Properties. 8. Click the Address Pool tab. 9. Add the new TCP/IP address range. In this case, use the range from 10.1.1.0 through 10.1.1.254 with a mask of 255.255.254.0, shown in Figure 241 on page 274. Figure 241. As5.mycompany.com Subnet Configuration 10.Click Add to exclude the IP addresses of the primary server, backup server, and future router. 11.Click OK. 12.4.9 Start the Primary and Backup DHCP Servers You can now start the primary and backup DHCP servers as follows: 1. From the Operations Navigator, right-click DHCP to open a context menu (see Figure 242 on page 275). 2. Select Start. 3. Repeat the process for the backup server. Using Multiple DHCP Servers to Minimize Failures 275 Figure 242. Starting the DHCP Server 12.5 Summary It is important to remember that installing a backup DHCP server on your network does not guarantee that addresses are available for all clients during an unplanned primary DHCP server outage. If you have a constrained IP addressing scheme, then you can only provide a partial fall-back support. This is sometimes referred to as the 70/30 split technique. You can favor which server the DHCP client chooses by providing more options to the client. In the test environment, however, the first DHCP server that responded was selected, regardless of the options offered. If you do not have any constraints on your IP addressing scheme, it is possible to serve every client from any server. Using this method results in most of the IP addresses not being used unless one of the DHCP servers fails. The IP addresses in the pool are effectively unusable by any other non-DHCP client. This is considered wasteful but you might be able to afford it if you are using a class A 10.x.x.x network. There is another alternative with a primary DHCP server and a backup DHCP server using a BOOTP/DHCP Relay Agent to forward packets to both servers but that introduces a delay when forwarding to the backup DHCP server. This option is discussed in Chapter 14, “Multiple Subnets, DHCP Servers, and Relay Agents” on page 313. Multiple DHCP servers require multiple systems. The DHCP servers must be accessible either by direct attachment to the subnet or by using a BOOTP/DHCP Relay Agent. 276 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 277 Chapter 13. Multiple Subnets and DHCP Servers If your network is larger than the one shown in 11, “Start Here: Implementing DHCP in a Simple Network” on page 237, you may have to work with multiple LANs and subnets. This chapter covers the considerations you need to take into account when providing DHCP services to a simple, multi-LAN and multi-subnet network. 13.1 Scenario Overview This scenario provides DHCP services to clients that are connected to multiple LANs. The network is still fairly simple. It has no routers and only two AS/400 systems that you can use as DHCP servers. Further, it is assumed that there are no more than 254 clients in subnet A and no more than 510 clients in subnet B. 
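A point that matters throughout this chapter is how a DHCP server with more than one interface decides which address pool an incoming broadcast belongs to: it compares the address of the receiving interface (or of the relay agent, when one is present) against each configured subnet and mask, which is what the pr_queryAddr and pr_check_subnet_movement entries in the logs later in this chapter are doing. The following Python sketch is a simplified illustration of that lookup, not AS/400 code, using this scenario's subnets:

import ipaddress

# Subnets configured on the DHCP server, keyed by the names used in
# this chapter.
configured_subnets = {
    "Subnet_A_10.1.9.0": ipaddress.IPv4Network("10.1.9.0/255.255.255.0"),
    "Subnet_B_10.1.0.0": ipaddress.IPv4Network("10.1.0.0/255.255.254.0"),
}

def select_subnet(receiving_interface):
    # The receiving interface address is the "clue": the request is
    # served from the configured subnet that contains it.
    clue = ipaddress.IPv4Address(receiving_interface)
    for name, subnet in configured_subnets.items():
        if clue in subnet:
            return name
    return None   # no matching pool: the server only logs the request

print(select_subnet("10.1.9.2"))   # AS1 interface on subnet A
print(select_subnet("10.1.0.3"))   # AS1 interface on subnet B

When AS1 has only Subnet_A_10.1.9.0 configured, a DHCPDISCOVER arriving on its 10.1.0.3 interface finds no matching pool, which is exactly the "not configured to manage" situation shown in the AS1 log in 13.3.8.1.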
The DHCP server AS1 is configured to serve addresses to the clients that request an address in subnet A. The DHCP server AS5 is configured to serve addresses to the clients that request an address in subnet B. However, there is also a need for communication between the two subnets by way of IP datagram forwarding. Therefore, a second TCP/IP interface is configured on the AS1 AS/400 system. Figure 243 shows the sample network. Figure 243. Scenario Network Diagram 13.1.1 Scenario Objectives The objectives of this scenario are to: 1. Show that if you configure AS1 as a DHCP server for subnet A and AS5 as a DHCP server for subnet B, AS1 still receives DHCPDISCOVER packets from subnet B through the second IP interface (10.1.0.3 in the sample network). 2. Show how to configure AS1 as a DHCP server for both subnet A and subnet B and how to use AS5 as a backup for subnet B only. This scenario assumes that subnet B is more critical to the business. 13.1.2 Scenario Advantages This scenario shows how you can implement a simple, multi-subnet network using a single AS/400 system as a gateway, a DHCP server, and a DNS server. The proposed solution implements a DHCP server on AS1, which is configured as a multi-homed host. A multi-homed host is a host with two or more physical interfaces to multiple subnets. Even though it is not shown in this scenario, you can configure AS5 as a backup DHCP server for network B. 13.1.3 Scenario Disadvantages 1. The DHCP protocol flow can be confusing. When C2 broadcasts a DHCPDISCOVER packet, only the DHCP server AS1 sees it. When C1 broadcasts a DHCPDISCOVER packet, both AS1 and AS5 see it because AS1 is multi-homed. If you have not configured the AS1 interface on subnet B for DHCP serving, it logs the DHCP packets that it is not servicing (those that are intended for AS5). 2. There is no full DHCP server backup. If AS1 fails, clients on subnet A have no access to a DHCP server. AS5 performs DHCP server backup functions for subnet B. To provide full DHCP server backup, you need to introduce a relay agent to back up AS1. Refer to Chapter 14, “Multiple Subnets, DHCP Servers, and Relay Agents” on page 313, for more information on relay agents. 13.1.4 Scenario Network Configuration Figure 244 on page 279 shows the network detail for this scenario. Note that subnet B has a network ID of 10.1.0.0 and a mask of 23 contiguous bits, allowing a range of 510 TCP/IP addresses. The main characteristics of this scenario’s network are as follows: • There are two physical network segments. • There are two subnets, one for each physical segment. • There is one multi-homed host, AS1. It has one physical interface on subnet A and another one on subnet B. • AS1 is the gateway between both subnets with one physical interface on each one and IP forwarding turned on. • There are two DHCP servers, AS1 and AS5, in the first part of this scenario. AS1 serves subnet A and AS5 serves subnet B. • There is only one DHCP server, AS1, serving both networks in the second part of this scenario. • There is a primary DNS server, AS1, and a secondary DNS server, AS5. • The network implements a class A TCP/IP addressing scheme, and subnet B uses a complex mask, 255.255.254.0. Figure 244.
Scenario Network Topology 13.2 Task Summary To configure the DHCP server and clients in this scenario, perform the following steps: 1. Configure and start a TCP/IP interface on both DHCP servers. 2. Plan the DHCP server configuration and gather information to configure the DHCP servers. 3. Configure the DHCP support on AS1 for Subnet A and on AS5 for Subnet B. 4. Start the DHCP server support on DHCP AS1 and AS5. 5. Configure the Windows 95 client for DHCP support. 6. Configure the IBM Network Station client for DHCP support. Subnet B Subnet A C1 PC 0004ac946b53 C2 10.1.9.0 10.1.0.0 .3 .2 .2 255.255.254.0 255.255.255.0 Primary DNS Secondary DNS AS1 DHCP Server AS5 DHCP Server .9 .10 NT Server AS9 .200 LAN Printer C2 Network Station 0000e5683796 mycompany.com NT1 280 AS/400 TCP/IP DNS and DHCP Support 13.3 Configuration Overview 1. Configure TCP/IP and add an IP interface on AS5 and two IP interfaces on AS1. 2. Configure the DHCP server support through Operations Navigator on both AS/400 systems. 3. Configure the clients to use DHCP. 13.3.1 Configuring TCP/IP Interfaces on AS1 To configure the TCP/IP interface, perform the following steps: 1. On an AS/400 command entry display, type the command: GO CFGTCP Press Enter to display the Configure TCP (CFGTCP) menu. 2. Select option 1 (Work with TCP/IP interfaces) to display the Work with TCP/IP Interfaces display (see Figure 245). 3. Select option 1 to add a TCP/IP interface and specify the TCP/IP address of the host. Press Enter to continue. 4. Add the line description name and the subnet mask for the interface. 5. Press Enter to create the TCP/IP interface. 6. After you have done this for the interface addresses 10.1.0.3 and 10.1.9.2, you see a display similar to the one in Figure 245. Figure 245. Work with TCP/IP Interfaces -- AS1 7. Press F11 to view the status of the interface and verify that the status is active. If it is not, start the interface with option 9. 13.3.2 Gathering Information to Configure DHCP Servers To use Operations Navigator DHCP configuration effectively, you need to know how you want to set up and manage your networks and subnets with DHCP. You also need to know what address range or ranges you want to use for leasing. You must decide which system is the DHCP primary server and which one performs Work with TCP/IP Interfaces System: As1 Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 9=Start 10=End Internet Subnet Line Line Opt Address Mask Description Type 10.1.0.3 255.255.254.0 TRNLINE1 *TRLAN 10.1.9.2 255.255.255.0 TRNLINE2 *TRLAN 127.0.0.1 255.0.0.0 *LOOPBACK *NONE Bottom F3=Exit F5=Refresh F6=Print list F11=Display interface status F12=Cancel F17=Top F18=Bottom Multiple Subnets and DHCP Servers 281 DHCP backup functions. Further, you need to know which IP addresses to reserve for special hosts such as routers, DNS servers, and firewalls. It is useful to refer to a network diagram that shows the subnet masks and IP addresses for your networks, routers, and clients while you are configuring DHCP. The starting point in this scenario is the network diagram that is shown in Figure 244 on page 279. The information shown in the following tables is based upon the network picture and other network data. 13.3.2.1 AS1 DHCP Server and Administered Subnets Information Table 13 shows general information about AS1 as a TCP/IP host, and Table 14 provides more specific information about AS1 as a DHCP server. Table 13. 
Planning the DHCP Server - AS1 TCP/IP Information Note: The Configuration Reference column in the following tables points to the place in Operations Navigator DHCP server configuration where you can configure the particular parameter. You can enter many of these configuration options through the DHCP configuration wizard the first time you configure DHCP. Table 14. Planning the DHCP Server AS1 -- DHCP Server Overview Host Name AS1 Description Subnet A DHCP server Domain Name mycompany.com IP Address 10 . 1 . 0. 3 Mask 255.255.254.0 Line Description TRNLINE1 IP Address 10 . 1 . 9 . 2 Mask 255.255.255.0 Line Description TRNLINE2 # Question Answer Configuration Reference 1 Is the BOOTP Server already configured on your system? No DHCP configuration wizard 2 Do you want to migrate the BOOTP configuration to DHCP? N/A File -->Migrate BOOTP 3 What is the default lease time for this server? 12 hours Global-->Properties-->Leases 4 Start the DHCP server when TCP/IP starts? Yes Server Properties --> General 5 List the DHCP server IP interfaces that will be serving DHCP clients. 10.1.9.2 See network diagram. 6 List the subnets that will be administered by this DHCP server. 10.1.9.0 See subnet planning table 7 Do you want to add a new subnet to be administered by this server? Yes Global --> New Subnet - Basic Global-->New Subnet - Advanced See subnet planning table 282 AS/400 TCP/IP DNS and DHCP Support Table 15 provides information about subnet 10.1.9.0 being administered by AS1 DHCP server. Notice that AS1 administers 100% of the IP addresses available in this subnet. Table 15. Planning the Subnet 10.1.9.0 Administered by AS1 from IP Interface 10.1.9.2 8 Do you want to log DHCP server activity? Yes Server Properties --> Logging 9 Do you want the DHCP server to support any client from any subnet? Yes Server Properties --> Client Support 10 Do you want the DHCP server to support BOOTP clients? No Server Properties --> Client Support 11 Do you want the DHCP server to reject requests from specific clients (for example, for security reasons)? No Global->Properties-> Exclude Client 11 Can your DHCP clients (other than IBM Network Stations) identify the class they belong to? No 12 If answer to 11 is Yes, do you want to add a new class to serve the DHCP clients that belong to that class? N/A Global --> New Class # Question Answer Configuration Reference 1 Subnet name Subnet_A_10.1.9. 0 Subnet Properties --> General 2 Subnet description Services Subnet Properties --> General 3 Subnet address 10.1.9.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.255.0 Subnet Properties --> Address Pool 5 Address range 10.1.9.1 10.1.9.254 Subnet Properties --> Address Pool 6 Lease time Inherit from server (12 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool Name: Router x AS1 NT1 Description: Reserved for future router DNS/DHCP server NT file server IP address: 10.1.9.1 10.1.9.2 10.1.9.10 8 Domain Name Server IP address to deliver to clients in this subnet. 10.1.9.2 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 
10.1.9.2 Subnet Properties --> Options--> Option 3 (Router) 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.254.0 10.1.9.2 10.1.9.2 10.1.0.2 Subnet Properties --> Options--> # Question Answer Configuration Reference Multiple Subnets and DHCP Servers 283 13.3.2.2 AS5 DHCP Server and Administered Subnets Information Table 16 shows general information about AS5 as a TCP/IP host and Table 17 provides more specific information about AS5 as a DHCP server. Table 16. Planning the DHCP Server -- AS5 TCP/IP Information Note: The Configuration Reference column in the following tables points to the place in Operations Navigator DHCP server configuration where you can configure the particular parameter. You can enter many of these configuration options through the DHCP configuration wizard the first time you configure DHCP. Table 17. Planning the DHCP Server AS5 -- DHCP Server Overview Host Name As5 Description Subnet B DHCP server Domain Name mycompany.com IP Address 10 . 1 . 0. 2 Mask 255.255.254.0 Line Description TRNLINE1 # Question Answer Configuration Reference 1 Is the BOOTP Server already configured on your system? No DHCP configuration wizard 2 Do you want to migrate the BOOTP configuration to DHCP? N/A File -->Migrate BOOTP 3 What is the default lease time for this server? 12 hours Global-->Properties-->Leases 4 Start the DHCP server when TCP/IP starts Yes Server Properties --> General 5 List the DHCP server IP interfaces that will be serving DHCP clients. 10.1.0.2 See network diagram. 6 List the subnets that will be administered by this DHCP server. 10.1.0.0 See subnet planning table 7 Do you want to add a new subnet to be administered by this server? Yes Global --> New Subnet - Basic Global-->New Subnet - Advanced See subnet planning table 8 Do you want to log DHCP server activity? Yes Server Properties --> Logging Option 67 (boot file name) and option 51 (IP address lease time) for IBM Network Station clients on this subnet are those shipped by default in Class IBMNSM 1.0.0. This class is for token-ring attached IBM Network Stations and the default values are as follows: Option 67: /QIBM/ProdData/NetworkStation/kernel Option 51: 1 day Note 284 AS/400 TCP/IP DNS and DHCP Support Table 18 provides information about subnet 10.1.0.0 administered by AS5 DHCP server. Notice that AS5 administers 100% of the IP addresses available in this subnet. Table 18. Planning the Subnet 10.1.0.0 Administered by AS5 from IP Interface 10.1.0.2 9 Do you want the DHCP server to support any client from any subnet? Yes Server Properties --> Client Support 10 Do you want the DHCP server to support BOOTP clients? No Server Properties --> Client Support 11 Do you want the DHCP server to reject requests from specific clients (for example, for security reasons)? No Global->Properties-> Exclude Client 11 Can your DHCP clients (other than IBM Network Stations) identify the class they belong to? No 12 If answer to 11 is Yes, do you want to add a new class to serve the DHCP clients that belong to that class? N/A Global --> New Class # Question Answer Configuration Reference 1 Subnet name Subnet_B_10.1.0. 
0 Subnet Properties --> General 2 Subnet description Marketing and Manufacturing Subnet Properties --> General 3 Subnet address 10.1.0.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.254.0 Subnet Properties --> Address Pool 5 Address range 10.1.0.0 10.1.1.254 Subnet Properties --> Address Pool 6 Lease time Inherit from server (12 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool Name: Router x AS5 AS1 Description: Reserved for future router DNS/DHCP server DNS/Gateway IP address: 10.1.0.1 10.1.0.2 10.1.0.3 8 Domain Name Server IP address to deliver to clients in this subnet. 10.1.0.3 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 10.1.0.3 Subnet Properties --> Options--> Option 3 (Router) 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.254.0 10.1.9.2 10.1.9.2 10.1.0.2 Subnet Properties --> Options--> # Question Answer Configuration Reference Multiple Subnets and DHCP Servers 285 13.3.3 Configuring DHCP Server Support in AS1 You must configure the DHCP server using the AS/400 Operations Navigator GUI for servicing the DHCP clients on subnet A. You are configuring DHCP on a system without an existing configuration. Refer to Chapter 11.4.3, “Configure DHCP Server through Operations Navigator” on page 243, for information on how to reset the existing DHCP configurations and start over. Operations Navigator automatically starts the DHCP Configuration Wizard. This wizard helps you create a basic DHCP server configuration. To start the DHCP configuration wizard, perform the following steps: 1. Start the AS/400 Operations Navigator. 2. Click the system As1.mycompany.com to select it. 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. This starts the DHCP configuration wizard. 7. Click Next. 8. Select the default lease time for the whole network (all subnets administered by this server): 12 hours Click Next. 9. Select Yes to add a new subnet to the DHCP server. Start configuring Subnet A -- 10.1.9.0. 10.Answer No to the question “Will this subnet manage twinax devices?” Click Next. 11.Select Define subnet based on entire subnet. You are configuring the whole subnet range. 12.Specify the information as shown in Figure 246. Click Next. The sequence of the following steps might be different in your situation, depending on such factors as whether you already have a BootP table on your system and the navigation path you have chosen. Consider this as just an example. The important point is that you understand how to implement the planned configuration. Note 286 AS/400 TCP/IP DNS and DHCP Support Figure 246. Defining Subnet A -- 10.1.9.0 Based on Entire Subnet 13.Exclude the IP addresses of the hosts in this subnet that you have already configured with permanent IP addresses. In this scenario, those hosts are reserved for a router in the event that one is needed in the future for AS1 and NT1. Figure 247. Exclude IP Addresses Already Assigned to Hosts in Subnet A -- 10.1.9.0 Click Next. 14.Accept Inherit the server’s default lease time (12 hours). 15.Answer Yes to the question “Would you like the DHCP Server to deliver gateway addresses to clients in this subnet?” Multiple Subnets and DHCP Servers 287 16.Add the gateway IP address that is the AS1 interface on this subnet. 
In this simple network, the AS/400 system AS1 acts as a gateway between the two subnets. Figure 248. Configuring Subnet A’s Gateway Information Click Next. 17.Answer Yes to the question “Would you like DHCP to deliver domain name server addresses to clients in this subnet?” In this simple network, the same AS/400 system (AS1) provides DNS services for the whole network. 18.Add the DNS IP address. Notice that the DNS server for mycompany.com runs on the same AS/400 system, AS1 (see Figure 244 on page 279). Figure 249. Configuring DNS Information for Clients in Subnet A -- AS1 DHCP Server Click Next. 288 AS/400 TCP/IP DNS and DHCP Support 19.Answer No to the question “Would you like the DHCP server to deliver the domain name to the clients in this subnet?” 20.Answer No to the question “Would you like to set other options for this subnet?” Click Next. 21.Select Support any client for this subnet and click Next. 22.Answer Yes to the question “Do you want the DHCP server to start when TCP/IP starts?” 23.Answer No to the question “Do you want the DHCP server to start now?” 24.At the New DHCP Configuration Summary window, click Finish. Figure 250. New DHCP Configuration Summary -- Subnet_A_10.1.9.0 25. The DHCP Server Configuration for As1.mycompany.com window now looks similar to the one shown in Figure 251. Figure 251. DHCP Server Configuration -- As1.mycompany.com 26.To add the subnet mask option for the subnet, perform the following steps: Multiple Subnets and DHCP Servers 289 1. Right-click Subnet Subnet_A_10.1.9.0 to open a context menu. Select Properties. 2. Click Options. 3. Select tag 1, Subnet mask, and click Add. 4. In the Subnet Mask field, specify the following mask for all clients in subnet A: 255.255.255.0 See the example in Figure 252. Figure 252. Adding the Subnet Mask Option for Subnet A You have completed the configuration of Subnet A in DHCP server AS1. Figure 253 on page 290 shows the options configured for Subnet_A_10.1.1.0 on DHCP Server AS1. 290 AS/400 TCP/IP DNS and DHCP Support Figure 253. Configured Options for Subnet A -- AS1 DHCP Server 13.3.4 Configuring TCP/IP Interfaces on AS5 To configure the TCP/IP interface, perform the following steps: 1. On an AS/400 command line, type the command: GO CFGTCP Press ENTER to display the Configure TCP (CFGTCP) menu. 2. Select option 1 (Work with TCP/IP interfaces) to display the Work with TCP/IP Interfaces display (see Figure 245). 3. Select option 1 to add a TCP/IP interface and specify the TCP/IP address of the host. Press Enter to continue. 4. Add the line description name and the subnet mask for the interface. 5. Press Enter to create the TCP/IP interface. You now see a display similar to the one shown in Figure 254. Figure 254. Work with TCP/IP Interfaces -- AS5 Work with TCP/IP Interfaces System: As5 Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 9=Start 10=End Internet Subnet Line Line Opt Address Mask Description Type 10.1.0.2 255.255.254.0 TRNLINE *TRLAN 127.0.0.1 255.0.0.0 *LOOPBACK *NONE Bottom F3=Exit F5=Refresh F6=Print list F11=Display interface status F12=Cancel F17=Top F18=Bottom Multiple Subnets and DHCP Servers 291 6. Press F11 to view the status of the interface and verify that the status is active. 13.3.5 Configuring DHCP Server Support on AS5 To start the DHCP configuration wizard, perform the following steps: 1. Start the AS/400 Operations Navigator. 2. Click the system As5.mycompany.com to select it. 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. 
This starts the DHCP configuration wizard. 7. Click Next. 8. Select the following default lease time for the network that you want this DHCP server (As5.mycompany.com) to configure: 12 hours Click Next. 9. Select Yes to add a new subnet to the DHCP server. Start configuring Subnet B -- 10.1.0.0. Click Next. 10.Answer No to the question “Will this subnet manage twinax devices?” Click Next. 11.Select Define subnet based on entire subnet. Configure the entire subnet range. 12.Specify the information shown in Figure 255. Click Next. The sequence of the following steps might be different in your situation, depending on such factors as whether you already have a BootP table on your system and the navigation path you have chosen. Consider this as just an example. The important point is that you understand how to implement the planned configuration. Note 292 AS/400 TCP/IP DNS and DHCP Support Figure 255. Defining Subnet B -- 10.1.0.0 Based on Entire Subnet 13.Exclude the IP addresses of the hosts that you have already configured with permanent IP addresses in this subnet. In this scenario, those hosts are 10.1.0.1 and are reserved for routers if needed in the future -- AS1 (10.1.0.3), AS5 (10.1.0.2), AS9 (10.1.0.9), and LAN Printer (10.1.0.200). Figure 256. Exclude IP Addresses Already Assigned to Hosts in Subnet B -- 10.1.0.0 Click Next. 14.Accept Inherit the server’s default lease time (12 hours). 15.Answer Yes to the question “Would you like the DHCP Server to deliver gateway addresses to clients in this subnet?” 16.Add the gateway IP address that is the AS1 interface on this subnet. In this simple network, the AS/400 system AS1 acts as a gateway between the two Multiple Subnets and DHCP Servers 293 subnets. See Figure 257 on page 293. Figure 257. Configuring Subnet B’s Gateway Information Click Next. 17.Answer Yes to the question “Would you like DHCP to deliver domain name server addresses to clients in this subnet?” In this simple network, the same AS/400 system (AS1) provides DNS services for the whole network. AS5 is the secondary DNS. 18.Add the DNS IP address that is the DNS server on AS1. See Figure 258. Figure 258. Configuring DNS Information for Clients in Subnet B -- AS5 DHCP Server Click Next. 19.Answer No to the question “Would you like the DHCP server to deliver the domain name to the clients in this subnet?” 294 AS/400 TCP/IP DNS and DHCP Support 20.Answer No to the question “Would you like to set other options for this subnet?” Click Next. 21.Select Support any client for this subnet and click Next. 22.Answer Yes to the question “Do you want the DHCP server to start when TCP/IP starts?” 23.Answer No to the question “Do you want the DHCP server to start now?” 24.At the New DHCP Configuration Summary window, click Finish. Figure 259. New DHCP Configuration Summary -- Subnet_B_10.1.0.0 25. The DHCP Server Configuration for As5.mycompany.com is displayed in a window similar to the one in Figure 260. Figure 260. DHCP Server Configuration -- As5.mycompany.com 26.To add the subnet mask option for the subnet, perform the following steps: 1. Right-click Subnet Subnet_B_10.1.0.0 to open a context menu. Select Properties. 2. Click Options. Multiple Subnets and DHCP Servers 295 3. Select tag 1, Subnet mask, and click Add. 4. In the Subnet Mask field, specify the following mask for all clients in subnet B: 255.255.254.0 See Figure 261. Figure 261. Adding the Subnet Mask Option for Subnet B You have completed the configuration of Subnet B in DHCP server AS5. 
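Before starting the servers and examining their activity, note that the DHCP logs analyzed in 13.3.8 and 13.4.0.1 print IP-valued options, such as option 50 (requested IP address) and option 54 (server identifier), as 32-bit integers, for example 167837956 (0x0a010104). The short Python sketch below is purely a reading aid for those excerpts, not part of the AS/400 server; it converts such values back to dotted-decimal form:

import ipaddress

def decode_option_value(value):
    # The log prints the 4-byte option value as a decimal integer;
    # interpret it as an IPv4 address.
    return str(ipaddress.IPv4Address(value))

# Values taken from the log excerpts later in this chapter.
for value in (167840003, 167837956, 167837698, 167837699, 167837889):
    print("%d (0x%08x) -> %s" % (value, value, decode_option_value(value)))

For example, 167840003 decodes to 10.1.9.3 (the address offered to the IBM Network Station on subnet A) and 167837698 decodes to 10.1.0.2 (the AS5 server identifier).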
Figure 262 shows the options configured for Subnet_B_10.1.0.0 on DHCP server AS5. Figure 262. Configured Options for Subnet B -- AS5 DHCP Server 13.3.6 Start the DHCP Server Support on Both Systems You can start the DHCP server support on the AS/400 system from either the command line interface or the AS/400 Operations Navigator interface. 296 AS/400 TCP/IP DNS and DHCP Support To start the DHCP server support from the AS/400 command line interface, type STRTCPSVR *DHCP. To start the DHCP server support from the AS/400 Operations Navigator interface, perform the following steps: 1. Click the system As1.mycompany.com to select it. 2. Double-click Network. 3. Double-click Server. 4. Double-click OS/400. 5. Right-click DHCP. 6. Select Start. 13.3.7 Configuring DHCP Clients For DHCP client configuration, refer to Chapter 11.5, “Configuring DHCP Clients” on page 249. Figure 263 shows the configuration on the C1 client from WINIPCFG.EXE. Figure 263. PC Client C1 on Subnet B after Receiving TCP/IP Configuration from DHCP Server AS5 Multiple Subnets and DHCP Servers 297 13.3.8 Analyzing the DHCP Logs After implementing both DHCP servers as described in the previous sections, take a close look at the DHCP logs (dhcpsd.log file) on both servers. For information on how to create and use the DHCP log, refer to Chapter 17.2, “Starting and Reading the DHCP Logging Utility” on page 407. By analyzing the DHCP log on AS1, you notice that this server was receiving DHCPDISCOVER packets from clients in subnet B. Even when you do not configure a subnet B range of IP addresses to service from the AS1 DHCP server, AS1’s interface on subnet B (10.1.0.3) received the requests. As expected, the clients on subnet B received no response from AS1. From the AS1 DHCP log, you see the flow generated by the PC client C1 on network B (MAC address 0004ac946b53). Refer to Section 13.3.8.1, “DHCP Log on AS1” on page 297 for an example of such a log. On the other hand, you see the flow on the AS1 DHCP log that is generated by the Network Station DHCP client on subnet A (MAC address 000e5683796) starting with a DHCPDISCOVER served by AS1. See Section 13.3.8.1, “DHCP Log on AS1” on page 297. The AS5 DHCP log shows only DHCP packets from subnet B. In the log, include the flow generated by the PC client C1. Refer to Section 13.3.8.2, “DHCP Logs on AS5” on page 300. 13.3.8.1 DHCP Log on AS1 Client PC x’0004ac946b53’ From Interface 10.1.0.3 =======> DHCPDISCOVER with no reply generated <============== 02/12 19:51:37 : TRACE: .... legibleRequest: DHCP msg type DHCPDISCOVER 02/12 19:51:37 : TRACE: .. process_bootrequest: Request is self-consistent 02/12 19:51:37 : TRACE: Packet from client 6-0x0004ac946b53 was accepted by user exit verification processing. ...................... log truncated ............................... 02/12 19:51:37 : TRACE: ...... locateConfiguredClient: no ipaddress supplied, returning 02/12 19:51:37 : INFO: ................ pr_queryAddr: 10.1.0.3 has no profile in this server ...................... log truncated ............................... 02/12 19:51:39 : INFO: ............ pr_queryAddr: 10.1.0.3 has no profile in this server 02/12 19:51:39 : OBJERR: .......... am_addressClient: Failed to query address portfolio for 10.1.0.3 02/12 19:51:39 : WARNING:.......... am_addressClient: Request might have come from a (sub)net for which 02/12 19:51:39 : WARNING:.......... am_addressClient: this server is not configured to manage 02/12 19:51:39 : TRACE: ............ 
nonvolatilizeAR: function Entered 02/12 19:51:39 : TRACE: ............ nonvolatilizeCR: function Entered 298 AS/400 TCP/IP DNS and DHCP Support 02/12 19:51:39 : OBJERR: ........ am_reserve: Failed to address client 6-0x0004ac946b53 02/12 19:51:39 : OBJERR: .... processDISCOVER: Failed to have an address reserved for 6-0x0004ac946b53 02/12 19:51:39 : ACTION: .. reply_generator: No reply is generated 02/12 19:51:39 : TRACE: .. No reply is to be generated. IBM Network Station x’0000e56h3796’ on Interface 10.1.9.2 ========> DHCPDISCOVER from IBM Network Station <=========== 02/12 19:53:38 : TRACE: .. receiveMailbox: DHCP comm descriptor selected 02/12 19:53:38 : TRACE: .. receiveMailbox: recvfrom got 548 bytes. 02/12 19:53:38 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE 02/12 19:53:38 : TRACE: Size of incoming packet is: 548 02/12 19:53:38 : TRACE: .. process_bootrequest: function entered 02/12 19:53:38 : TRACE: .. process_bootrequest: received packet xid = b03 02/12 19:53:38 : INFO: .... primeOptions: Option: 53, length:1 02/12 19:53:38 : INFO: .... primeOptions: Option: 57, length:2 02/12 19:53:38 : INFO: .... primeOptions: Option: 77, length:12 02/12 19:53:38 : INFO: .... primeOptions: Option: 60, length:19 02/12 19:53:38 : TRACE: .... identifiableClient: function entered 02/12 19:53:38 : TRACE: .... identifiableClient: Using htype, hlen and chaddr to id client 02/12 19:53:38 : TRACE: .... legibleRequest: function entered 02/12 19:53:38 : TRACE: .... legibleRequest: DHCP msg type DHCPDISCOVER 02/12 19:53:38 : TRACE: Packet from client 6-0x0000e5683796 was accepted by user exit verification processing. ............................ log truncated ........................... 02/12 19:53:39 : TRACE: ............ pr_check_subnet_movement: clue = 10.1.9.2 02/12 19:53:39 : TRACE: ............ pr_check_subnet_movement: Comparing requested ip 10.1.9.3 & subnetmask 255.255.255.0 against subnet 10.1.9.0 ........................... log truncated ............................ 02/12 19:53:40 : INFO: .......... am_addressClient: Client 6-0x0000e5683796 had 10.1.9.3 mapped previously ........................... log truncated ............................ ==========> DHCPOFFER to IBM Network Station <=============== 02/12 19:53:40 : INFO: .. generate_bootreply: Generating a DHCPOFFER reply ........................... log truncated ........................... 02/12 19:53:40 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 1. 02/12 19:53:40 : TRACE: .. transmitMailbox: transmitting to (10.1.9.3 #68) Multiple Subnets and DHCP Servers 299 02/12 19:53:40 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 0. =========> DHCPREQUEST from IBM Network Station <============ 02/12 19:53:46 : TRACE: .. receiveMailbox: DHCP comm descriptor selected 02/12 19:53:46 : TRACE: .. receiveMailbox: recvfrom got 548 bytes. 02/12 19:53:46 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE 02/12 19:53:46 : TRACE: Size of incoming packet is: 548 02/12 19:53:46 : TRACE: .. process_bootrequest: function entered 02/12 19:53:46 : TRACE: .. process_bootrequest: received packet xid = b03 02/12 19:53:46 : INFO: .... primeOptions: Option: 53, length:1 02/12 19:53:46 : INFO: .... primeOptions: Option: 50, length:4 value: 167840003 (0x0a010903) 02/12 19:53:46 : INFO: .... primeOptions: Option: 54, length:4 value: 167840002 (0x0a010902) 02/12 19:53:46 : INFO: .... primeOptions: Option: 57, length:2 02/12 19:53:46 : INFO: .... primeOptions: Option: Parameter Request List, length:12 02/12 19:53:46 : INFO: .... 
primeOptions: Option 66 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 67 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 3 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 6 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 2 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 4 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 12 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 28 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 31 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 49 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 48 requested 02/12 19:53:46 : INFO: .... primeOptions: Option 15 requested 02/12 19:53:46 : INFO: .... primeOptions: Option: 77, length:12 02/12 19:53:46 : INFO: .... primeOptions: Option: 60, length:19 02/12 19:53:46 : TRACE: .... identifiableClient: function entered 02/12 19:53:46 : TRACE: .... identifiableClient: Using htype, hlen and chaddr to id client 02/12 19:53:46 : TRACE: .... legibleRequest: function entered 02/12 19:53:46 : TRACE: .... legibleRequest: DHCP msg type DHCPREQUEST 02/12 19:53:46 : TRACE: .. process_bootrequest: Request is self-consistent 02/12 19:53:46 : TRACE: Packet from client 6-0x0000e5683796 was accepted by user exit verification processing. ........................... log truncated ............................ ===========> DHCPACK to IBM Network Station <============= 02/12 19:53:46 : TRACE: .... processREQUEST: Offer was selected by client 6-0x0000e5683796 02/12 19:53:46 : TRACE: ...... addressManager: Function entered ........................... log truncated ............................. 02/12 19:53:46 : TRACE: .... processREQUEST: Address 10.1.9.3 has been bound to 6-0x0000e5683796 02/12 19:53:46 : INFO: .. generate_bootreply: Generating a DHCPACK reply 300 AS/400 TCP/IP DNS and DHCP Support ...........................log truncated............................. 02/12 19:53:46 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 1. 02/12 19:53:46 : TRACE: .. transmitMailbox: transmitting to (10.1.9.3 #68) 02/12 19:53:46 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 0. ........................... log truncated............................. 13.3.8.2 DHCP Logs on AS5 Client PC x’0004ac946b53’ From Interface 10.1.0.2 ===============> DHCPDISCOVER from PC <================= 02/12 19:54:02 : TRACE: .. receiveMailbox: DHCP comm descriptor selected 02/12 19:54:02 : TRACE: .. receiveMailbox: recvfrom got 300 bytes. 02/12 19:54:02 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE 02/12 19:54:02 : TRACE: Size of incoming packet is: 300 02/12 19:54:02 : TRACE: .. process_bootrequest: function entered 02/12 19:54:02 : TRACE: .. process_bootrequest: received packet xid = 789f789f 02/12 19:54:02 : INFO: .... primeOptions: Option: 53, length:1 02/12 19:54:02 : INFO: .... primeOptions: Option: 61, length:7 02/12 19:54:02 : INFO: .... primeOptions: Option: 50, length:4 value: 167837956 (0x0a010104) 02/12 19:54:02 : INFO: .... primeOptions: Option: 12, length:5 02/12 19:54:02 : TRACE: .... identifiableClient: function entered 02/12 19:54:02 : TRACE: .... identifiableClient: DHCP option Client-identifier specified 02/12 19:54:02 : TRACE: .... legibleRequest: function entered 02/12 19:54:02 : TRACE: .... legibleRequest: DHCP msg type DHCPDISCOVER 02/12 19:54:02 : TRACE: .. 
process_bootrequest: Request is self-consistent 02/12 19:54:02 : TRACE: Packet from client 6-0x0004ac946b53 was accepted by user exit verification processing. ...................... log truncated ................................. 02/12 19:54:02 : TRACE: ........ pr_queryAddr: clue = [0x0a010104], 167837956 02/12 19:54:02 : TRACE: ........ pr_queryAddr: netaddr = 10.0.0.0 02/12 19:54:02 : TRACE: ........ pr_queryAddr: hostaddr = 0.1.1.4 02/12 19:54:02 : TRACE: ...... locateConfiguredClient: look for client match in this subnet ...................... log truncated ................................. ===================> DHCPOFFER to PC <===================== 02/12 19:54:02 : INFO: generate_bootreply: Generating a DHCPOFFER reply ...................... log truncated ................................. 02/12 19:54:02 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 1. 02/12 19:54:02 : TRACE: .. transmitMailbox: transmitting to (10.1.1.4 #68) Multiple Subnets and DHCP Servers 301 02/12 19:54:02 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 0. ==================> DHCPREQUEST from PC <================= 02/12 19:54:02 : TRACE: .. receiveMailbox: DHCP comm descriptor selected 02/12 19:54:02 : TRACE: .. receiveMailbox: recvfrom got 300 bytes. 02/12 19:54:02 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE 02/12 19:54:02 : TRACE: Size of incoming packet is: 300 02/12 19:54:02 : TRACE: .. process_bootrequest: function entered 02/12 19:54:02 : TRACE: .. process_bootrequest: received packet xid = b7a0b7a0 02/12 19:54:02 : INFO: .... primeOptions: Option: 53, length:1 02/12 19:54:02 : INFO: .... primeOptions: Option: 61, length:7 02/12 19:54:02 : INFO: .... primeOptions: Option: 50, length:4 value: 167837956 (0x0a010104) 02/12 19:54:02 : INFO: .... primeOptions: Option: 54, length:4 value: 167837698 (0x0a010002) 02/12 19:54:02 : INFO: .... primeOptions: Option: 12, length:5 02/12 19:54:02 : INFO: .... primeOptions: Option: Parameter Request List, length:7 02/12 19:54:02 : INFO: .... primeOptions: Option 1 requested 02/12 19:54:02 : INFO: .... primeOptions: Option 3 requested 02/12 19:54:02 : INFO: .... primeOptions: Option 15 requested 02/12 19:54:02 : INFO: .... primeOptions: Option 6 requested 02/12 19:54:02 : INFO: .... primeOptions: Option 44 requested 02/12 19:54:02 : INFO: .... primeOptions: Option 46 requested 02/12 19:54:02 : INFO: .... primeOptions: Option 47 requested 02/12 19:54:02 : INFO: .... primeOptions: Option: 43, length:4 02/12 19:54:02 : TRACE: .... identifiableClient: function entered 02/12 19:54:02 : TRACE: .... identifiableClient: DHCP option Client-identifier specified 02/12 19:54:02 : TRACE: .... legibleRequest: function entered 02/12 19:54:02 : TRACE: .... legibleRequest: DHCP msg type DHCPREQUEST 02/12 19:54:02 : TRACE: .. process_bootrequest: Request is self-consistent 02/12 19:54:03 : TRACE: Packet from client 6-0x0004ac946b53 was accepted by user exit verification processing. ...................... log truncated ................................. 02/12 19:54:03 : TRACE: ..02/12 19:54:03 : TRACE: ........ pr_queryAddr: clue = [0x0a010002], 167837698 02/12 19:54:03 : TRACE: ........ pr_queryAddr: netaddr = 10.0.0.0 02/12 19:54:03 : TRACE: ........ pr_queryAddr: hostaddr = 0.1.0.2 02/12 19:54:03 : TRACE: ...... pr_check_subnet_movement: Comparing requested ip 10.1.1.4 & subnetmask 255.255.254.0 against subnet 10.1.0.0 ...................... log truncated ................................. 02/12 19:54:03 : TRACE: ...... 
locateConfiguredClient: look for client match in this subnet 02/12 19:54:03 : TRACE: ...... locateConfiguredClient: look for client match in global clients 02/12 19:54:03 : TRACE: .... processREQUEST: Offer was selected by client 6-0x0004ac946b53 ...................... log truncated ................................. 302 AS/400 TCP/IP DNS and DHCP Support 02/12 19:54:03 : TRACE: .... processREQUEST: Address 10.1.1.4 has been bound to 6-0x0004ac946b53 ...................... log truncated ................................. ===================> DHCPACK to a PC <==================== 02/12 19:54:03 : INFO: .. generate_bootreply: Generating a DHCPACK reply ...................... log truncated ................................. 02/12 19:54:03 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 1. 02/12 19:54:03 : TRACE: .. transmitMailbox: transmitting to (10.1.1.4 #68) 02/12 19:54:03 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 0. 02/12 19:54:03 : TRACE: .. processNotifyBindUsrExits: function entered 02/12 19:54:03 : TRACE: .. processNotifyBindUsrExits: Initiating user exit program ADDRESS-BIND notification processing. 13.3.9 Conclusion After analyzing the logs, you can determine that a better approach is having the AS1 function as the DHCP server for the whole network, servicing both subnets. Since AS1 is connected to both the networks and there is no way to shut down AS1’s interface on subnet B from listening to DHCP requests, it is more productive to have AS1 servicing the whole network. Subnet B was configured on the AS1 DHCP server as described in Section 13.3.5, “Configuring DHCP Server Support on AS5” on page 291. This scenario did not implement a DHCP backup server. One (provided you have enough IP addresses available) option is to configure a range of IP addresses on AS1 to service the clients on subnet B as a primary DHCP server. You can configure another range of subnet IP addresses on AS5 to use it as a backup DHCP server only for subnet B in the event that AS1 becomes unavailable. 13.4 Configuring Subnet B on AS1 To add subnet B to the DHCP server configuration on AS1, perform the following steps: You cannot configure the same range of IP addresses for the same subnet in the primary and back-up DHCP servers. Even if you start DHCP services on the back-up AS/400 system only if the primary server is unavailable, there is no control over what IP addresses from the range are already in use. The only way to guarantee that the back-up DHCP server does not give away already leased IP addresses is to configure totally different IP address ranges in both DHCP servers. Note Multiple Subnets and DHCP Servers 303 1. Use Operations Navigator to get to the DHCP configuration on AS1. 2. Right-click Global to open a context menu. 3. Select Add new subnet (see Figure 264). Figure 264. Adding Subnet B to AS1 4. The New DHCP subnet wizards window displays. Click Next. 5. Answer No to the question “Will this subnet manage twinax devices?” Click Next. 6. Select Define subnet based on entire subnet. Click Next. 7. Specify your subnet information as shown in Figure 265. Click Next. Figure 265. Define Subnet B -- 10.1.0.0 Based on Entire Subnet 304 AS/400 TCP/IP DNS and DHCP Support 8. Exclude the IP addresses of the hosts in this subnet that you have already configured, as shown in Figure 244 on page 279. Specify the values shown in Figure 266. Figure 266. Exclude IP Addresses Already Assigned to Hosts on Subnet B 9. 
Accept Inherit the server’s default lease time (12 hours). 10. Answer Yes to the question “Would you like the DHCP server to deliver gateway addresses to clients in this subnet?” 11. Specify the gateway IP address, which is the AS1 interface on this subnet (see Figure 267). Figure 267. Configuring Subnet B Gateway Information Click Next. 12. Answer Yes to the question “Would you like the DHCP server to deliver domain name server addresses to the clients in this subnet?” AS1 is the DNS server in the sample network. 13. Add the DNS IP addresses: the DNS server running on AS1 and, as a secondary DNS, AS5 (see Figure 268). Figure 268. Configuring DNS Information for Clients in Subnet B Click Next. 14. Answer No to the question “Would you like the DHCP server to deliver a domain name to clients in this subnet?” Click Next. 15. Answer No to the question “Would you like to set other options for this subnet?” 16. Answer Yes to the question “Do you want the DHCP server to start when TCP/IP starts?” 17. Answer No to the question “Do you want the DHCP server to start now?” 18. At the New DHCP Subnet Summary window, click Finish. Figure 269. New DHCP Subnet Summary -- Subnet B on AS1 19. To add the subnet mask option for the subnet, perform the following steps: 1. Right-click Subnet Subnet_B_10.1.0.0 to open the context menu. Select Properties. 2. Click Options. 3. Select tag 1, Subnet mask, and click Add. 4. In the Subnet Mask field, specify the following mask for all clients in subnet B: 255.255.254.0 See Figure 270. Figure 270. Adding the Subnet Mask Option for Subnet B 20. The DHCP server configuration window for As1.mycompany.com is displayed. Figure 271 shows subnet B options on the AS1 DHCP server. Figure 271. DHCP Server -- As1.mycompany.com -- Subnet B Options Figure 272 shows the configuration of the PC client C1 on subnet B after receiving the configuration options from AS1. Figure 272. Client C1 on Subnet B after Receiving TCP/IP Configuration from DHCP Server AS1 Section 13.4.0.1, “DHCP Log on AS1” on page 308, shows the DHCP log on AS1 after configuring subnet B. This time, AS1 serves C1 through the 10.1.0.3 interface. 13.4.0.1 DHCP Log on AS1 Client PC x’0004ac946b53’ From Interface 10.1.0.3 ================> DHCP Server Startup <==================== ...................... log truncated ............................... : INFO: DHCP Server Initialized at Fri Feb 13 16:05:47 1998 ==> DHCPDISCOVER from the PC Client x’0004ac946b53’ <======== 02/13 16:10:08 : TRACE: .. receiveMailbox: DHCP comm descriptor selected 02/13 16:10:08 : TRACE: .. receiveMailbox: recvfrom got 300 bytes. 02/13 16:10:08 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE 02/13 16:10:08 : TRACE: Size of incoming packet is: 300 02/13 16:10:08 : TRACE: .. process_bootrequest: function entered 02/13 16:10:08 : TRACE: .. process_bootrequest: received packet xid = 85a085a 02/13 16:10:08 : INFO: .... primeOptions: Option: 53, length:1 02/13 16:10:08 : INFO: .... primeOptions: Option: 61, length:7 02/13 16:10:08 : INFO: .... primeOptions: Option: 50, length:4 value: 167837889 (0x0a0100c1) 02/13 16:10:08 : INFO: .... primeOptions: Option: 12, length:5 02/13 16:10:08 : TRACE: .... identifiableClient: function entered 02/13 16:10:08 : TRACE: .... identifiableClient: DHCP option Client-identifier specified 02/13 16:10:08 : TRACE: ....
legibleRequest: function entered 02/13 16:10:08 : TRACE: .... legibleRequest: DHCP msg type DHCPDISCOVER 02/13 16:10:08 : TRACE: .. process_bootrequest: Request is self-consistent 02/13 16:10:08 : TRACE: Packet from client 6-0x0004ac946b53 was accepted by user exit verification processing. 02/13 16:10:08 : TRACE: .. reply_generator: function entered 02/13 16:10:08 : TRACE: .... processDISCOVER: function entered 02/13 16:10:08 : TRACE: ...... locateExchange: function entered 02/13 16:10:08 : TRACE: ...... newExchangeBlock: function entered 02/13 16:10:08 : TRACE: ...... locateConfiguredClient: function entered 02/13 16:10:08 : TRACE: ...... locateConfiguredClient: no ipaddress supplied, returning 02/13 16:10:08 : TRACE: ...... addressManager: Function entered 02/13 16:10:08 : TRACE: ........ am_queryClient: Function entered 02/13 16:10:08 : TRACE: .......... am_queryMapper: function Entered 02/13 16:10:08 : TRACE: ............ locateClientRecord: function Entered 02/13 16:10:08 : TRACE: ............ locateClientRecord: Located client 6-0x0004ac946b53 in client records Multiple Subnets and DHCP Servers 309 02/13 16:10:08 : WARNING:.......... am_queryMapper: Client 6-0x0004ac946b53 has no address mapped to it, status=2 02/13 16:10:08 : TRACE: ............ locateConfiguredClient: function entered 02/13 16:10:08 : TRACE: .............. pr_queryAddr: function entered 02/13 16:10:08 : TRACE: .............. pr_queryAddr: clue = [0x0a010003], 167837699 02/13 16:10:08 : TRACE: .............. pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:08 : TRACE: .............. pr_queryAddr: hostaddr = 0.1.0.3 02/13 16:10:08 : TRACE: ............ locateConfiguredClient: look for client match in this subnet 02/13 16:10:08 : TRACE: ............ locateConfiguredClient: look for client match in global clients 02/13 16:10:08 : TRACE: ........ am_queryClient: Client 6-0x0004ac946b53 is known to address mapper, status=2 02/13 16:10:08 : TRACE: .... processDISCOVER: binder.subnet [0x00000000] 02/13 16:10:08 : TRACE: .... processDISCOVER: AM_STATUS_AUTHENTIC 02/13 16:10:08 : TRACE: ...... isAddressInUse: Function Entered 02/13 16:10:10 : TRACE: ...... isAddressInUse: IP address 10.1.0.193, not in use. rc=-26758468 02/13 16:10:10 : TRACE: ...... pr_check_subnet_movement: function entered 02/13 16:10:10 : TRACE: ...... pr_check_subnet_movement: clue = 10.1.0.3 02/13 16:10:10 : TRACE: ........ pr_queryAddr: function entered 02/13 16:10:10 : TRACE: ........ pr_queryAddr: clue = [0x0a010003], 167837699 02/13 16:10:10 : TRACE: ........ pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:10 : TRACE: ........ pr_queryAddr: hostaddr = 0.1.0.3 02/13 16:10:10 : TRACE: ...... pr_check_subnet_movement: Comparing requested ip 10.1.0.193 & subnetmask 255.255.254.0 against subnet 10.1.0.0 02/13 16:10:10 : TRACE: ...... addressManager: Function entered ...................... log truncated ............................... 02/13 16:10:10 : TRACE: ............ locateAddressRecord: function Entered 02/13 16:10:10 : INFO: .......... am_addressClient: Client 6-0x0004ac946b53 suggested 10.1.0.193 is in range 02/13 16:10:10 : INFO: .......... am_addressClient: Client 6-0x0004ac946b53 had no previous mapping, getting one 02/13 16:10:10 : TRACE: .......... indexAddressRecord: function Entered 02/13 16:10:10 : TRACE: .......... nonvolatilizeAR: function Entered 02/13 16:10:10 : TRACE: .......... nonvolatilizeCR: function Entered 02/13 16:10:10 : ACTION: .... processDISCOVER: Address 10.1.0.193 has been reserved 02/13 16:10:10 : TRACE: ...... 
pr_new_menu : Function entered 02/13 16:10:10 : TRACE: ...... pr_fill_menu_net: function entered 02/13 16:10:10 : TRACE: ........ pr_queryAddr: function entered 02/13 16:10:10 : TRACE: ........ pr_queryAddr: clue = [0x0a0100c1], 167837889 02/13 16:10:10 : TRACE: ........ pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:10 : TRACE: ........ pr_queryAddr: hostaddr = 0.1.0.193 02/13 16:10:10 : TRACE: ........ locateAddressRecord: function Entered ...................... log truncated ............................... 02/13 16:10:10 : TRACE: ...... newReplyPacket: function entered 02/13 16:10:10 : TRACE: ...... enqueueExchange: function entered 02/13 16:10:10 : TRACE: .. generate_bootreply: function entered 310 AS/400 TCP/IP DNS and DHCP Support ====> Generating a DHCPOFFER to the Client x’0004ac946b53’ <==== 02/13 16:10:10 : INFO: generate_bootreply: Generating a DHCPOFFER reply 02/13 16:10:10 : TRACE: .... locateConfiguredClient: function entered 02/13 16:10:10 : TRACE: ...... pr_queryAddr: function entered 02/13 16:10:10 : TRACE: ...... pr_queryAddr: clue = [0x0a0100c1], 167837889 02/13 16:10:10 : TRACE: ...... pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:10 : TRACE: ...... pr_queryAddr: hostaddr = 0.1.0.193 02/13 16:10:10 : TRACE: .... locateConfiguredClient: look for client match in this subnet 02/13 16:10:10 : TRACE: .... locateConfiguredClient: look for client match in global clients ...................... log truncated ............................... 02/13 16:10:10 : TRACE: .... pr_queryAddr: function entered 02/13 16:10:10 : TRACE: .... pr_queryAddr: clue = [0x0a0100c1], 167837889 02/13 16:10:10 : TRACE: .... pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:10 : TRACE: .... pr_queryAddr: hostaddr = 0.1.0.193 02/13 16:10:10 : TRACE: .... locateAddressRecord: function Entered 02/13 16:10:10 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 1. 02/13 16:10:10 : TRACE: transmitMailbox: transmitting to (10.1.0.193 #68) 02/13 16:10:10 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 0. ======> DHCPREQUEST from the Client x’0004ac946b53’ <======= 02/13 16:10:11 : TRACE: .. receiveMailbox: DHCP comm descriptor selected 02/13 16:10:11 : TRACE: .. receiveMailbox: recvfrom got 300 bytes. 02/13 16:10:11 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE 02/13 16:10:11 : TRACE: Size of incoming packet is: 300 02/13 16:10:11 : TRACE: .. process_bootrequest: function entered 02/13 16:10:11 : TRACE: .. process_bootrequest: received packet xid = b5b0b5b 02/13 16:10:11 : INFO: .... primeOptions: Option: 53, length:1 02/13 16:10:11 : INFO: .... primeOptions: Option: 61, length:7 02/13 16:10:11 : INFO: .... primeOptions: Option: 50, length:4 value: 167837889 (0x0a0100c1) 02/13 16:10:11 : INFO: .... primeOptions: Option: 54, length:4 value: 167837699 (0x0a010003) 02/13 16:10:11 : INFO: .... primeOptions: Option: 12, length:5 02/13 16:10:11 : INFO: .... primeOptions: Option: Parameter Request List, length:7 02/13 16:10:11 : INFO: .... primeOptions: Option 1 requested 02/13 16:10:11 : INFO: .... primeOptions: Option 3 requested 02/13 16:10:11 : INFO: .... primeOptions: Option 15 requested 02/13 16:10:11 : INFO: .... primeOptions: Option 6 requested 02/13 16:10:11 : INFO: .... primeOptions: Option 44 requested 02/13 16:10:11 : INFO: .... primeOptions: Option 46 requested 02/13 16:10:11 : INFO: .... primeOptions: Option 47 requested 02/13 16:10:11 : INFO: .... primeOptions: Option: 43, length:4 02/13 16:10:11 : TRACE: .... identifiableClient: function entered 02/13 16:10:11 : TRACE: .... 
identifiableClient: DHCP option Client-identifier specified 02/13 16:10:11 : TRACE: .... legibleRequest: function entered 02/13 16:10:11 : TRACE: .... legibleRequest: DHCP msg type DHCPREQUEST Multiple Subnets and DHCP Servers 311 02/13 16:10:11 : TRACE: .. process_bootrequest: Request is self-consistent 02/13 16:10:11 : TRACE: Packet from client 6-0x0004ac946b53 was accepted by user exit verification processing. 02/13 16:10:11 : TRACE: .. reply_generator: function entered 02/13 16:10:11 : TRACE: .... processREQUEST: function entered ...................... log truncated ............................... 02/13 16:10:11 : TRACE: ........ pr_queryAddr: clue = [0x0a010003], 167837699 02/13 16:10:11 : TRACE: ........ pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:11 : TRACE: ........ pr_queryAddr: hostaddr = 0.1.0.3 02/13 16:10:11 : TRACE: ...... pr_check_subnet_movement: Comparing requested ip 10.1.0.193 & subnetmask 255.255.254.0 against subnet 10.1.0.0 02/13 16:10:11 : TRACE: ...... locateConfiguredClient: function entered ...................... log truncated ............................... ======> Offer was selected by the Client x’0004ac946b53’ <====== 02/13 16:10:11 : TRACE: .... processREQUEST: Offer was selected by client 6-0x0004ac946b53 02/13 16:10:11 : TRACE: ...... addressManager: Function entered 02/13 16:10:11 : TRACE: ........ am_commit: Function entered 02/13 16:10:11 : TRACE: .......... locateClientRecord: function Entered 02/13 16:10:11 : TRACE: .......... locateClientRecord: Located client 6-0x0004ac946b53 in client records 02/13 16:10:11 : TRACE: .......... indexAddressRecord: function Entered 02/13 16:10:11 : TRACE: .......... nonvolatilizeAR: function Entered 02/13 16:10:11 : TRACE: .......... nonvolatilizeCR: function Entered 02/13 16:10:11 : TRACE: .... processREQUEST: Address 10.1.0.193 has been bound to 6-0x0004ac946b53 02/13 16:10:11 : TRACE: ...... pr_new_menu : Function entered 02/13 16:10:11 : TRACE: ...... pr_fill_menu_net: function entered 02/13 16:10:12 : TRACE: ........ pr_queryAddr: function entered 02/13 16:10:12 : TRACE: ........ pr_queryAddr: clue = [0x0a0100c1], 167837889 02/13 16:10:12 : TRACE: ........ pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:12 : TRACE: ........ pr_queryAddr: hostaddr = 0.1.0.193 02/13 16:10:12 : TRACE: ........ locateAddressRecord: function Entered ...................... log truncated ............................... 02/13 16:10:12 : TRACE: .......... pr_queryAddr: clue = [0x0a0100c1], 167837889 02/13 16:10:12 : TRACE: .......... pr_queryAddr: netaddr = 10.0.0.0 02/13 16:10:12 : TRACE: .......... pr_queryAddr: hostaddr = 0.1.0.193 02/13 16:10:12 : TRACE: ........ locateConfiguredClient: look for client match in this subnet 02/13 16:10:12 : TRACE: ........ locateConfiguredClient: look for client match in global clients 02/13 16:10:12 : TRACE: ........ pr_queryAddr: function entered 02/13 16:10:12 : TRACE: ........ pr_queryAddr: clue = [0x0a0100c1], 167837889 312 AS/400 TCP/IP DNS and DHCP Support ...................... log truncated ............................... 02/13 16:10:12 : TRACE: ...... newReplyPacket: function entered =======> DHCPACK to the Client x’0004ac946b53’ <======= 02/13 16:10:12 : TRACE: .. generate_bootreply: function entered 02/13 16:10:12 : INFO: .. generate_bootreply: Generating a DHCPACK reply 02/13 16:10:12 : TRACE: .... locateConfiguredClient: function entered ...................... log truncated ............................... 002/13 16:10:12 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 1. 
02/13 16:10:12 : TRACE: .. transmitMailbox: transmitting to (10.1.0.193 #68) 02/13 16:10:12 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 0. 02/13 16:10:12 : TRACE: .. processNotifyBindUsrExits: function entered 02/13 16:10:12 : TRACE: .. processNotifyBindUsrExits: Initiating user exit program ADDRESS-BIND notification processing. 02/13 16:10:45 : TRACE: .. event_timeout: function entered 02/13 16:10:45 : TRACE: .. event_timeout: Garbage collection (every 60 seconds). 02/13 16:10:45 : TRACE: .... am_removeExpiredLeases: function Entered 02/13 16:10:45 : TRACE: .... update_statistic_list: function Entered 13.5 Summary This scenario showed, through the DHCP logs, that a DHCP server listens for DHCP packets on all of its interfaces. This is true even if the server is not configured to serve the range of IP addresses from which the DHCP packets originate. You first configured two DHCP servers, one per subnet in the network. You then changed to a single DHCP server that was connected to both subnets. This server serviced the whole network because a range of IP addresses was configured for each subnet. Although this scenario did not implement a backup DHCP server, it discussed how AS5 could be used to back up subnet B. Chapter 14. Multiple Subnets, DHCP Servers, and Relay Agents As your network grows, the number of subnets grows with it. You can still keep configuration administration centralized by using BOOTP/DHCP Relay Agent support in routers or AS/400 systems. This chapter shows you how to configure the AS/400 BOOTP/DHCP Relay Agent and how it works together with the AS/400 DHCP server. This chapter also introduces a Microsoft NT BOOTP/DHCP Relay Agent and briefly outlines how to configure the NT system. In TCP/IP networks, broadcast messages are inherently not allowed to leave their own subnet or LAN. This is part of the TCP/IP architecture, and it ensures that the network does not become flooded with broadcast messages. A router or gateway that recognizes a broadcast message usually examines the packet to see whether it is relevant. If it is not, the packet is discarded. The BOOTP/DHCP Relay Agent overcomes this problem by intercepting the broadcast message and forwarding the packet directly to a preconfigured destination address. Most routers today have BOOTP/DHCP Relay Agent support, and so does the AS/400 system in V4R2. The BOOTP/DHCP Relay Agent intercepts broadcast messages that arrive on port 67. It then places its own IP address in the DHCPDISCOVER packet and forwards it directly (unicast) to the DHCP server. The DHCP server sends the DHCPOFFER directly back to the relay agent, and the relay agent broadcasts the offer back onto the network, where the client picks it up. You can define multiple DHCP servers to which the BOOTP/DHCP Relay Agent sends DHCP requests. The BOOTP/DHCP Relay Agent sends the same packet to each of the DHCP servers that you configure (a separate unicast copy for each server). You can configure the BOOTP/DHCP Relay Agent to send the DHCP message to another BOOTP/DHCP Relay Agent, which in turn forwards it to another BOOTP/DHCP Relay Agent, and so on. You can limit the number of hops that the DHCP message makes; routers do not increase the hop count, only BOOTP/DHCP Relay Agents do. A relay agent stops forwarding the message once it has reached the maximum hop count. There is also a feature that allows you to delay the forwarding of a DHCP message by a BOOTP/DHCP Relay Agent. You can use this delay to bias the choice of DHCP server: forward to the primary server without delay, and delay the copy sent to the backup server. The delay increases the time it takes for the message to reach the backup DHCP server, so the primary server normally answers first.
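The relay behavior just described can be sketched in a few lines of code. The sketch below is a conceptual illustration only, not the OS/400 relay agent implementation: the helper send_udp and the packet fields (xid, hops, giaddr) are simplified stand-ins, while the destination addresses, delays, and hop limit mirror the AS5 relay agent configuration used later in this chapter.

import time

# Conceptual sketch of BOOTP/DHCP relay forwarding; not the OS/400 implementation.
# The destination list mirrors the AS5 configuration in this chapter: the primary
# DHCP server is contacted immediately, the backup only after a 5000 ms delay.
DESTINATIONS = [
    {"server": "10.1.0.2", "delay_ms": 0},     # primary DHCP server (AS1)
    {"server": "10.1.0.3", "delay_ms": 5000},  # backup DHCP server (AS2)
]
MAX_HOPS = 4

def send_udp(packet, server, port):
    # Stand-in for a real UDP send to port 67 on the DHCP server.
    print(f"forwarding xid={packet['xid']:#x} to {server}:{port}")

def relay(packet, receiving_interface):
    """Forward a broadcast BOOTREQUEST that arrived on UDP port 67."""
    if packet["hops"] >= MAX_HOPS:
        return                                  # hop limit reached: stop forwarding
    packet["hops"] += 1                         # only relay agents increment the hop count
    if packet["giaddr"] == "0.0.0.0":
        packet["giaddr"] = receiving_interface  # the server's clue for pool selection
    for dest in DESTINATIONS:
        time.sleep(dest["delay_ms"] / 1000)     # transmission delay biases the primary
        send_udp(packet, dest["server"], port=67)

relay({"xid": 0x85A085A, "hops": 0, "giaddr": "0.0.0.0"}, "10.1.2.3")

Running the sketch simply prints the two forwards roughly five seconds apart, which is the bias technique used in this scenario.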
14.1 Scenario Overview This scenario provides DHCP services to clients that are connected to multiple LANs. The network is now more complex than the one discussed in Chapter 13, “Multiple Subnets and DHCP Servers” on page 277. Figure 273 on page 314 shows a high-level view of the network used in this scenario. Configure two BOOTP/DHCP Relay Agents. One is an AS/400 relay agent, and one is an NT relay agent. These relay agents intercept BOOTP and DHCP broadcasts and forward them to the DHCP server. You are not using BOOTP clients, but because both BOOTP and DHCP use the same port for incoming and outgoing transmissions, the relay agents perform the same task for both protocols without intervention. The AS/400 system AS1 is the primary DHCP server for all three subnets, and the DHCP server running on AS2 is the backup DHCP server for the same three subnets. Configure AS5 as a BOOTP/DHCP Relay Agent that always forwards to both DHCP servers, and place a delay on forwarding to the backup DHCP server. The NT server (BOOTP/DHCP Relay Agent R2) is also a BOOTP/DHCP Relay Agent on a different subnet; it always forwards to the BOOTP/DHCP Relay Agent AS5. Figure 273. Multi-LAN and Multi-Subnet Network with DHCP Server, DHCP Relay Agents, and Routers (The diagram shows DHCP server AS1 with a primary address pool of 50% of subnet 1, 70% of subnet 2, and 50% of subnet 3; DHCP server AS2 with a backup address pool of 50% of subnet 1, 30% of subnet 2, and 50% of subnet 3; BOOTP/DHCP relay agent AS5, which always relays to both DHCP servers; BOOTP/DHCP relay agent R2, which always relays to AS5; the primary and secondary DNS servers; the router; and clients C1, C2, and C3 on subnets 1, 2, and 3.) Use an addressing scheme that allows all clients to attach even if one of the DHCP servers fails. The exception to this is subnet 2, which is split in a 70/30 manner. 14.1.1 Scenario Objectives This scenario has the following objectives: 1. Define and configure multiple DHCP servers. 2. Define and configure multiple BOOTP/DHCP Relay Agents. 3. Introduce a technique to delay DHCP messages reaching the backup DHCP server. 4. Provide both full and partial DHCP backups in the event of failure. 5. Show how the relay agent works with the DHCP server. 6. Use a multiple subnet network. 14.1.2 Scenario Advantages The advantages of this scenario include: • Full and partial DHCP backup support: This scenario shows a method that you can use for DHCP backup support by using the BOOTP/DHCP Relay Agent. The TCP/IP addressing schemes are a concern because they limit complete, full-DHCP backup support. A limited range was chosen to represent a realistic situation. • The ability to keep the network administration centralized: A backup DHCP server means that the DHCP configuration is split between two systems. If no DHCP backup is used, however, the BOOTP/DHCP Relay Agent forwards all messages to the single DHCP server, keeping the administration centralized. • The ease with which you can configure the AS/400 system as a BOOTP/DHCP Relay Agent: This is a simple task if you understand your network topology. • The flexibility of using a BOOTP/DHCP Relay Agent: The BOOTP/DHCP Relay Agent allows you to delay the arrival of DHCP messages at the backup DHCP server.
This allows the primary DHCP server to respond first. • No need to change your router configuration to support DHCP or BOOTP in your network: Router configuration can be complex, and sometimes it requires a network outage. You can start, stop, and change the AS/400 relay agent without interrupting system availability. • Subnet growth within your network: As your company grows and you attach new networks to your AS/400 system, you do not need to purchase expensive network equipment. Note: While the AS/400 system in V4R2 performs many of the same tasks that dedicated, intelligent network nodes can perform, it is still a multi-user application system. As such, it requires system maintenance such as backups, storage reclamation, and periodic IPLs. Generally, routers require only a backup of the initial configuration. They then perform only the task for which they have been optimized. 14.1.3 Scenario Disadvantages The disadvantages of this scenario are that: • The NT BOOTP/DHCP Relay Agent (R2) is a single point of failure for subnet 3. • Unavailability of the AS/400 BOOTP/DHCP Relay Agent AS5 causes an outage for clients on subnets 2 and 3 that are trying to reach the DHCP server AS1. • The network has many single points of failure, and there are no backups for the routers between subnets. • You cannot run both the DHCP server and the BOOTP/DHCP Relay Agent on the same system simultaneously. 14.1.4 Scenario Network Configuration Figure 274 shows the network detail for this scenario. Note that the network 10.1.0.0 has a mask of 23 contiguous bits, allowing a range of 510 TCP/IP addresses. Figure 274. Scenario Network Topology (The diagram shows the router; the primary DHCP server As1.mycompany.com, which is also the secondary DNS server; the backup DHCP server As2.mycompany.com; the relay agent As5.mycompany.com, which is also the primary DNS server; the NT BOOTP/DHCP Relay Agent R2.mycompany.com; the subnets 10.1.0.0 with mask 255.255.254.0, and 10.1.2.0 and 10.1.3.0 with mask 255.255.255.0; and the clients C1, C2, and C3.) The following scenario characteristics influence both the DHCP server and the BOOTP/DHCP Relay Agent configuration: • There are three physical network segments. • There are three subnets, one for each physical segment. • There is one multi-homed host (AS5). • There is a router in the network. • There are two DHCP servers (AS1 and AS2). • There is a primary DNS server (AS5) and a secondary DNS server (AS1). • There are two BOOTP/DHCP Relay Agents (AS5 and R2). • The network implements a class A TCP/IP addressing scheme, and one subnet is using a complex mask (255.255.254.0). • There is a primary DHCP server (AS1) and a backup DHCP server (AS2). • There are no diskless workstations in the network (for example, IBM Network Stations). • The router is not configured to act as a BOOTP/DHCP Relay Agent. • AS5 performs gateway functions between subnet 1 and subnet 2. IP forwarding is enabled on AS5. 14.2 Task Summary The tasks required to complete this scenario do not include the building of line descriptions and TCP/IP interfaces on the AS/400 system. It is assumed that the TCP/IP configuration on the AS/400 system is up and running. The summary of tasks for this scenario is as follows: 1. Plan the TCP/IP addressing scheme. 2. Gather information to configure DHCP servers and BOOTP/DHCP Relay Agents. 3. Configure the primary DHCP server. 4. Configure the backup DHCP server. 5. Add routing information to both of the DHCP servers. 6. Configure the AS/400 BOOTP/DHCP Relay Agent. 7. Configure the Microsoft NT BOOTP/DHCP Relay Agent.
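Task 1 is the addressing plan. As a quick cross-check of the host counts quoted in the next section (510 addresses behind the 23-bit mask and 254 behind each 24-bit mask), the short sketch below reproduces the arithmetic in plain Python; it is an illustration only and is not part of any AS/400 configuration step.

from ipaddress import ip_network

# Usable host capacity behind each subnet mask used in this scenario.
for net in ("10.1.0.0/23", "10.1.2.0/24", "10.1.3.0/24"):
    subnet = ip_network(net)
    usable = subnet.num_addresses - 2   # minus the network and broadcast addresses
    print(f"{subnet} (mask {subnet.netmask}): {usable} usable host addresses")
# 10.1.0.0/23 yields 510 addresses, which is why subnet 1 can be split 50/50
# between the primary and backup DHCP servers.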
14.2.1 Planning the TCP/IP Addressing Scheme In a TCP/IP network with multiple subnets and TCP/IP address ranges, it is imperative to pay careful attention to the addressing scheme. This topic shows you the addressing scheme in detail. Use class A IP addresses that the Internet Assigned Numbers Authority (IANA) has reserved for private internets (network 10.0.0.0) in your internal network. They cannot be routed through the Internet, but a class A network provides you with good growth potential for the future. There are 250 clients on subnet 10.1.0.0 (subnet mask 255.255.254.0) with a total of 510 TCP/IP addresses in the range. You must split this range evenly between both DHCP servers to allow full fallback support if one server fails. On subnet 10.1.2.0, there are 170 DHCP clients. This subnet supports up to 175 clients when the primary DHCP server is active. However, during fallback when the primary fails, the backup DHCP server supports only up to 76 DHCP clients. On the remote subnet (10.1.3.0), there are only 110 DHCP clients, so you can provide full support even if one of the DHCP servers fails. Table 19 details the IP addresses for each subnet and in which DHCP server pool they reside. The Net ID column is the network portion of the TCP/IP address for the subnet. The Subnet Mask column has the mask that you must apply to the subnet. The Host Range column is the TCP/IP address range to be used once the mask has been applied. The DHCP Server ID column lists the DHCP server that administers the IP address pool. The last column, labeled %, shows the percentage of the total host range assigned to the DHCP server. Table 19. TCP/IP Addressing and Allocation of IP Address Range by DHCP Server You need to exclude the IP addresses of the DHCP servers, BOOTP/DHCP Relay Agents, and the router from each relevant subnet range, as in the following example: In the subnet pool 10.1.0.0, exclude 10.1.0.1, 10.1.0.2, 10.1.0.3, and 10.1.0.4. (Note: 10.1.0.1 is excluded for future use by a router on this subnet.) In the subnet pool 10.1.2.0, exclude 10.1.2.1 and 10.1.2.3. In the subnet pool 10.1.3.0, exclude 10.1.3.1 and 10.1.3.2. 14.2.2 Gathering Information to Configure DHCP Servers and DHCP Relay Agents To use Operations Navigator DHCP configuration effectively, you need to know how you want to set up and manage your networks and subnets with DHCP. You also need to know what address range or ranges you want to use for leasing. You must decide which system is the DHCP server, which one is the BOOTP/DHCP Relay Agent, and which one performs DHCP backup functions. Further, you need to know which IP addresses to reserve for special hosts such as routers, DNS servers, and firewalls. It is useful to refer to a network diagram that shows the subnet masks and IP addresses for your networks, routers, and clients while you are configuring DHCP. The starting point in this scenario is the network diagram shown in Figure 274 on page 316. The information shown in the following tables is based on the network picture and other network data. 14.2.2.1 AS1 DHCP Server and Administered Subnets Information Table 20 shows some general information about AS1 as a TCP/IP host, while Table 21 provides more specific information about AS1 as a DHCP server. Table 20.
Planning the Primary DHCP Server -- AS1 TCP/IP Information Net ID Subnet Mask Host Range DHCP Server ID % 10.1.0.0 255.255.254.0 0.0.0.1~0.0.0.254 As1.mycompany.com 50 10.1.2.0 255.255.255.0 0.0.0.1~0.0.0.178 As1.mycompany.com 70 10.1.3.0 255.255.255.0 0.0.0.1~0.0.0.127 As1.mycompany.com 50 10.1.0.0 255.255.254.0 0.0.1.1~0.0.1.254 As2.mycompany.com 50 10.1.2.0 255.255.255.0 0.0.0.179~0.0.0.254 As2.mycompany.com 30 10.1.3.0 255.255.255.0 0.0.0.128~0.0.0.254 As2.mycompany.com 50 Host Name As1 Description Primary DHCP server Domain Name mycompany.com IP Address 10 . 1 . 0 . 2 Multiple Subnets, DHCP Servers, and Relay Agents 319 Note: The Configuration Reference column in the following tables points to the place in the Operations Navigator DHCP server configuration where you can configure the particular parameter. You can specify many of these configuration options through the DHCP configuration wizard the first time you configure DHCP. Table 21. Planning the Primary DHCP Server AS1 -- DHCP Server Overview Mask 255.255.254.0 Line Description TRNLINE1 # Question Answer Configuration Reference 1 Is the BOOTP server already configured on your system? No DHCP configuration wizard 2 Do you want to migrate the BOOTP configuration to DHCP? N/A File -->Migrate BOOTP 3 What is the default lease time for this server? 24 hours Global-->Properties-->Leases 4 Start the DHCP server when TCP/IP starts? Yes Server Properties --> General 5 List the DHCP server IP interfaces that will be serving DHCP clients. 10.1.0.2 See network diagram. 6 List the subnets that will be administered by this DHCP server. 10.1.0.0 10.1.2.0 10.1.3.0 See subnet planning table 7 Do you want to add a new subnet to be administered by this server? Yes Global --> New Subnet - Basic Global-->New Subnet - Advanced See subnet planning table 8 Do you want to log DHCP server activity? Yes Server Properties --> Logging 9 Do you want the DHCP server to support any client from any subnet? Yes Server Properties --> Client Support 10 Do you want the DHCP server to support BOOTP clients? No Server Properties --> Client Support 11 Do you want the DHCP server to reject requests from specific clients (for example, for security reasons)? No Global->Properties-> Exclude Client 11 Can your DHCP clients (other than IBM Network Stations) identify the class they belong to? No 12 If answer to 11 is Yes, do you want to add a new class to serve the DHCP clients that belong to that class? N/A Global --> New Class 320 AS/400 TCP/IP DNS and DHCP Support Table 22 provides information about subnet 10.1.0.0 being administered by DHCP server AS1. Notice that AS1 administers 50% of the IP addresses available and that the rest is assigned to AS2, the backup DHCP server. Table 22. Planning the Subnet 10.1.0.0 Administered by AS1 from IP Interface 10.1.0.2 Table 23 provides information about subnet 10.1.2.0 being administered by DHCP server AS1. Notice that AS1 administers 70% of the IP addresses available and that the rest is assigned to AS2, the backup DHCP server. Table 23. 
Planning the Subnet 10.1.2.0 Administered by AS1 from IP Interface 10.1.0.2 # Question Answer Configuration Reference 1 Subnet name 10.1.0.0 Subnet Properties --> General 2 Subnet description Marketing Subnet Properties --> General 3 Subnet address 10.1.0.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.254.0 Subnet Properties --> Address Pool 5 Address range 10.1.0.1 10.1.0.254 Subnet Properties --> Address Pool 6 Lease time Inherit from server (24 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool Name: Router x AS1 AS2 Description: Reserved for future router DNS/DHCP server backup DHCP server IP address: 10.1.0.1 10.1.0.2 10.1.0.3 Name: AS5 Description: DNS/DHCP Relay IP Address: 10.1.0.4 8 Domain Name server IP address to deliver to clients in this subnet. 10.1.0.4 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 10.1.0.4 Subnet Properties --> Options--> Option 3 (Router) 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.254.0 10.1.0.4 10.1.0.4 10.1.0.2 Subnet Properties --> Options--> # Question Answer Configuration Reference 1 Subnet name 10.1.2.0 Subnet Properties --> General 2 Subnet description Manufacturing Subnet Properties --> General 3 Subnet address 10.1.2.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.255.0 Subnet Properties --> Address Pool 5 Address range 10.1.2.1 10.1.2.178 Subnet Properties --> Address Pool Multiple Subnets, DHCP Servers, and Relay Agents 321 Table 24 provides information about subnet 10.1.3.0 being administered by DHCP server AS1. Notice that AS1 administers 50% of the IP addresses available and that the rest is assigned to AS2, the backup DHCP server. Table 24. Planning the Subnet 10.1.3.0 Administered by AS1 from IP Interface 10.1.0.2 6 Lease time Inherit from server (24 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool Name: Router AS5 Description: Router to next subnet DNS/DHCP relay IP address: 10.1.2.1 10.1.2.3 8 Domain Name server IP address to deliver to clients in this subnet. 10.1.2.3 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 10.1.2.1 Subnet Properties --> Options--> Option 3 (Router) 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.255.0 10.1.2.1 10.1.2.3 10.1.0.2 Subnet Properties --> Options--> # Question Answer Configuration Reference 1 Subnet name 10.1.3.0 Subnet Properties --> General 2 Subnet description Research Subnet Properties --> General 3 Subnet address 10.1.3.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.255.0 Subnet Properties --> Address Pool 5 Address range 10.1.3.1 10.1.3.127 Subnet Properties --> Address Pool 6 Lease time Inherit from server (24 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool Name: Router R2 Description: Router to next subnet NT DHCP relay IP address: 10.1.3.1 10.1.3.2 8 Domain Name server IP address to deliver to clients in this subnet. 
10.1.2.3 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 10.1.3.1 Subnet Properties --> Options--> Option 3 (Router) # Question Answer Configuration Reference 322 AS/400 TCP/IP DNS and DHCP Support 14.2.2.2 AS2 DHCP Server and Administered Subnets Information Table 25 shows some general information about AS2 as a TCP/IP host, while Table 26 provides more specific information about AS2 as a DHCP server. Table 25. Planning the Primary DHCP Server -- AS2 TCP/IP Information Note: The Configuration Reference column in the following tables points to the place in the Operations Navigator DHCP server configuration where you can configure the particular parameter. You can specify many of these configuration options through the DHCP configuration wizard the first time you configure DHCP. Table 26. Planning the Primary DHCP Server AS2 -- DHCP Server Overview 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.255.0 10.1.3.1 10.1.2.3 10.1.0.2 Subnet Properties --> Options--> Host Name AS2 Description Backup server Domain Name mycompany.com IP Address (Interface) 10 . 1 . 0 . 3 Mask 255.255.254.0 Line Description TRNLINE1 # Question Answer Configuration Reference 1 Is the BOOTP server already configured on your system? No DHCP configuration wizard 2 Do you want to migrate the BOOTP configuration to DHCP? N/A File -->Migrate BOOTP 3 What is the default lease time for this server? 24 hours Global-->Properties-->Leases 4 Start the DHCP server when TCP/IP starts? Yes Server Properties --> General 5 List the DHCP server IP interfaces that will be serving DHCP clients. 10.1.0.3 See network diagram. 6 List the subnets that will be administered by this DHCP server. 10.1.0.0 10.1.2.0 10.1.3.0 See subnet planning table 7 Do you want to add a new subnet to be administered by this server? Yes Global --> New Subnet - Basic Global-->New Subnet - Advanced See subnet planning table 8 Do you want to log DHCP server activity? Yes Server Properties --> Logging 9 Do you want the DHCP server to support any client from any subnet? Yes Server Properties --> Client Support # Question Answer Configuration Reference Multiple Subnets, DHCP Servers, and Relay Agents 323 Table 27 provides information about subnet 10.1.0.0 being administered by DHCP server AS2. Notice that AS2 administers 50% of the IP addresses available and that the rest is assigned to AS1, the primary DHCP server. Table 27. Planning the Subnet 10.1.0.0 Administered by As2 from IP Interface 10.1.0.3 Table 28 provides information about subnet 10.1.2.0 being administered by the DHCP server AS2. Notice that AS2 administers 30% of the IP addresses available and that the rest is assigned to AS1, the primary DHCP server. 10 Do you want the DHCP server to support BOOTP clients? No Server Properties --> Client Support 11 Do you want the DHCP server to reject requests from specific clients (for example, for security reasons)? No Global->Properties-> Exclude Client 11 Can your DHCP clients (other than IBM Network Stations) identify the class they belong to? No 12 If answer to 11 is Yes, do you want to add a new class to serve the DHCP clients that belong to that class? N/A Global --> New Class 12 Clients with static addr./ spec. options? 
Yes/No Server->Popup->NewClient # Question Answer Configuration Reference 1 Subnet name 10.1.0.0 Subnet Properties --> General 2 Subnet description Marketing Subnet Properties --> General 3 Subnet address 10.1.0.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.254.0 Subnet Properties --> Address Pool 5 Address range 10.1.1.1 10.1.1.254 Subnet Properties --> Address Pool 6 Lease time Inherit from server (24 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool NONE. All the excluded hosts in this subnet fall outside the range of addresses administered by this server. 8 Domain Name server IP address to deliver to clients in this subnet. 10.1.0.4 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 10.1.0.4 Subnet Properties --> Options--> Option 3 (Router) 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.254.0 10.1.0.4 10.1.0.4 10.1.0.2 Subnet Properties --> Options--> # Question Answer Configuration Reference 324 AS/400 TCP/IP DNS and DHCP Support Table 28. Planning the Subnet 10.1.2.0 Administered by AS2 from IP Interface 10.1.0.3 Table 29 provides information about subnet 10.1.3.0 being administered by DHCP server AS1. Notice that AS2 administers 50% of the IP addresses available and that the rest is assigned to AS1, the primary DHCP server. Table 29. Planning the Subnet 10.1.3.0 Administered by AS2 from IP Interface 10.1.0.3 # Question Answer Configuration Reference 1 Subnet name 10.1.2.0 Subnet Properties --> General 2 Subnet description Manufacturing Subnet Properties --> General 3 Subnet address 10.1.2.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.255.0 Subnet Properties --> Address Pool 5 Address range 10.1.2.179 10.1.2.254 Subnet Properties --> Address Pool 6 Lease time Inherit from server (24 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool NONE. All the excluded hosts in this subnet fall outside the range of addresses administered by this server. 8 Domain Name server IP address to deliver to clients in this subnet. 10.1.2.3 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 10.1.2.1 Subnet Properties --> Options--> Option 3 (Router) 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.255.0 10.1.2.1 10.1.2.3 10.1.0.2 Subnet Properties --> Options--> # Question Answer Configuration Reference 1 Subnet name 10.1.3.0 Subnet Properties --> General 2 Subnet description Research Subnet Properties --> General 3 Subnet address 10.1.3.0 Subnet Properties --> Address Pool 4 Subnet mask 255.255.255.0 Subnet Properties --> Address Pool 5 Address range 10.1.3.128 10.1.3.254 Subnet Properties --> Address Pool 6 Lease time Inherit from server (24 hours) Subnet Properties -->Leases 7 Exclusions (exclude hosts that required a particular IP address and are manually configured). Subnet Properties --> Address Pool NONE. All the excluded hosts in this subnet fall outside the range of addresses administered by this server. Multiple Subnets, DHCP Servers, and Relay Agents 325 Table 30 shows the information that is necessary to configure BOOTP/DHCP Relay Agent AS5. Table 30. 
Planning the BOOTP/DHCP Relay Agent -- AS5 Table 31 shows the information that is necessary to configure BOOTP/DHCP Relay Agent R2, which runs on an NT server. Table 31. Planning DHCP Relay Agent R2 8 Domain Name server IP address to deliver to clients in this subnet. 10.1.2.3 10.1.0.2 Subnet Properties --> Options--> Option 6 (Domain name server) 9 Gateway IP address to deliver to clients in this subnet. 10.1.3.1 Subnet Properties --> Options--> Option 3 (Router) 10 Offer options to client in this subnet 01 - Subnet mask 03 - Router 06 - Domain name server 255.255.255.0 10.1.3.1 10.1.2.3 10.1.0.2 Subnet Properties --> Options--> Host Name AS5 Description BOOTP/DHCP Relay Agent Domain Name mycompany.com Interface to accept DHCP packets 10.1.2.3 Destination server / relay agent 10.1.0.2 - AS1 Maximum number of hops to DHCP server 4 Packet transm. delay (ms) 0 Interface to accept DHCP packets 10.1.2.3 Destination server / relay agent 10.1.0.3 Maximum hops 4 Packet transm. delay (ms) 5000 Host Name R2 Description NT BOOTP/DHCP Relay Agent Domain Name mycompany.com Interface to accept DHCP packets 10.1.3.2 Destination server / relay agent 10.1.2.3 Maximum hops 4 Seconds threshold 0 14.2.3 Configure the Primary DHCP Server (AS1) It is assumed that this is the first time you are configuring the DHCP server. Therefore, the Operations Navigator DHCP server configuration wizard starts. 1. Start Operations Navigator on your workstation. 2. Click As1.mycompany.com to select the system. Figure 275. AS/400 Operations Navigator -- Selecting the System to Configure DHCP Server 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. This starts the DHCP configuration wizard. 7. Click Next. 8. Select Yes to add a new subnet to the DHCP server. 9. Leave the Twinax IP workstation controller address box blank and click Next. 10. Define the range of addresses to use within the subnet. Note: If the DHCP configuration wizard is not shown, it is likely that a DHCP configuration already exists. To start the wizard and replace the existing configuration, select File > New Configuration. Note: The following steps do not reflect the exact sequence of prompts that you see during the DHCP configuration. Only those configuration parameters that are the most relevant to this scenario are included. Figure 276. IP Address Range for Subnet 1 (10.1.0.0) on AS1 Define the lease time, which determines how long a client keeps the address it is served. Click Next to use the default lease time of one day. 11. Exclude the IP addresses permanently assigned to servers and routers (see Figure 277). Figure 277. Exclude IP Address in 10.1.0.0 Subnet -- AS1 DHCP Server 12. Specify the gateway information for this subnet (see Figure 278). Figure 278. Subnet 10.1.0.0 Gateway Configuration 13. Specify the DNS IP address (see Figure 279) and click Next. Figure 279. DNS Configuration for Clients in 10.1.0.0 Subnet -- AS1 DHCP Server 14. Answer No to the question "Would you like the DHCP server to deliver domain name to clients in this subnet?" Click Next. 15. Check Support any clients from this subnet. 16. Answer No to the question "Would you like to set other options for this subnet?" Click Next. 17. Select Yes to start the DHCP server when TCP/IP starts, and select No to start the DHCP server now. Click Next.
18. The DHCP configuration summary window shows all the options that you have selected so far. Click Finish. Now the DHCP server configuration is displayed. 19. Add the subnet mask option for this subnet. Right-click subnet 10.1.0.0 to open a context menu and select Properties. 20. Click the Options tab at the top of the dialog. 21. Highlight option number 1, Subnet mask, and click Add. 22. Specify the mask value to use for this subnet in the field at the bottom of the dialog. The mask to specify is 255.255.254.0. 23. Click OK. The next step is to add the other two subnets to the primary DHCP server, As1.mycompany.com. 24. From the DHCP Server Configuration window, right-click Global to open a context menu and select New > Subnet - Advanced (see Figure 280). Figure 280. AS1 DHCP Server Configuration -- Adding Subnet 10.1.2.0 25. Ensure the General tab is selected and specify the network ID (for documentation purposes only) in the field labeled Name (see Figure 281). 26. Place a description of the subnet in the Description field (see Figure 281). Figure 281. Subnet Properties -- Adding the Name and Description for Subnet #2 (10.1.2.0) 27. Click the Address pool tab. 28. Click Range to assign, and specify the second IP address range that Table 19 on page 318 allocates to As1.mycompany.com. 29. Click Add and exclude the IP addresses of the router and of As5.mycompany.com, as shown in Figure 282. Figure 282. 10.1.2.0 Subnet Address Range and Exclusions 30. Click the Options tab to add a subnet mask to serve to the clients. 31. Highlight option 1, subnet mask, from the Available options window and click Add. 32. Specify the appropriate subnet mask for the clients to use in the Subnet mask window at the bottom of the display. In this example, specify 255.255.255.0. 33. Highlight option 3, router, from the Available options panel and then click Add. 34. Specify the appropriate router information for the subnet’s clients. In this example, specify 10.1.2.1. 35. Highlight option 6, Domain name server, from the Available options dialog and then click Add. 36. Specify the appropriate DNS information for the subnet’s clients. In this example, specify 10.1.2.3 and 10.1.0.2. 37. Click OK. Figure 283 shows the options configured for subnet 10.1.2.0 on DHCP server AS1. Figure 283. Subnet 10.1.2.0 Options on AS1 DHCP Server Repeat steps 25 through 37 to add the third subnet pool range to As1.mycompany.com. Figure 284 on page 332 shows an example of the pool range for subnet 3 (10.1.3.0) on As1.mycompany.com. Figure 284. Subnet 10.1.3.0 IP Address Range and Exclusions Figure 285 shows the configuration options for clients on subnet 10.1.3.0. Figure 285. Subnet 10.1.3.0 Configuration Options -- AS1 DHCP Server 14.2.4 Configure the Backup DHCP Server (AS2) The backup DHCP server is As2.mycompany.com. The steps to configure the backup DHCP server are the same as those that you used to configure the primary DHCP server in “Configure the Primary DHCP Server (AS1)” on page 326. The only difference between the two sets of steps is the TCP/IP address range that you use on the backup DHCP server. This address range for the backup DHCP server must be different from the primary DHCP server’s range. Use Table 19 on page 318 to decide the range of IP addresses to use in each subnet for the backup DHCP server.
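Because the primary and backup DHCP servers must never administer the same addresses, it can be worth sanity-checking the split before keying the ranges into Operations Navigator. The sketch below is only an illustration of the arithmetic (the pool boundaries are the ones listed in Table 19 and in the subnet planning tables); it confirms that the AS1 and AS2 pools in each subnet do not overlap and shows how many addresses each server administers.

from ipaddress import ip_address

# Address pools from Table 19, written as inclusive (start, end) pairs.
POOLS = {
    "10.1.0.0/23": {"AS1": ("10.1.0.1", "10.1.0.254"), "AS2": ("10.1.1.1", "10.1.1.254")},
    "10.1.2.0/24": {"AS1": ("10.1.2.1", "10.1.2.178"), "AS2": ("10.1.2.179", "10.1.2.254")},
    "10.1.3.0/24": {"AS1": ("10.1.3.1", "10.1.3.127"), "AS2": ("10.1.3.128", "10.1.3.254")},
}

def overlaps(a, b):
    a_start, a_end = map(ip_address, a)
    b_start, b_end = map(ip_address, b)
    return a_start <= b_end and b_start <= a_end

def size(pool):
    return int(ip_address(pool[1])) - int(ip_address(pool[0])) + 1

for subnet, pools in POOLS.items():
    assert not overlaps(pools["AS1"], pools["AS2"]), f"pools overlap in {subnet}"
    print(f"{subnet}: AS1 administers {size(pools['AS1'])} addresses, "
          f"AS2 administers {size(pools['AS2'])} addresses")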
Samples of the subnet properties windows are provided for each of the subnets on the backup DHCP server. Figure 286 on page 333 shows the range of IP addresses for the network 10.1.0.0 with a mask of 255.255.254.0 that you can use on the backup DHCP server. Figure 287 on page 334 shows the range of addresses for the network 10.1.2.0 that you can use on the backup DHCP server. Figure 288 on page 334 shows the range of addresses for the network 10.1.3.0 that you can use on the backup DHCP server. The mask, router, and DNS information to be delivered to clients on the three subnets is the same as that configured on the primary DHCP server, AS1. Figure 286. Backup Subnet #1 (10.1.0.0) IP Address Range Properties Note: No addresses are excluded in the examples for the backup DHCP server. Because the upper range of the TCP/IP addresses is used for the subnets on the backup DHCP server, the IP addresses that need to be excluded do not fall into the ranges defined on the backup DHCP server, so there is nothing to exclude. Figure 287. Backup Subnet #2 (10.1.2.0) IP Address Range Properties Figure 288. Backup Subnet #3 (10.1.3.0) IP Address Range Properties 14.2.5 Configure Routing Information on Both DHCP Servers The BOOTP/DHCP Relay Agent takes the IP address of the interface on which the broadcast DHCP message arrived and places this address into the packet that it forwards to the DHCP server. The DHCP server uses this address as a clue to select the correct address pool from which to offer an IP address to the client. The DHCP server sends all DHCP replies directly to this address. Once the client receives the DHCPOFFER, all communication after that is directly between the client and the DHCP server. The DHCP client sends the DHCPREQUEST directly to the DHCP server. The DHCP server then responds directly to the client with the DHCPACK.
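To make the "clue" mechanism concrete, the following sketch models how a DHCP server can match the relay agent address (or, for local requests, its own receiving interface) to a configured subnet pool. It is a simplified model for illustration only, not the AS/400 DHCP server's code, and the pool labels are invented for the example.

from ipaddress import ip_address, ip_network

# Simplified model of subnet selection from the relay agent address; illustration only.
SUBNET_POOLS = {
    ip_network("10.1.0.0/23"): "address pool for subnet 1",
    ip_network("10.1.2.0/24"): "address pool for subnet 2",
    ip_network("10.1.3.0/24"): "address pool for subnet 3",
}

def select_pool(giaddr, receiving_interface):
    # A giaddr of 0.0.0.0 means the request arrived on the local wire, so the
    # server's own receiving interface identifies the subnet instead.
    clue = ip_address(giaddr if giaddr != "0.0.0.0" else receiving_interface)
    for subnet, pool in SUBNET_POOLS.items():
        if clue in subnet:
            return pool
    return None

# A request relayed by AS5 from subnet 2 carries giaddr 10.1.2.3, so the server
# offers an address from the 10.1.2.0 pool and unicasts the reply to 10.1.2.3.
print(select_pool("10.1.2.3", "10.1.0.2"))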
1=Add 2=Change 4=Remove 5=Display Route Subnet Next Preferred Opt Destination Mask Hop Interface _ _______________ _______________ _______________ _ 10.1.2.0 255.255.255.0 10.1.0.4 10.1.0.2 Bottom F3=Exit F5=Refresh F6=Print list F11=Display type of service F12=Cancel F17=Top F18=Bottom 336 AS/400 TCP/IP DNS and DHCP Support 14.2.6 Configuring a BOOTP/DHCP Relay Agent It is simple to configure the AS/400 system to act as a BOOTP/DHCP Relay Agent. Remember that you cannot run the BOOTP/DHCP Relay Agent and the DHCP server on the same system simultaneously. In this scenario, you are configuring AS/400 system AS5 as a BOOTP/DHCP Relay Agent to forward DHCP messages directly and without delay to the primary DHCP server, AS1. You are also configuring the AS/400 system to send the same DHCP messages to the backup DHCP server but also to delay the arrival of the message. This is done to ensure that the primary DHCP server replies first. To configure the AS/400 BOOTP/DHCP Relay Agent, perform these steps: 1. Sign on to an AS/400 command entry session and specify the following command: CHGDHCPA MODE(*RELAY) Press Enter. This changes the mode of the DHCP server so that it can function as a BOOTP/DHCP Relay Agent. 2. From Operations Navigator, select As5.mycompany.com>Network>Servers>OS400, and double-click BOOTP/DHCP Relay Agent (see Figure 290). Figure 290. AS/400 Operations Navigator -- Starting BOOTP/DHCP Relay Agent 3. The BOOTP/DHCP Relay Agent properties window appears. Click Start when TCP/IP is started to ensure that it is checked. 4. Click Add. 5. Using the pull-down option on the Interface address field at the top of the window, select the TCP/IP interface on which the DHCP broadcast message arrives. This scenario uses 10.1.2.3. Multiple Subnets, DHCP Servers, and Relay Agents 337 6. Specify the IP address of the primary DHCP server to which the DHCP messages are sent. This scenario uses 10.1.0.2, the address on the primary DHCP server. Refer to Figure 291 on page 337. Note: You can specify the system name if your DNS server can resolve the IP address or if you have correctly configured your host table. 7. Leave the Packet transmission delay value at zero. 8. Leave the Maximum hops value set to the default of 4. Figure 291. BOOTP/DHCP Relay Agent Definitions 9. Click OK. 10.You are returned to the BOOTP/DHCP Relay Agent Properties dialog. Click Add to add the backup DHCP server information to the relay configuration. 11.Leave the Interface address pull-down menu on 10.1.2.3. If it is not already at this value, select this interface again. 12.In the Server IP address field, specify 10.1.0.3 as the IP address of the backup DHCP server (see Figure 292 on page 338). 13.Change the Packet transmission delay value to 5000 milliseconds. This delays the forwarding of the DHCP messages to the backup DHCP server by 5 seconds. 14.Leave the Maximum hops value set to the default of 4. 338 AS/400 TCP/IP DNS and DHCP Support Figure 292. Relay Forwarding Configuration to the Backup DHCP Server 15.Click OK. 16.Click OK. Figure 293 shows that any DHCP messages arriving on the 10.1.2.3 interface are forwarded to 10.1.0.2, the primary DHCP server. The BOOTP/DHCP Relay Agent forwards the packets to the backup DHCP server (interface 10.1.0.3) with a 5-second delay. Figure 293. BOOTP/DHCP Relay Agent Configuration 14.2.7 Configure the Microsoft NT BOOTP/DHCP Relay Agent In this scenario, the Windows NT BOOTP/DHCP Relay Agent is located on the far side of a routed network. 
You must configure the NT BOOTP/DHCP Relay Agent to always forward each DHCP message to the AS/400 BOOTP/DHCP Relay Agent, As5.mycompany.com. Multiple Subnets, DHCP Servers, and Relay Agents 339 To configure the NT server to act as a BOOTP/DHCP Relay Agent, perform the following steps: 1. Double-click My Computer on the desk top. 2. Double-click Control Panel. 3. Double-click Networks. 4. Click the Services tab. 5. Click Add. 6. Select BOOTP/DHCP Relay Agent from the list and click OK. Insert the appropriate Windows NT installation CDROM. 7. Click OK. 8. Click the Protocols tab. 9. Right-click TCP/IP to open a context menu and select Properties. 10.Click the DHCP Relay tab. 11.Change the Seconds threshold value to zero seconds. 12.Leave the Maximum hops value field at 4. Figure 294. Windows NT BOOTP/DHCP Relay Agent Configuration 13.Click Add. 340 AS/400 TCP/IP DNS and DHCP Support 14.Specify 10.1.2.3 as the IP address of the AS/400 BOOTP/DHCP Relay Agent. 15.Click Add. 16.Click OK. 17.Select Yes to shut down the NT server. The NT BOOTP/DHCP Relay Agent is now configured to send DHCP messages from subnet 10.1.3.0 to the AS/400 BOOTP/DHCP Relay Agent As5.mycompany.com. It is also configured for forwarding to both DHCP servers. 14.2.8 Start the DHCP Servers and BOOTP/DHCP Relay Agents The first time you start the DHCP servers and BOOTP/DHCP Relay Agents, you must perform the start-up procedure in an ordered manner. Because DHCP clients remember and attempt to gain the same IP address they last used, you must start the primary DHCP server first. Follow this by starting the primary AS/400 BOOTP/DHCP Relay Agent and then the NT BOOTP/DHCP Relay Agent. Start the backup DHCP server last. Note: If you receive an error message when attempting to start either the DHCP server or the BOOTP/DHCP Relay Agent, ensure that neither the BOOTP server nor the DHCP server is running while you start the relay agent. The order to start the systems in listed format is as follows: 1. Start the Primary DHCP server, As1.mycompany.com. From Operations Navigator, right-click DHCP to open a context menu and select Start (see Figure 295 on page 341). Alternatively, on an AS/400 command entry display, you can enter the following command: STRTCPSVR SERVER(*DHCP) 2. Start the AS/400 BOOTP/DHCP Relay Agent, As5.mycompany.com. From Operations Navigator, right-click on BOOTP/DHCP relay agent to open a context menu and select Start, (see Figure 296). Alternatively, on an AS/400 command entry display, you can enter the command: STRTCPSVR SERVER(*DHCP) 3. Start the Windows NT BOOTP/DHCP Relay Agent, R2.mycompany.com. Click OK. 4. Start the backup DHCP server, As2.mycompany.com. From Operations Navigator, right-click DHCP to open a context menu and select Start (see Figure 296 on page 341). Alternatively, on an AS/400 command entry display, you can enter the command: STRTCPSVR SERVER(*DHCP) Multiple Subnets, DHCP Servers, and Relay Agents 341 Figure 295. Starting the Primary DHCP Server Figure 296. Starting the BOOTP/DHCP Relay Agent on As5.mycompany.com 14.3 Summary This scenario configured a multi-subnetted network. It provided complete fall-back support in the event that the primary DHCP server fails for subnets 10.1.0.0 and 10.1.3.0. Partial support of approximately 30% for the subnet 10.1.2.0 was provided. This was due mainly to the addressing scheme and the use of the 70/30 split technique. On DHCP servers AS1 and AS2, you configured a TCP/IP route to reach the remote subnets through AS5. 
342 AS/400 TCP/IP DNS and DHCP Support You configured an AS/400 BOOTP/DHCP Relay Agent to forward DHCP messages to both the primary and backup DHCP servers. You also biased the primary by making a delay when sending messages to the back-up DHCP server. Using this method allowed the primary DHCP server to respond first to the client. You also configured an NT BOOTP/DHCP Relay Agent on a remote subnet joined by a router or gateway that forwards DHCP messages from subnet 10.1.3.0 to the AS/400 BOOTP/DHCP Relay Agent. In the event that the primary DHCP server fails, no change is required in the configuration of the DHCP servers and DHCP relay agents. If the NT BOOTP/DHCP Relay Agent fails, the clients on the subnet 10.1.3.0 are unable to connect. © Copyright IBM Corp. 1998 343 Chapter 15. Configuring Twinax IBM Network Station with DHCP The BOOTP protocol was developed for bootstrapping, and it is the predecessor of DHCP. Previous chapters showed how to provide initial configuration to LAN-attached IBM Network Stations using the DHCP server. This chapter describes how to use DHCP to configure twinax-attached IBM Network Stations. It also introduces the concepts of transparent subnetting and Proxy ARP. This concept is necessary to understand routing concepts for DHCP clients that are attached to twinax workstation controllers on an AS/400 system. With V4R2 comes the ability to run the TCP/IP protocol encapsulated within Twinaxial Data Link Control (TDLC) frames. This gives you the ability to replace 5250 type devices, which are attached through the local workstation controller with IBM Network Stations without having to change your investment in cabling. The IBM Network Station gives users the ability to access your intranet or the Internet while still keeping the same 5250 emulation displays to which they are accustomed. The twinax-attached IBM Network Stations coexist with other non-TCP/IP 5250 devices on the same controller. TCP/IP over twinax is introduced in V4R2 to support twinax-attached IBM Network Stations 8361 Model 341. IBM Network Station Manager release 3.0 (5648-C05) is required. There are a number of distance limitations on any twinax workstation controller when used in express mode. Refer to the following URLs for further information: www.networking.ibm.com/525xpres/525xwire.html or www.networking.ibm.com/525xpres/525xpress.html for Express support. No previous releases of OS/400 or Network Station Manager support the IBM Network Station model 341. To accommodate the TCP/IP twinax subnet into your addressing scheme and to allow the twinax subnet access to the LAN and beyond, the AS/400 system utilizes a concept called transparent subnet masking. The implementation is based on RFC 1027, "Using ARP to Implement Transparent Subnet Gateways." All hosts that implement transparent subnetting use a variable length mask to identify the different subnets. The DHCP configuration in Operations Navigator is twinax aware. From a DHCP administrator’s point of view, it is easy to configure the twinax-attached IBM Network Stations if you understand the addressing structure of the network. 15.1 Getting Started: Basic IP over Twinax Configuration This scenario shows how to get started with IP over twinax. It demonstrates how to configure a simple environment where one AS/400 system has IBM Network Stations attached by way of twinax. 
The network in this scenario is mainly SNA, and the only TCP/IP connection besides the twinax-attached IBM Network Stations is a connection to the Internet by way of a firewall. The IBM Network Stations on the twinax network are used for 5250 emulation to the attached host and for Web browsing through the firewall. 344 AS/400 TCP/IP DNS and DHCP Support 15.1.1 Scenario Overview This scenario has one AS/400 system with one TCP/IP connection to a firewall. All other network connectivity to the host is through SNA. This scenario greatly simplifies the TCP/IP addressing considerations that you encounter in a TCP/IP-based network. The IBM Network Stations are used to run 5250 emulation to the attached host and for Internet access, such as Web browsing. This scenario does not discuss the firewall configuration. Figure 297. Basic IP over Twinax Configuration -- Scenario Overview 15.1.2 Scenario Objectives The objectives of this scenario are as follows: • Configure twinax-attached IBM Network Stations. • Configure DHCP server to support the twinax subnet. • Allow the twinax-attached IBM Network Stations connectivity to the Internet. 15.1.3 Scenario Advantages The advantages of this scenario are that it: • Is simple to implement. • Allows Internet access for the twinax-attached devices with minor configuration changes. • Requires few TCP/IP address considerations and planning steps. Configuring Twinax IBM Network Station with DHCP 345 15.1.4 Scenario Disadvantages The disadvantages of this scenario are that: • No consideration has been given to the future growth of the TCP/IP network. • The IBM Network Stations are unable to directly access the host, AS5. They are required to connect to AS2 and then pass through (STRPASTHR) to the host, AS5. • Only TCP/IP-attached devices gain Internet Web access. 15.1.5 Scenario Network Configuration Figure 298 shows the logical network topology of the simple TCP/IP network. None of the SNA network nodes or control points are shown. Configure the twinax subnet to use a small portion of the 10.1.1.0 address space. Figure 298. Logical Network Topology of the TCP/IP Network Only 15.1.6 Task Summary The following list is a high-level view of the tasks required to implement this scenario: 1. Define a TCP/IP address range to use on the twinax subnet. 2. Configure and start the DHCP server on AS2 to support the twinax subnet. 3. Start the IBM Network Station. 15.1.7 Define a TCP/IP Address Range Use an IP address range on your twinax subnet that is a subset of the overall network address 10.1.1.0. This automatically gives the twinax-attached IBM Network Stations connectivity to the firewall for incoming and outgoing network traffic. This method of using a chunk of address space from the LAN utilizes Proxy ARP and is called transparent subnet masking. For more information on transparent subnetting and Proxy ARP, refer to Section 15.2, “Transparent Subnet Masking” on page 352. 346 AS/400 TCP/IP DNS and DHCP Support The addresses used on the twinax subnet must be contiguous. You can assign a maximum of 64 contiguous addresses. Use the IP address range of 10.1.1.192 through 10.1.1.254 on the twinax subnet. You must ensure that the addresses contained in this range are not used anywhere else within the network. 15.1.8 Configure and Start the DHCP Server on AS2 You are configuring DHCP on a system without an existing configuration. Operations Navigator automatically starts the DHCP Configuration Wizard, which helps create a basic, twinax DHCP server configuration. 
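The address range chosen in 15.1.7 can be sanity-checked before you feed it to the wizard. A minimal sketch, assuming Python's standard ipaddress module on any workstation; the addresses are the ones planned above:

    import ipaddress

    lan    = ipaddress.ip_network("10.1.1.0/24")
    twinax = ipaddress.ip_network("10.1.1.192/26")    # mask 255.255.255.192

    print(twinax.num_addresses)                       # 64, the workstation controller maximum
    print(twinax.network_address)                     # 10.1.1.192, a subnet boundary
    usable = list(twinax.hosts())                     # 10.1.1.193 through 10.1.1.254
    print(usable[0], usable[-1], len(usable))         # 62 assignable host addresses
    print(twinax.subnet_of(lan))                      # True: the block is carved from the LAN

The hosts() call drops the boundary address 10.1.1.192 and the broadcast address 10.1.1.255 automatically, which matches the rule noted later that 10.1.1.192 cannot be used to address a device.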
To start the DHCP configuration wizard, perform the following steps: 1. Start Operations Navigator. 2. Click As2.mycompany.com to select the system. Figure 299. Operations Navigator -- Selecting the System to Configure DHCP Server 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP to start the DHCP configuration wizard. If you want to reset an existing configuration and start over, select File -> New Configuration from Operations Navigator. Tip Configuring Twinax IBM Network Station with DHCP 347 Figure 300. The DHCP Configuration Wizard 7. Click Next. 8. Select Yes to add a new subnet to the DHCP server. 9. Answer Yes to the question "Will this subnet manage twinax devices?" 10.Specify 10.1.1.193 as the IP address to use for the twinax controller (see Figure 301). Click Next. The first usable address in the subnet defined for the twinax network should be used for the workstation controller. 10.1.1.192 is a subnet boundary and cannot be used to address a device. If the DHCP configuration wizard does not appear, it is likely that a DHCP configuration already exists. To start the wizard and replace the existing configuration, select New Configuration from the File menu. Note 348 AS/400 TCP/IP DNS and DHCP Support Figure 301. Specify the IP Address to be Used by the Workstation Controller 11.Specify the subnet name, description, and the range of addresses to use within the subnet (in this case, specify all 64). Supply a subnet mask (see Figure 304). Figure 302. Twinax Subnet Configuration 12. Specify a lease time for the client to keep the address served. Specify the lease time as never expire. Selecting a large lease renewal value reduces the lease renewal request traffic on your twinax network. 13.Click Yes to have the DHCP server deliver the IP address of the firewall Domain Name Server to the twinax-attached IBM Network Stations. 14.Click Add and specify the address 10.1.1.30, which is the firewall’s secure port IP address. Note: To resolve host names in the Internet name space, the IBM Network Station client must use the firewall DNS. No internal DNS is available in the secure network for this scenario. Configuring Twinax IBM Network Station with DHCP 349 15.Click Next. 16.Answer No to the question "Would you like to set other options for this subnet?" These are added later. Click Next. 17.Select Yes to start the DHCP server when TCP/IP starts, and Select No to start the DHCP server now. Click Next. 18.The DHCP configuration summary window shows all the options that you have selected so far. Click Finish. Figure 303. DHCP Configuration Summary 19.Now the DHCP server configuration is displayed. Right-click the subnet called LocalTwinax to open a context menu and select Properties (see Figure 304). Figure 304. DHCP Server Configuration 20.Ensure that the General tab is selected. Click the check box labeled Twinax subnet. 350 AS/400 TCP/IP DNS and DHCP Support 21.Specify 10.1.1.193 as the IP address of the workstation controller (see Figure 305). Figure 305. Adding a Twinax Subnet in the DHCP Configuration You must also serve the subnet mask and other options to the clients. 22.Click the Options tab to add a subnet mask that is served to the clients. 23.Highlight option 1, subnet mask, from the Available options window and click Add. 24.Specify the mask 255.255.255.192 for the clients to use in the Subnet mask window at the bottom of the display (see Figure 306). 
25.Highlight option 3, Router, from the Available options window and click Add (see Figure 306). 26.Specify the router or gateways IP address. The gateway for the twinax subnet is the workstation controller at IP address 10.1.1.193 (see Figure 306). 27.Highlight option 66, Server name, from the Available options window and use the workstations controllers address of 10.1.1.193 (see Figure 306). 28.Verify option 67, Boot file name, from the Available options window for the IBM Network Station class. It should be /QIBM/ProdData/NetworkStation/kernel (see Figure 306). Configuring Twinax IBM Network Station with DHCP 351 Figure 306. DHCP Server Options 29.Click OK. 30.Close the DHCP configuration display. 31.Right-click DHCP from Operations Navigator to open a context menu. Select Start. The DHCP server is now running and configured only to serve the local twinax subnet. 15.1.9 Start the IBM Network Station The twinaxial-attached network stations are different from the normal token-ring or Ethernet stations. The NVRAM options are presented differently, although functionally they remain the same. Details about the differences and new features of the IBM Network Station are outside the scope of this book. It is also assumed that you have cabled the twinax IBM Network Station correctly. When the twinax IBM Network Station is first powered on, it prompts you to specify the address to use for the port to which it is connected. This is not the TCP/IP address. It is an address from 0 to 6 to use on the workstation controller port to which the IBM Network Station is connected. The twinax IBM Network Station requires that you specify the address to use on the port to which it is connected. Therefore, if you are replacing non-programmable terminals with the IBM Network Station, make a note of the address the old device was using, such as port 02 and address 05. Otherwise, you must ensure that no other device is configured to use the same address on the same port. Note 352 AS/400 TCP/IP DNS and DHCP Support To configure the IBM Network Station for use over twinax, perform the following steps: 1. Power on the IBM Network Station. 2. Specify the local controller address to use when prompted to do so. Press Enter. The IBM Network Station checks to see if anyone else is using that address. If not, it uses DHCP to default to startup and continues to boot until completion, provided the DHCP server is started. Detailed instructions on how to reset the IBM Network Station to factory defaults is described in Section 15.4.4.1, “Resetting NVRAM” on page 370. For a detailed reference of the startup tasks that occur when the first twinax attached IBM Network Station is powered on, refer to Section 15.4.4.2, “The Startup Sequence” on page 371. 15.1.10 Summary This scenario attached IP over twinax devices to a local workstation controller on As2.mycompany.com. It also used a contiguous chunk of IP addresses from the network 10.1.1.0 for the twinax subnet. You configured and started the DHCP server on As2.mycompany.com to service the twinax subnet. The IBM Network Station was powered on and the necessary line, controller, device, and TCP/IP interface for the workstation controller was built automatically. 15.2 Transparent Subnet Masking Transparent subnet masking is new to the AS/400 system in V4R2. It uses variable length masks to identify the different subnets and, in terms of connectivity, allows IP over twinax devices to appear as though they were on the local network. 
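A concrete way to see this transparency is to compare the two masks involved. A hedged sketch, again assuming Python's ipaddress module; the sample addresses (10.1.1.129 for the LAN interface, 10.1.1.194 for a twinax station) are drawn from the scenarios in this chapter:

    import ipaddress

    lan_if     = ipaddress.ip_interface("10.1.1.129/24")   # the AS/400 LAN interface
    twinax_net = ipaddress.ip_network("10.1.1.192/26")     # the block behind the workstation controller
    station    = ipaddress.ip_address("10.1.1.194")        # one twinax IBM Network Station

    print(station in lan_if.network)   # True: a LAN host treats the station as on-link and ARPs for it
    print(station in twinax_net)       # True: the AS/400 system answers that ARP on the station's behalf

If the station's address fell outside the /26 block, the AS/400 system would stay quiet and ordinary routing would have to carry the packet instead.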
The AS/400 system implementation is based on RFC1027, "Using ARP to Implement Transparent Subnet Gateways." The term transparent subnet masking is slightly misleading. Another way to describe it is with the term IP address grouping. Using different masks over the same network ID, you can segment or group contiguous ranges of IP addresses together to use either for twinax subnets or for remote LANs attached to the AS/400 host. The transparency part comes into play when Proxy ARP is enabled, which happens automatically when the hosts on the network share the same network ID. In effect, the subnetting within your network is transparent because a router or gateway is not required to join the subnets. Configuring Twinax IBM Network Station with DHCP 353 Figure 307 shows an example of a network that is using transparent subnet masking and Proxy ARP. Figure 307. Transparent Subnetting Example Figure 307 shows that all the networks and hosts are on the same TCP/IP network ID, 10.1.x.x. The figure is somewhat simplistic, but it shows the concept that, even though the three remote networks are on subnets different from the main ring, each host is the Proxy ARP agent for the subnet beneath it. 15.2.1 ARP and Proxy ARP IP addresses only make sense to the TCP/IP protocol suite. LAN addresses (for example, Ethernet or token ring) are used when an Ethernet frame is sent from one host in the LAN to another. RFC 826 deals with Address Resolution Protocol. The twinax subnet requires a contiguous range of TCP/IP addresses to be assigned to it. You cannot use any address at random from the pool and dynamically allocate an address to a device on the twinax subnet. We recommend, therefore, that you assign the maximum amount of TCP/IP addresses, which is 64, to the twinax subnet if you can. If you assign up to 64 addresses to the twinax subnet, you can easily add additional IP over twinax devices without having to change or shuffle any IP addressing schemes within your network. The limit of 64 devices is imposed by the workstation controller. Note 10.1.2.x 255.255.255.0 10.1.x.x 255.255.0.0 10.1.1.x 255.255.255.0 10.1.3.x 255.255.255.0 Transparent Sub-netting 354 AS/400 TCP/IP DNS and DHCP Support Its purpose is to present a method for converting protocol addresses (IP Addresses) to LAN addresses (for example, Ethernet or Token Ring). Figure 308 shows ARP (address resolution protocol) and RARP (reverse address resolution protocol). Figure 308. Mapping between 32-Bit IP Address and 48-Bit Ethernet Address Figure 309 shows how the ARP cache is built in each host. 1. The TCP/IP protocol on HOST A decides that it wants to transmit to target HOST B at IP address IP(B). 2. The sending host, HOST A, must convert the 32-bit IP address into the 48-bit Ethernet address (assuming Ethernet LAN). This is the function of ARP. 3. ARP broadcasts an ARP request to all the hosts in the network containing HOST B IP address, IP(B), and asking whomever is HOST B to respond with the hardware address MAC(B). 4. The HOST B recognizes that IP(B) is a local interface and sends back its hardware address (Ethernet, for example), MAC(B), to HOST A in an ARP reply. 5. Now HOST A knows HOST B’s hardware address and sends the IP datagram to it. Figure 309. Building the ARP Cache To make the operation of ARP more efficient, each hosts maintains an ARP cache with the most recent mappings from IP addresses to hardware addresses. 
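The five steps above can be condensed into a toy simulation. This is purely illustrative; nothing is sent on a wire, the dictionary names are invented for the sketch, and the sample mapping is the one shown in Figures 308 and 309.

    # Toy model of steps 1 through 5: resolve an IP address to a MAC address, then cache it.
    interfaces = {"10.5.69.212": "8:0:20:3:f6:42"}   # HOST B's local interface table
    arp_cache = {}                                   # HOST A's ARP cache, initially empty

    def arp_request(target_ip):
        """Steps 3 and 4: broadcast 'who has target_ip?' and collect the owner's reply."""
        return interfaces.get(target_ip)             # only the owner answers; None otherwise

    def resolve(target_ip):
        """Steps 1, 2, and 5: consult the cache, fall back to ARP, remember the answer."""
        if target_ip not in arp_cache:
            mac = arp_request(target_ip)
            if mac is None:
                raise LookupError("no ARP reply for " + target_ip)
            arp_cache[target_ip] = mac
        return arp_cache[target_ip]

    print(resolve("10.5.69.212"))   # 8:0:20:3:f6:42; later sends skip the broadcast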
If hosts A and B are on different networks, HOST B does not receive the ARP broadcast request from HOST A and cannot respond to it. However, if both physical networks are connected by a gateway, the gateway sees the ARP request from HOST A. The gateway also knows, based on the subnet number, that the request is for a host on a different physical network (assuming that subnet numbers are made to correspond to physical networks). The gateway then acts as an agent for HOST B, responding to the ARP request from HOST A on behalf of HOST B with the gateway's hardware address. HOST A sees the reply, caches it, and sends future IP packets for HOST B to the gateway. The gateway is acting as an agent for HOST B. This technique is called Proxy ARP. Proxy ARP is discussed in RFC 1027. Figure 310 illustrates this concept.

Figure 310. Proxy ARP

Provided that all hosts and devices are on the same network ID, Proxy ARP permits the AS/400 system to join subnets in a fashion similar to the way a router forwards packets from one subnet to another. This scenario implements the class A network ID of 10.0.0.0. While some hosts are on different subnets of the network ID, they are all part of the network 10.0.0.0. Proxy ARP is enabled automatically on the AS/400 system when hosts share a common network ID.

Proxy ARP is useful with twinax-attached devices running TCP/IP because it allows the twinax device to appear as part of the local network. Figure 311 on page 356 shows a scenario where the twinax devices are on subnet 10.1.1.192 with the mask of 255.255.255.192. The first hop, or gateway, off their subnet is the workstation controller 10.1.1.193, which has the same mask. The AS/400 host (AS2) is attached to the LAN 10.1.1.0 with a different mask of 255.255.255.0 through a token-ring interface.

The AS/400 system is aware of two local networks, the twinax subnet and the IP LAN. It is the mask setting on the twinax interface that determines the block of addresses for which the IP LAN interface on the AS/400 system needs to Proxy ARP. In this example, the IP LAN interface on the AS/400 system proxies for addresses 10.1.1.192 through 10.1.1.255. The associated local interface specified on the twinax TCP/IP interface tells the TCP/IP stack which LAN IP interface is doing the proxying for the twinax subnet. Figure 311 provides an overview of the AS/400 system implementation of Proxy ARP to support twinax-attached IBM Network Stations.

Figure 311. Using Proxy ARP to Support Twinax IBM Network Station -- AS/400 System Implementation

A remote, LAN-attached host with a packet to send to one of the twinax-attached devices knows the twinax device IP address, but the IP stack on the host does not know the MAC address of the twinax device. This address is essential to completing the datagram and placing it on the network. The source system sends out a broadcast ARP request containing the IP address of the target but not the MAC address.
The target AS/400 system (that is, the AS/400 system with the attached twinax devices, AS2) intercepts the ARP broadcast because it knows that the IP address in the ARP packet falls within the range of addresses of the twinax subnet. The AS/400 associated local interface places its own MAC address into the ARP reply. From this point on (or until the ARP cache expires on the remote host), all traffic to the devices on the twinax subnet is sent to the MAC address of the AS/400 associated local interface. The AS/400 system forwards the packet to the twinax device. For outbound traffic from the twinax subnet to a remote host, the twinax workstations forward datagrams to their gateway (the workstation controller), and the gateway passes them on to the AS/400 system. The AS/400 system uses simple IP routing to determine on which interface the datagram belongs. 15.2.2 Twinax Transparent Subnetting The twinax subnet requires a contiguous range of TCP/IP address to be defined and allocated to it. Figure 312 is useful in determining which mask to apply and what range or contiguous groups of addresses you can use. Other Subnets As2.mycompany.com .129 *WSC Twinax subnet Subnet Address: 10.1.1.192 Mask: 255.255.255.192 10.1.1.0 .193 mask 255.255.255.0 Gateway Proxy ARP Gateway Display TCP/IP Interface System: AS2 Internet address . . . . . . . . . . . . . . . : 10.1.1.193 Subnet mask . . . . . . . . . . . . . . . . . : 255.255.255.192 Line description . . . . . . . . . . . . . . . : QTDL806100 Line type . . . . . . . . . . . . . . . . . . : *TDLC Associated local interface . . . . . . . . . . : 10.1.1.129 Interface status . . . . . . . . . . . . . . . : Active Type of service . . . . . . . . . . . . . . . : *NORMAL Maximum transmission unit . . . . . . . . . . : *LIND Automatic start . . . . . . . . . . . . . . . : *YES Configuring Twinax IBM Network Station with DHCP 357 Look at Figure 312 for an example. If you use a mask of 128 in the last octet, you effectively have two address ranges, .1 to .126 and .129 to .254. The subnet boundary addresses .127 and .128 cannot be used. The same applies for a mask of .240. This mask gives you 16 groups of 16 (-2) contiguous addresses. Refer to Figure 312 again. The boundary addresses cannot be used. Figure 312. Subnet Mask Boundaries and Address Ranges Note: • A Host ID of all 0s is a special case and cannot be assigned. • A Host ID of all 1s is a special case (broadcast) and cannot be assigned. • Subnets with mask 252 are used primarily as point-to-point networks (there are only two usable host IDs). • A mask of 254 is not valid and is unacceptable. • You can use subnets with a mask of 255 to map a single IP address to an unnumbered point-to-point network. The next example uses a class C TCP/IP address that has been divided into four different groups or address ranges. Assign three subnet groups for three different TCP/IP over twinax networks and leave a large range of addresses available for the rest of the network. 0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128 136 144 152 160 168 176 184 192 200 208 216 224 232 240 248 255 4 12 20 28 36 44 52 60 68 76 84 92 100 108 116 124 132 140 148 156 164 172 180 188 196 204 212 220 228 236 244 252 .128 .192 .224 .240 .248 .0 24 25 26 27 28 29 Mapping Subnet Mask Settings to Host Address Ranges Number of contiguous bits: Mask Settings: This example does not follow the recommendation of allowing a maximum contiguous range of 64 TCP/IP addresses allocated to the twinax subnet. 
It is intended to provide an example of transparent subnetting. Unless you are forced by your IP addressing scheme to use an example such as this, allocate the maximum number of IP addresses to the twinax subnet. This recommendation is made for future proofing rather than functionality. Note

Figure 313. Transparent Subnetting Twinax Scenario with Class C TCP/IP Address

Figure 314. Transparent Subnetting Class C Address Example

The Local LAN has a network address of 192.168.1.0 and a mask of 255.255.255.0. This mask gives you the entire range of addresses to use in the last byte or octet of the address. However, within the DHCP configuration, you are required to break up the range using masks. You are also required to build a subnet group that ends the range of usable IP addresses at 190. The range of 193 to 254 is reserved for different subnets.

The next group of addresses, Twx1, has a subnet address of 192.168.1.192 and a mask of 255.255.255.224. The mask gives you eight blocks of 32 contiguous addresses, of which you use only the block containing the range of addresses from 193 through 222. It is the subnet address 192.168.1.192 that tells you to start at the subnet boundary of 192. Only the range of addresses from 192.168.1.193 to 192.168.1.222 is specified in the DHCP address pool.

The third group, Twx2, has a subnet address of 192.168.1.224 and a mask of 255.255.255.240. This mask gives you 16 blocks of 16 contiguous IP addresses, of which you use only the block containing the range of addresses from 225 through 238. It is this range that is specified in the DHCP address pool. The subnet address of 192.168.1.224 tells you that the first address is 192.168.1.225.

The last group, Twx3, has a subnet address of 192.168.1.240 and a mask of 255.255.255.240. This mask gives you 16 blocks of 16 contiguous IP addresses, of which you use only the block containing the range of addresses from 241 through 254. It is the network address of 192.168.1.240 that tells you to start at address 241. The valid range of addresses is from 241 through 254. Use all of this range in the DHCP address pool.
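The mask arithmetic in this class C example is easy to check mechanically. A small sketch, assuming Python's ipaddress module; the group names Twx1 through Twx3 are the ones used above:

    import ipaddress

    groups = {
        "Twx1": ipaddress.ip_network("192.168.1.192/27"),   # mask 255.255.255.224
        "Twx2": ipaddress.ip_network("192.168.1.224/28"),   # mask 255.255.255.240
        "Twx3": ipaddress.ip_network("192.168.1.240/28"),   # mask 255.255.255.240
    }

    for name, net in groups.items():
        hosts = list(net.hosts())        # drops the subnet and broadcast boundaries
        print(name, hosts[0], "-", hosts[-1], len(hosts), "usable addresses")

    # Twx1 192.168.1.193 - 192.168.1.222 30 usable addresses
    # Twx2 192.168.1.225 - 192.168.1.238 14 usable addresses
    # Twx3 192.168.1.241 - 192.168.1.254 14 usable addresses

These are exactly the ranges that go into the three DHCP address pools; the boundary addresses never appear because hosts() excludes them automatically.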
15.3 Configuring Twinax IBM Network Station with Local DHCP Server

This scenario attaches the IBM Network Station to a local workstation controller on the AS/400 system. The local workstation controller is CTL01 (or QCTL), the same controller that supports the system console. Use DHCP to configure the workstation controller with an IP address and to serve the IBM Network Stations with network start-up information. You are using a network addressing scheme that enables Proxy ARP automatically. It also allows the IBM Network Station to see and be seen across the network. Figure 315 on page 359 shows a high-level view of the topology used in this scenario.

Figure 315. Twinaxial-Attached IBM Network Stations Running TCP/IP over Twinax

The twinax-attached IBM Network Stations are connected to AS2, which is the backup DHCP server in this scenario. The IBM Network Stations receive start-up information from the AS2 system. They are on their own subnet, which has the same TCP/IP network address as subnet 1. Once the IBM Network Stations have been started, demonstrate that Proxy ARP is working by pinging the IBM Network Stations from a remote host on the same network.

15.3.1 Scenario Objectives
This scenario has the following objectives:
1. Configure the DHCP server AS2 to support the locally attached twinax IBM Network Stations.
2. Set up and start the twinax-attached IBM Network Stations.
3. Ensure LAN connectivity across the network.
This scenario also explains how Proxy ARP works to make the IBM Network Stations visible on a subnet to which they are not directly attached.

15.3.2 Scenario Advantages
The advantages of this scenario are as follows:
• The ease with which you connect twinax-attached IBM Network Stations to an existing network.
• The simplicity of configuring DHCP to support the twinax-attached IBM Network Stations.
• The automatic routing of datagrams from the twinax subnet to the attached LAN and vice versa when using Proxy ARP.

15.3.3 Scenario Disadvantages
The following disadvantage applies to this scenario:
• You might need to understand underlying concepts such as subnetting and Proxy ARP if your network has a somewhat restricted addressing scheme.

15.3.4 Scenario Network Configuration
Figure 316 shows the network configuration for this scenario.

Figure 316. Scenario Network Topology with an IP over Twinax Subnet

The twinax subnet address 10.1.1.192 with the mask of 255.255.255.192 is a subset of the network 10.1.1.x. It lets you use 64 TCP/IP addresses on the twinax subnet. This is also the maximum number of IP addresses that you can allocate to the twinax subnet because it is the maximum number of devices that the workstation controller supports.

The following characteristics influence this scenario:
• A subnet was carved out of the address space 10.1.1.0, and a mask was applied to reduce the number of valid TCP/IP addresses to 64.
• The DHCP server, AS2, is required to service the twinax subnet.
• AS2, the backup DHCP server for the network, is the primary DHCP server for the twinax subnet.
• AS2 is the only DHCP server for the twinax subnet. If AS2 fails, the twinax subnet loses connectivity to the host because the host, AS2, powers the twinax network.
• The IP address range of 10.1.1.1 through 10.1.1.254 is divided in half between AS1, the primary DHCP server, and AS2, the backup DHCP server.
• AS2, the backup DHCP server, administers IP addresses in the range from 10.1.1.128 through 10.1.1.254.
• The twinax subnet is serviced from the range of addresses from 10.1.1.192 through 10.1.1.254.
• The twinax network must be able to see out onto the LAN.
• The LAN-attached devices and hosts must be able to communicate with the twinax subnet.

15.4 Task Summary
These setup tasks assume that you are installing twinax-attached IBM Network Stations in an existing network similar to the example in Figure 316 on page 361.
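The addressing characteristics in 15.3.4 fix the arithmetic this scenario relies on: AS2's pool of 10.1.1.128 through 10.1.1.254 is split so that its top 64 addresses form the twinax subnet (Section 15.4.2 walks through this in detail). A small sketch of that split, assuming Python's ipaddress module:

    import ipaddress

    as2_pool = ipaddress.ip_network("10.1.1.128/25")          # AS2 administers .128 through .254
    lan_half, twinax_half = as2_pool.subnets(new_prefix=26)   # apply mask 255.255.255.192

    print(lan_half)      # 10.1.1.128/26 -> .128 through .191 stay in the backup LAN pool
    print(twinax_half)   # 10.1.1.192/26 -> .192 through .254 become the twinax subnet

    as1_pool = ipaddress.ip_network("10.1.1.0/25")            # AS1's half: .1 through .127
    print(as1_pool.overlaps(twinax_half))                     # False - no duplicate addresses

Only the 10.1.1.192/26 half is flagged as a twinax subnet in the DHCP configuration; the other half keeps serving ordinary LAN clients.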
This scenario uses the backup DHCP server on the network as the primary DHCP server for the twinax-attached IBM Network Stations. This is essentially any DHCP server that has locally attached twinax IBM Network Stations. The point to remember is that, depending on the TCP/IP addressing scheme, you must ensure that the address pool for the twinax subnet is not duplicated in another pool on another DHCP server within the network. Operations Navigator DHCP server configuration does not allow you to create duplicate IP addresses in two subnets of the same DHCP server. From the range of addresses from 10.1.1.128 through 10.1.1.254 (the address range administered by AS2), you must carve another subnet for the twinax devices that starts at 10.1.1.191 and ends at 101.1.254. The tasks required to complete this scenario are as follows: 1. Plan the TCP/IP addressing scheme. 2. Carve out 64 IP addresses from the address pool of 10.1.1.128 through 10.1.1.254 to use for the twinax subnet on AS2. 3. Configure the DHCP server for twinax support. 4. Configure and start the IBM Network Station. 5. Test connectivity. 15.4.1 Plan the TCP/IP Addressing Scheme In a TCP/IP network with multiple subnets and TCP/IP address ranges, it is imperative to pay careful attention to the addressing scheme. This topic shows you in detail the addressing scheme to use for the twinax subnet in this scenario. Use a Class A type of IP address from the Internet Assigned Numbers Authority (IANA) on your internal network. This type cannot be routed through the Internet, yet still provides you with good growth potential for the future. From the existing LAN IP address space, carve a contiguous range of 64 IP addresses. These are for the IBM Network Stations to use on the twinax subnet. Use the last 64 addresses of the range from 10.1.1.128 through 10.1.1.254. Use 10.1.1.192 as the network address for the twinax subnet. Apply a mask of 255.255.255.192, which gives you the maximum allowed TCP/IP address range (64) that you can use on a twinax subnet. The usable host IP address range is from 0.0.0.1 through 0.0.0.63. Write the full TCP/IP address range as 10.1.1.193 through 10.1.1.254. Configuring Twinax IBM Network Station with DHCP 363 Since this address range is a subset of the main network (10.1.1.x), Proxy ARP is enabled automatically. 15.4.2 Carve out 64 Addresses from the Administered Address Pool The back-up DHCP server, AS2, services the twinax-attached IBM Network Stations with IP addresses in the address that this server administers (10.1.1.128 through 10.1.1.254). The range of addresses that you choose for the twinax subnet is extremely important. If you have the ability in your network to allocate 64 IP addresses to the twinax subnet, you must do so and forget about the IP addresses that are not used. It becomes extremely difficult in networks other than class A to reallocate and shift addressing schemes around simply to gain another IP address to install another IP over twinax device. Note 364 AS/400 TCP/IP DNS and DHCP Support The twinax subnet addresses that you use must be and are a subset of the address space 10.1.1.0. This means that you must divide the backup address pool (the range of addresses from 10.1.1.128 through 10.1.1.254 that exists on the backup DHCP server, AS2) into two ranges. One range is for the twinax subnet. Use the remaining addresses to service the main LAN. Split the address space from 10.1.1.128 through 10.1.1.254 in half to give two ranges of 64 addresses each. 
Accomplish this by applying a mask of 255.255.255.192 to the subnet ID 10.1.1.128 in the DHCP configuration. This creates the range of addresses from 10.1.1.128 through 10.1.1.191 as the first half of the pool. You must define a subnet within the server configuration where, when the mask is applied to the subnet address, the server’s LAN interface falls into the valid range of the subnet. In other words, there must be a subnet in the configuration file where, if the entire range is used, the server interface is in that range. The server does not need to administer the entire range. You must exclude the server's IP address from the administered range to prevent the server from giving away its own address. For example, Subnet 10.1.1.128 with the mask 255.255.255.128 has a valid range of addresses from 10.1.1.128 through 10.1.1.254. Assume that the administered range of this subnet is from 10.1.1.128 through 10.1.1.200. For the server to hand out addresses to locally attached clients, the server’s LAN IP address must fall within the valid range of the subnet (that is, from 10.1.1.128 through 10.1.1.254). This means that a server with an IP address of 10.1.1.205 works, but a server IP address of 10.1.1.3 does not because it is not in the subnet range from 10.1.1.128 through 10.1.1.254. If the server’s IP address is 10.1.1.3 and a DHCP discovery arrives on that interface, the DHCP server states that there are no addresses available for that subnet. There is a method to group the valid subnet addresses to administer together with a subnet that encompasses the server’s IP address. Defining a subnet group allows the server to pull addresses out of the other subnet that has the real range. For example, Subnet 10.1.1.128 with a mask of 255.255.255.128 has an administered range from 10.1.1.128 through 10.1.1.200. The server’s IP address is 10.1.1.3, which is outside the entire subnet range. Define a second subnet in the server that encompasses the server’s IP address, such as 10.1.1.0 with the mask 255.255.255.248. This creates the entire valid range from 10.1.1.1 through 10.1.1.6, but you must change the administered range to 10.1.1.3 through 10.1.1.3 and then exclude 10.1.1.3 from the pool. Group both the 10.1.1.128 subnet and the 10.1.1.0 subnet together to form one subnet group. This allows you to administer the addresses from 10.1.1.128 through 10.1.1.200 and to keep your server’s address at 10.1.1.3. Do not administer any subnet addresses in the range from 10.1.1.1 through 10.1.1.7. Note Configuring Twinax IBM Network Station with DHCP 365 The second half of the address space now starts at 10.1.1.192. Applying the mask 255.255.255.192 in the DHCP configuration tells the server to use the next 64 addresses. This means that the range is now from 10.1.1.192 through 10.1.1.254. This is the twinax subnet. Once you have divided the address space from 10.1.1.128 through 10.1.1.254 into two ranges, refer to Figure 317 on page 365 for a visual representation. Figure 317. Applying Subnet Masks to Split Address Range 10.1.1.128 through 10.1.1.191 Define the first pool in the DHCP server configuration. Refer to Figure 318 for an example of the DHCP configuration of the backup LAN range of addresses from 10.1.1.128 through 10.1.1.190. Figure 318. 
DHCP Configuration -- Dividing the 10.1.1.128 Address Pool with a Mask .128 .192 .224 .240 .248 .0 Mask Settings: 0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128 136 144 152 160 168 176 184 192 200 208 216 224 232 240 248 255 4 12 20 28 36 44 52 60 68 76 84 92 100 108 116 124 132 140 148 156 164 172 180 188 196 204 212 220 228 236 244 252 Mapping Subnet Mask Settings to Host Address Ranges 24 25 26 27 28 29 Number of contiguous bits: Backup LAN range Twinax Subnet range 366 AS/400 TCP/IP DNS and DHCP Support For this group, select the Options tab and configure DHCP option 1 to pass along the real mask to use on this network, which is 255.255.255.0. Configure any other additional options that clients on the main network require. 15.4.3 Configure the DHCP Server AS2 for Twinax Support Operations Navigator DHCP configuration is twinax-aware. Provided you have planned which IP addresses to use on the twinax subnet, the configuration is extremely straightforward. On the backup DHCP server (AS2), which has the IBM Network Stations attached by way of twinax, follow these steps to configure DHCP support for TCP/IP over twinax: 1. Start the AS/400 Operations Navigator. 2. Click As2.mycompany.com to select the system. Figure 319. AS/400 Operations Navigator -- Selecting the System to Configure DHCP Server 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. This displays the DHCP server configuration window, as shown in Figure 320. Configuring Twinax IBM Network Station with DHCP 367 Figure 320. DHCP Server Configuration 7. From the DHCP Server Configuration window, ensure Global is highlighted. Select File>New>Subnet-Advanced, as shown in Figure 320. 8. Ensure the General tab is selected. Specify TwinaxSubnet10.1.1.192 in the field labeled Name. Specify a network ID to make it easier to distinguish the twinax subnet from the other subnets. 9. Check Twinax subnet to enable it. 10.Specify the IP address of the workstation controller in the field Controller’s IP address. Use the first usable address of 10.1.1.193. 11.Specify a short description in the field labeled Description, as shown in Figure 321. Figure 321. DHCP Server Configuration Twinax Subnet 12.Click the Address Pool tab. 368 AS/400 TCP/IP DNS and DHCP Support You see in Figure 322 on page 368 that the DHCP configuration dialog has already calculated the correct IP address range. The dialog calculated this range based upon the network ID and the IP address that you have used for the twinax workstation controller. You can change the subnet mask on this dialog and have the DHCP configuration GUI calculate the values for you. Remember, though, that the maximum number of address that can be allocated to the twinax subnet is 64. Figure 322. DHCP Twinax Address Pool Range 13.Click the Leases tab. 14.Set the lease time to Never expire for the twinax subnet. 15.Click the Options tab. You need to add the following options for the DHCP server to serve to the twinax-attached network stations: Option Value 1 Subnet Mask 255.255.255.192 3 Router 10.1.1.193 (the WSC is the first hop-attached device) 66 Server name 10.1.1.193 67 Boot file name /QIBM/ProdData/NetworkStation/kernel Note: You should not need to add option 67; it is included in the twinax IBM Network Station class, IBMNS 3.4.1. 1. To add the options, highlight the option number from the window on the left labeled Available options. You do not need to exclude the workstation controller IP address from the range. It is excluded automatically. 
Tip Configuring Twinax IBM Network Station with DHCP 369 2. Click Add. 3. Fill in the value for each option in the window at the bottom of the display (see Figure 323 on page 369). Figure 323. Twinax-Attached DHCP Options Configuration 16.Click OK. 17.Close the DHCP configuration window. If the DHCP server is running, you are asked to save the changes you made. Click Yes. If the DHCP server is not running, the configuration GUI closes. 18.Start the DHCP server. 15.4.4 Configure and Start the IBM Network Station The twinaxial-attached network stations are different from the normal token ring or Ethernet stations. The NVRAM options are presented differently, although functionality remains the same. This section does not go into details about these differences. Any new features of the IBM Network Station fall outside the scope of this book. It is assumed that you have cabled the twinax IBM Network Station correctly. When the twinax IBM Network Station is first powered on, it prompts you to specify the address to use for the port to which it is connected. This is not the TCP/IP address. Rather, it is an address from 0 through 6 to use on the workstation controller port to which the IBM Network Station is connected. Follow these steps to configure the IBM Network Station for use over twinax. 1. Power on the IBM Network Station. The twinax IBM Network Station requires you to specify an address to use on the port to which it is connected. Therefore, if you are replacing non-programmable terminals with the IBM Network Station, you must make a note of the address that the old device was using, such as port 02 and address 05. Otherwise, you must ensure that no other device is configured to use the same address on the same port. Note 370 AS/400 TCP/IP DNS and DHCP Support 2. When prompted to do so, specify the local controller address to use. Press Enter. The IBM Network Station checks to see if anyone else is using that address. If no one else is using that address, the IBM Network Station defaults to startup using DHCP. It continues booting until completion, provided the DHCP server is started. 15.4.4.1 Resetting NVRAM If the IBM Network Station has been used previously and you are not sure what has been entered into the NVRAM, follow these steps to reset the NVRAM to the factory defaults: 1. Power on the IBM Network Station. You see the IBM logo followed by a memory and keyboard check. 2. After seeing the message NS0500 Search for Host System, press the ESC key to stop the startup sequence. If prompted for an administrator password, enter it now. (This is the password an administrator sets using the IBM Network Station Manager program.) 3. Invoke the IBM Network Station Boot Monitor program by pressing the following key sequence: • For 101/102 keyboards: Press and hold Left Shift + Left Alt + Left Ctrl. Press F1. • For 5250/3270 keyboards: Press and hold Left Shift + Left Alt. Press F1. 4. Enter NV at the Boot Monitor prompt (>) to access the NVRAM utility. 5. Enter L to reset the NVRAM. 6. Enter S to save the defaults into NVRAM. 7. Enter Y to the question "Are you sure?" 8. Enter Q to quit. 9. Power the IBM Network Station off and then on again. It starts with the factory settings. Alternatively, you can specify the twinax address to use by pressing the ESC key when the message NS0500 Search for Host System appears and selecting option 8, Set Twinax Station Address. 
Once you have set the IBM Network Station to factory defaults and specified the correct twinax address, the IBM Network Station attempts to start using DHCP first. The IBM Network Station starts without further intervention. Configuring Twinax IBM Network Station with DHCP 371 15.4.4.2 The Startup Sequence When the first IP over twinax IBM Network Station starts, OS/400 checks to see if a TCP/IP interface of type *TDLC exists. If not, the workstation controller calls the program QSYS/QTODDTWX to query the DHCP server configuration file (dhcpsd.cfg) for a TCP/IP address and mask to use. The system automatically builds a QTDLxxxxxx line, controller, and device for TCP/IP to associate with and run over. A device type of 5150 is created underneath the workstation controller description. Figure 324 shows the QTDLxxxxxx objects that autoconfiguration creates. Figure 324. QTDLxxxxxx Line, Controller, and Device Descriptions Figure 325 shows the twinaxial data link control line description. Figure 325. QTDLxxxxxx Line Description Any time you configure or reset the IBM Network Station we strongly recommend that you disable the BOOTP protocol on the IBM Network Station. The IBM Network Station defaults to a priority scheme where it sends a DHCPDISCOVER first and, if it does not receive a response, it switches to BOOTP. Sometimes the IBM Network Station times out while the DHCP server is processing its DHCPDISCOVER and the IBM Network Station switches to BOOTP without waiting for the DHCPOFFER from the DHCP server. If this happens, the server assigns a permanent address to it. To disable the BOOTP protocol in the IBM Network Station, from the Set up Utility display, press F5, Set the Network Parameters, select 1 for DHCP and D for BOOTP. Tip Work with Configuration Status System: AS2 Position to . . . . . Starting characters Opt Description Status -------------Job-------------- QTDL806100 ACTIVE QTDL8NET ACTIVE QTDL8TCP ACTIVE QTCPIP QTCP 022134 Display Line Description AS2 03/02/98 17:55:22 Line description . . . . . . . . . : QTDL806100 Option . . . . . . . . . . . . . . : *BASIC Category of line . . . . . . . . . : *TDLC Attached work station ctl . . . . : CTL01 Network controller . . . . . . . . : QTDL8NET Online at IPL . . . . . . . . . . : *NO Text . . . . . . . . . . . . . . . : CREATED BY AUTO-CONFIGURATION 372 AS/400 TCP/IP DNS and DHCP Support The local workstation controller description shown in Figure 326 contains the name of the QTDLxxxxxx line that auto-configuration built. Figure 326. Workstation Controller Description -- CTL01 The device created underneath the workstation controller CTL01 is shown in Figure 327. Figure 327. Device Type 5150 Under CTL01 The system automatically creates a TCP/IP interface for the workstation controller as well. With one exception, this is similar to any other TCP/IP interface that you have configured. The TCP/IP interface for the workstation controller contains a parameter that allows you to utilize Proxy ARP. This parameter is called the Associated Local Interface (*LCLIFC), and its value must be the LAN interface of the AS/400 system where the twinax workstation controller resides. In this case, the value for the associated local interface is 10.1.1.129. To view the associated local interface parameter, perform the following steps: 1. On the AS/400 system’s command line, enter CFGTCP. 2. Select option 1, Work with TCP/IP interfaces. Display Controller Description AS2 03/04/98 08:02:39 Controller description . . . . . . : CTL01 Option . . . . . . . . 
. . . . . . : *BASIC Category of controller . . . . . . : *LWS Controller type . . . . . . . . . : 6050 Controller model . . . . . . . . . : 1 Resource name . . . . . . . . . . : CTL01 TDLC line . . . . . . . . . . . . : QTDL806200 Online at IPL . . . . . . . . . . : *YES Auto-configuration controller . . : *YES Text . . . . . . . . . . . . . . . : CREATED BY AUTO-CONFIGURATION Display Device Description AS2 03/02/98 18:02:45 Device description . . . . . . . . : DSP04 Option . . . . . . . . . . . . . . : *BASIC Category of device . . . . . . . . : *DSP Device class . . . . . . . . . . . : *LCL Device type . . . . . . . . . . . : 5150 Device model . . . . . . . . . . . : 3 Port number . . . . . . . . . . . : 2 Switch setting . . . . . . . . . . : 2 Internet address . . . . . . . . . : 10.1.1.194 Online at IPL . . . . . . . . . . : *YES Attached controller . . . . . . . : CTL01 Keyboard language type . . . . . . : USB Print device . . . . . . . . . . . : *SYSVAL Output queue . . . . . . . . . . . : *DEV Configuring Twinax IBM Network Station with DHCP 373 3. Select option 5, Display, beside the interface that has a line type of *TDLC. Press Enter (see Figure 328). Figure 328. TCP/IP Interface for the Local Workstation Controller 15.4.5 Test Connectivity Now that you have configured the DHCP server for the twinax environment, started a twinax-attached IBM Network Station, and ensured that the associated interface parameter is correct in the TCP/IP interface of type *TDLC, you can test for connectivity across your network. To prove that the IBM Network Station sees out past the local workstation controller, start a 5250 TELNET session to host As5.mycompany.com, which has the TCP/IP address of 10.1.1.4. The real test of Proxy ARP is to ping the twinax-attached IBM Network Station from a remote host. From As5.mycompany.com, send an ICMP echo, or ping, to the address 10.1.1.194 and wait for a reply. 15.4.6 Summary This scenario installed a twinax subnet on the backup DHCP server. The twinax address range that you used is a subset of the address space 10.1.1.x. The backup DHCP server already had a range defined within DHCP that included the address you needed to use for the twinax subnet. This range was broken down into two groups, and a restrictive mask was placed over the range during the DHCP configuration. This allowed you to stop the range at 10.1.1.191. You built a DHCP server configuration for the twinax subnet and powered on the twinax-attached IBM Network Station, which automatically built a TCP/IP interface for the workstation controller and a TDLC line description. Once you started the IBM Network Station, you tested connectivity to the rest of the network by starting a TELNET session to a remote host and sending an ICMP echo (ping) to the network station from a remote host. 15.5 Configuring Twinax Network Station with a Remote DHCP Server This topic demonstrates how to configure and use a remote DHCP server to supply network information to twinax-connected IBM Network Stations. Display TCP/IP Interface System: AS2 Internet address . . . . . . . . .. : 10.1.1.193 Subnet mask . . . . . . . . . . .. : 255.255.255.192 Line description . . . . . . . . . . . . . . : QTDL806100 Line type . . . . . . . . . . . . . . . . . : *TDLC Associated local interface . . . . : 10.1.1.129 Interface status . . . . . . . . . . . . . . : Active Type of service . . . . . . . . . . . . . . : *NORMAL Maximum transmission unit . . . . . . . . . : *LIND Automatic start . . . . . . . . . . . . . . 
: *YES 374 AS/400 TCP/IP DNS and DHCP Support It is not necessary to use the same system as your DHCP server to which the twinax-attached IBM Network Stations are connected. You can utilize another DHCP server in your network. This section does not discuss how to have the locally attached IBM Network Stations load their kernel and terminal configuration settings from a different host. This has already been discussed in Chapter 11.6, “Selecting the Bootstrap Host for the IBM Network Station” on page 252. Refer to this chapter for more information. Load the kernel and terminal configuration settings from the local system. 15.5.1 Scenario Overview In this scenario, there are twinax-attached IBM Network Stations connected to a local system that is not running the DHCP server. The local system is and must be running the BOOTP/DHCP Relay Agent. Locally attached IP over twinax devices have their DHCP DISCOVER messages forwarded to a DHCP server that is running on a different system. This is done to obtain a network address and the startup information that is required to boot up. Figure 329. Using Remote DHCP Server to Configure Twinax IBM Network Stations Figure 329 shows the logical network topology that is used in this scenario. The network has been simplified from the previous scenario so the only subnet for which the DHCP server is configured is the subnet 10.1.1.x. There is no backup DHCP server on the network. The twinax IBM Network Stations are attached to Configuring Twinax IBM Network Station with DHCP 375 the BOOTP/DHCP Relay Agent, which forwards all DHCP broadcasts originating from the twinax subnet to the primary DHCP server. 15.5.2 Scenario Objectives This scenario’s objective is: To use one primary DHCP server to supply network information to remote twinax attached devices. This objective also means that you do not have to run a DHCP server agent on every AS/400 system with twinax-attached IBM Network Stations. You can set up a backup DHCP server and have the local BOOTP/DHCP Relay Agent send the DHCP discovers to both DHCP servers. See 14, “Multiple Subnets, DHCP Servers, and Relay Agents” on page 313 for more information on providing a backup DHCP server. 15.5.3 Scenario Advantages The advantage that this scenario provides is that: You need only one DHCP server in your network to support twinax-attached IBM Network Stations. It is not necessary to run a DHCP server on every AS/400 system that has twinax-attached IBM Network Stations. 15.5.4 Scenario Disadvantages A disadvantage of this scenario is that you need to understand underlying concepts such as subnetting and Proxy ARP if your network has a somewhat restricted addressing scheme. 15.5.5 Task Summary In these setup steps, the assumption is made that you have cabled the IBM Network Station correctly and that you have defined a local twinax address. The assumption is also made that the IBM Network Station starts as a DHCP client. The following tasks start from the point when the first IBM Network Station is ready to be powered on: 1. Configure the local AS/400 DHCP configuration file on As2.mycompany.com. 2. Power on and off the IBM Network Station. This builds the TCP/IP interface on the AS/400 system for the workstation controller automatically. 3. Configure and start the BOOTP/DHCP Relay Agent on As2.mycompany.com. 4. Change the DHCP configuration for the pool of addresses from 10.1.1.1 through 10.1.1.254 on As1.mycompany.com. 5. Configure an address pool for the twinax subnet on the remote DHCP server (As1.mycompany.com). 
6. Start the IBM Network Station. 376 AS/400 TCP/IP DNS and DHCP Support 15.5.6 Configure the Local DHCP Configuration File on AS2 You must build a DHCP server configuration file (dhcpsd.cfg) on the system to which the twinax subnet is directly attached. You do not start the DHCP server on this system, but the configuration file must exist. When you power on the first IBM Network Station, the workstation controller calls the program QSYS/QTODDTWX. This program queries the DHCP configuration file for its IP address and mask. Use Operations Navigator to create this file. Please refer to Section 15.4.3, “Configure the DHCP Server AS2 for Twinax Support” on page 366 and follow steps 1 through 12. This scenario uses the same IP addressing scheme that is defined in that section. It is unnecessary to configure the DHCP server on AS2 to provide options for the twinax-attached devices because this DHCP server is not used. Note: Ensure that the DHCP server is in a stopped state once you have completed the configuration. Do not start the DHCP server. To be safe, disable the subnet on this system by right-clicking on the subnet and clicking on Disable. 15.5.7 Power on the IBM Network Station The network station must be started for the AS/400 system to build the TCP/IP interface and line description for the workstation controller. Power on the twinax-attached IBM Network Station. The message NS0510 System 10.1.1.193 contacted is an indication that the system has completed building the TCP/IP interface for the workstation controller. The IBM Network Station sits on this message because a DHCP server cannot respond to this request and should be powered off again at this stage. Use the AS/400 CFGTCP command and specify option 1, Work with TCP/IP interfaces. There is now be an interface with the type of *TDLC. This is the TCP/IP interface of the workstation controller. Once this interface exists, move on to the next step. 15.5.8 Configure and Start BOOTP/DHCP Relay Agent on Local AS/400 System (AS2) After building the DHCP configuration file for the twinax subnet, it is time to turn As2.mycompany.com into a BOOTP/DHCP Relay Agent. In this scenario, you are configuring the AS/400 BOOTP/DHCP Relay Agent to forward DHCP messages directly and without delay from the twinax subnet to the primary DHCP server. If you are curious and want to see what the automatic configuration did, use the AS/400 CFGTCP command and specify option 1, Work with TCP/IP interfaces. There should now be an interface with the type of *TDLC. This is the TCP/IP interface of the workstation controller. Tip Configuring Twinax IBM Network Station with DHCP 377 If another DHCP server exists in your network, you can forward DHCP messages from the twinax subnet to that DHCP server as well. Refer to 14, “Multiple Subnets, DHCP Servers, and Relay Agents” on page 313 for more information. To configure the AS/400 BOOTP/DHCP Relay Agent, perform the following steps: 1. Sign on to the AS/400 system. From a command line, enter the CHGDHCPA MODE(*RELAY) command, and press Enter. This changes the mode of the DHCP server to be a BOOTP/DHCP Relay Agent. 2. From Operations Navigator select As2.mycompany.com>Network>Servers>OS400, and right-click BOOTP/DHCP Relay Agent, as shown in Figure 330. 3. Click Configuration to select it. Figure 330. AS/400 Operations Navigator -- Configuring BOOTP/DHCP Relay Agent 4. The BOOTP/DHCP Relay Agent properties window appears. Click the Start when TCP/IP is started check box to ensure that it is checked. 5. Click Add. 6. 
Use the pull-down option on the Interface address field at the top of the dialog to select the TCP/IP interface from which the BOOTP/DHCP Relay Agent accepts DHCP packets. This is the workstation controller interface that has just been built 10.1.1.193. 7. Specify the IP address of the primary DHCP server to which the DHCP messages from the clients are sent. Use the address on the primary DHCP server 10.1.1.2. Refer to Figure 316 on page 361. Note: Specify the system name if your DNS server resolves IP addresses or if you have configured your host table correctly. 8. Leave the Maximum hops set to the default of 4. 9. Leave the Packet transmission delay at zero. 378 AS/400 TCP/IP DNS and DHCP Support Figure 331. BOOTP/DHCP Relay Agent Configuration 10.Click OK. 11.From Operations Navigator, right-click BOOTP/DHCP Relay Agent to open a context menu. Select Start to start the server. The BOOTP/DHCP Relay Agent now forwards DHCP messages from the workstation controller interface to the primary DHCP server. 15.5.9 Change the DHCP Server Configuration for the Address Pool 10.1.1.x on AS1 The twinax subnet addresses that you use must be and are a subset of the address space 10.1.1.x. Because of this, you must break the pool of addresses from 10.1.1.1 through 10.1.1.254 into two ranges. You must also reduce the pool so that it does not include the addresses from 10.1.1.192 through 10.1.1.254. These addresses are used for the twinax subnet. In this case, you must break up the address range from 10.1.1.1 through 10.1.1.254 into two groups by applying masks to the range within the DHCP configuration. You must then group the two groups back together within the DHCP configuration to form one pool. You also need to use DHCP option 1 to specify and to pass back to the client the correct mask to use on this subnet. The masks that are needed to split the range also reduce the pool so it does not include the twinax subnet addresses. To split the group into two pools and allow the address range to end at 10.1.1.191, apply the mask 255.255.255.128 in the DHCP configuration. This allows two groups of 128 addresses. This is the first group, which starts at 10.1.1.1 and ends at 10.1.1.127. The second group has the mask 255.255.255.192 applied to it, which creates the range of addresses from 10.1.1.128 through 10.1.1.191. Note: You cannot use the subnet boundary addresses. Therefore, you lose three IP addresses from this range. Configuring Twinax IBM Network Station with DHCP 379 Refer to Figure 332 for a visual representation of the address space from 10.1.1.1 through 10.1.1.191. Figure 332. Applying Subnet Masks to Split Address Range 10.1.1.1 through 10.1.1.191 You must define the two pools in the DHCP configuration. Refer to Figure 333 for an example of the DHCP configuration of the first group and to Figure 334 on page 380 for an example of the DHCP configuration of the second group. Figure 333. DHCP Configuration -- Dividing the 10.1.1.1 Address Pool with a Mask, Group #1 .128 .192 .224 .240 .248 .0 Mask Settings: 0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128 136 144 152 160 168 176 184 192 200 208 216 224 232 240 248 255 4 12 20 28 36 44 52 60 68 76 84 92 100 108 116 124 132 140 148 156 164 172 180 188 196 204 212 220 228 236 244 252 Mapping Subnet Mask Settings to Host Address Ranges 24 25 26 27 28 29 Number of contiguous bits: 380 AS/400 TCP/IP DNS and DHCP Support Figure 334. 
DHCP Configuration -- Dividing the 10.1.1.1 Address Pool with a Mask, Group #2 For both of these groups, select the Options tab and configure DHCP option 1 to pass the real mask to use on this network, which is 255.255.255.0. You also must configure any other relevant options that clients on the main network require. The next step is to group the two address ranges together again to form one pool in the DHCP server configuration. To form a subnet group within the DHCP configuration, perform the following steps: 1. From the Operations Navigator DHCP window, right-click Global. 2. Click New Subnet Group to select it. 3. Specify a valid description in the Name field. Blanks are not valid. 4. Highlight the first address group and click Add. Repeat this step for the second group. Note: Step 5 is optional. 5. Click the Address Order tab. Click either In order or Balanced to select the appropriate option. In order is the default. 15.5.10 Configure the Twinax Subnet Address Pool on the Remote DHCP Server You now have to add a TCP/IP address pool on the primary DHCP server to provide network start-up information to the remote twinax clients. To accomplish this, you do not build a special twinax subnet pool (as discussed in Section 15.4.3, “Configure the DHCP Server AS2 for Twinax Support” on page 366). Instead, build a normal IP address pool, as the twinax-attached stations are not attached locally. 1. Open the DHCP configuration window from Operations Navigator. 2. Right-click Global to open a context menu. Select New Subnet -- Advanced. Configuring Twinax IBM Network Station with DHCP 381 3. Click the General tab and name the subnet AS.2RemoteTwinax. In the Description field, specify Remote twinax subnet on As2.mycompany.com. Note: Do not click Twinax Subnet. Leave this box unchecked. Refer to Figure 335. Figure 335. Remote Twinax DHCP Configuration Example 4. Click the Address Pool tab. 5. Click Subnet Address and specify the twinax subnet address as 10.1.1.192. 6. In the Subnet mask field, specify the mask 255.255.255.192. Refer to Figure 336. Figure 336. Remote Twinax IP Address Pool Example 382 AS/400 TCP/IP DNS and DHCP Support 7. Click the Options tab and add the following options to send to the remote twinax-attached client: Option Value 1 Subnet Mask 255.255.255.192 3 Router 10.1.1.193 (the WSC is the first hop for attached devices.) 66 Server name 10.1.1.193 67 Boot file name /QIBM/ProdData/NetworkStation/kernel 8. Click OK. 9. Update or start the DHCP server on As1.mycompany.com. 15.5.11 Start the IBM Network Station Start the IBM Network Station again. It now boots to completion. 15.5.12 Summary This scenario built a DHCP configuration file on the local AS/400 system, AS2, from which the workstation controller obtains the network information. The first IBM Network Station that powers on causes the workstation controller to query the DHCP configuration file. The workstation controller gains network information, and the TCP/IP interface is built automatically. You configured the AS/400 system to which the twinax IBM Network Stations are attached locally as a BOOTP/DHCP Relay Agent. You split the address pool 10.1.1.x on the DHCP server into two parts with restrictive masks and then re-grouped them to form a single pool. You added an IP address pool for the twinax subnet to an existing DHCP server. You also had the BOOTP/DHCP Relay Agent forward DHCP messages from the locally attached twinax subnet to the remote DHCP server. 
Once all of the configuration was complete, you started the IBM Network Station again. This time, the DHCP messages were forwarded by the local BOOTP/DHCP Relay Agent to the remote DHCP server, and the IBM Network Station gained the network start-up information that it needed to boot. 15.6 Configuring Twinax IBM Network Station Using Transparent Subnetting This scenario demonstrates the concepts of transparent subnetting that are described in Section 15.2.2, “Twinax Transparent Subnetting” on page 356. 15.6.1 Scenario Overview This scenario uses three IP-over-twinax subnets. Two of these subnets are attached to one AS/400 system, and the other subnet is attached to a different AS/400 system. Use a remote DHCP server on another system to store and serve the necessary network start-up information for all three twinax subnets. Use a class C IP addressing scheme of 192.168.1.0 and split or group that address space into four contiguous address ranges. The logical network topology is shown in Figure 337 on page 383. Figure 337. Transparent Subnetting and Twinax IBM Network Station Configuration (The figure shows the main LAN 192.168.1.x with mask 255.255.255.0 and the three twinax subnets 192.168.1.192 with mask 255.255.255.224, 192.168.1.224 with mask 255.255.255.240, and 192.168.1.240 with mask 255.255.255.240.) The systems As2.mycompany.com and As5.mycompany.com are both BOOTP/DHCP Relay Agents. They both forward DHCP messages from the attached twinax subnets to As1.mycompany.com, the primary DHCP server. As1.mycompany.com contains the complete TCP/IP address configuration for this network. There is no backup DHCP server in this scenario. 15.6.2 Scenario Objectives The objective of this scenario is to demonstrate how to configure the DHCP server and to use transparent subnetting when a contiguous block of 64 IP addresses is unavailable. 15.6.3 Scenario Advantages This scenario shows how to use transparent subnetting to solve the problem that results when all of the contiguous IP addresses that you need to configure the twinax IBM Network Stations are unavailable. 15.6.4 Scenario Disadvantages This example has the disadvantages associated with using a class C addressing scheme. You can configure only up to 254 hosts on your network, and the scenario cannot allocate the recommended 64 contiguous IP addresses to each twinax subnet. 15.6.5 Task Summary The following list is a high-level view of the tasks required to implement this scenario: 1. Plan the IP address scheme. 2. Configure As2.mycompany.com: build a DHCP configuration file, start and stop the IBM Network Station to complete the automatic setup of the workstation controller, and configure the BOOTP/DHCP Relay Agent. 3. Configure As5.mycompany.com: build a DHCP configuration file, start and stop the IBM Network Station to complete the automatic setup of the workstation controller, and configure the BOOTP/DHCP Relay Agent. 4. Configure the DHCP server on As1.mycompany.com. 15.6.6 Planning the IP Address Scheme The IP address scheme used in this scenario is the same one discussed in Section 15.2, “Transparent Subnet Masking” on page 352. Use the class C network 192.168.1.0 and split the address range of 254 host addresses into four contiguous segments.
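If you want to double-check the arithmetic behind this split, the following sketch uses only the Python standard library to list the blocks used in this scenario and the usable host range inside each. It is an illustration only, not part of the AS/400 configuration; the labels are descriptive, and note that the Local LAN segment is itself represented by two masked groups inside the DHCP configuration, as described later in this section.

```python
from ipaddress import ip_network

# Illustrative sketch only: the blocks carved out of 192.168.1.0 in this
# scenario (labels are descriptive, not configuration keywords).
segments = [
    ("Local LAN group 1", "192.168.1.0/25"),    # mask 255.255.255.128
    ("Local LAN group 2", "192.168.1.128/26"),  # mask 255.255.255.192
    ("Twinax 1 on As2",   "192.168.1.192/27"),  # mask 255.255.255.224
    ("Twinax 2 on As5",   "192.168.1.224/28"),  # mask 255.255.255.240
    ("Twinax 3 on As5",   "192.168.1.240/28"),  # mask 255.255.255.240
]

for label, block in segments:
    net = ip_network(block)
    hosts = list(net.hosts())        # excludes the subnet boundary addresses
    print(f"{label:18} mask {str(net.netmask):16} {hosts[0]} - {hosts[-1]}")

# Expected ranges: .1-.126, .129-.190, .193-.222, .225-.238, .241-.254
```

Each block silently gives up its two boundary addresses, which is why the usable ranges stop short of the mask boundaries.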
Note: This scenario does not allocate the maximum TCP/IP address range of 64 contiguous addresses to each subnet. Therefore, future device and TCP/IP address additions to the twinax subnets are difficult. Allocate the maximum range of 64 IP addresses to each twinax subnet wherever possible. Figure 338. Transparent Subnetting Class C Address Example (The figure shows the 192.168.1.x address space divided into the Local LAN, ending at .191, and the Twinax 1, Twinax 2, and Twinax 3 ranges from .192 through .223, .224 through .239, and .240 through .255.) The Local LAN has a network ID of 192.168.1.x and a mask of 255.255.255.0. This mask gives you the entire range of addresses to use in the last byte, or octet, of the address. However, within the DHCP configuration you are required to break up the range by using masks and to build a subnet group that ends the range of usable IP addresses at 190. The next group of addresses, Twinax 1, has a network ID of 192.168.1.192 and a mask of 255.255.255.224. The mask provides eight blocks of 32 contiguous addresses, of which only the block containing the range from 193 through 222 is used. It is the network ID of 192.168.1.192 that indicates to start at the subnet boundary of 192. Only the range of addresses from 192.168.1.193 through 192.168.1.222 is specified in the DHCP address pool. The third group, Twinax 2, has a network ID of 192.168.1.224 and a mask of 255.255.255.240. This mask provides 16 blocks of 16 contiguous IP addresses, of which only the block containing the range from 225 through 238 is used. It is this range that is specified in the DHCP address pool. The network address of 192.168.1.224 indicates that the first address is 192.168.1.225. The last group, Twinax 3, has the network ID of 192.168.1.240 and a mask of 255.255.255.240. This mask provides 16 blocks of 16 contiguous IP addresses, of which only the block containing the range from 241 through 254 is used. It is the network address of 192.168.1.240 that indicates to start at address 241. The valid range of addresses is from 241 through 254. Use all of this range in the DHCP address pool. The chart in Figure 339 is useful because it shows at a glance where each IP range begins and ends. It also shows which subnet mask is required to isolate a certain contiguous range of addresses. Figure 339. Address Ranges for the 192.168.1.0 Network (The chart maps mask settings of 24 through 29 contiguous bits, that is, masks of .0 through .248 in the last octet, to the host address ranges they isolate within the 192.168.1.0 through .255 address space.) 15.6.7 Configure As2.mycompany.com There are different tasks required to complete the configuration of As2.mycompany.com. 15.6.7.1 Build a DHCP Configuration File As2.mycompany.com is running the BOOTP/DHCP Relay Agent, but first you need to build a DHCP server configuration file. This must be done to complete the automatic setup of the workstation controller TCP/IP interface on the system. You do not need to start the DHCP server, but the DHCP configuration file must exist. Refer to Section 15.4.3, “Configure the DHCP Server AS2 for Twinax Support” on page 366 for detailed instructions. The first configuration dialog is shown in Figure 340 on page 386. Figure 340. Twinax 1 Subnet (192.168.1.192) on AS2 Configuration Display Click the Address Pool tab. Reduce the address range down to 32 contiguous IP addresses by specifying a subnet mask of 255.255.255.224.
Refer to Figure 341. Figure 341. DHCP Server Configuration -- Subnet Mask Setting for the Twinax 1 Address Range Click OK. Note: The mask setting on the twinax interface determines the block of addresses for which the AS/400 system needs to Proxy ARP. In this example, the AS/400 system proxies for addresses 192.168.1.193 through 192.168.1.222. The associated local interface that is specified on the twinax interface (workstation controller TCP/IP interface) tells the IP stack which interface is proxying for the twinax subnet. There is no need to configure any options or lease times on As2.mycompany.com because you do not start the DHCP server. Note: Ensure that the DHCP server is in a stopped state once you have completed the configuration. Do not start the DHCP server. To be safe, disable the subnet on this system by right-clicking the subnet and clicking Disable. If the primary DHCP server fails, however, you can end the BOOTP/DHCP Relay Agent on AS2 and start the DHCP server to service the local subnet. If you are planning to end the BOOTP/DHCP Relay Agent and start the DHCP server in an emergency, we recommend that you complete the entire configuration for the twinax subnet, specifying options, lease times, DNS servers, and kernel load source information. 15.6.7.2 Start and Stop the IBM Network Station Once you have built the DHCP configuration file, start the IBM Network Station. The AS/400 system builds the TCP/IP interface and line description for the workstation controller. • Power on the twinax-attached IBM Network Station. The message NS0510 System 192.168.1.193 contacted on the IBM Network Station is a good indication that the system has completed building the TCP/IP interface for the workstation controller. The IBM Network Station sits on this message, so you can power it off at this time. It does not have enough information to complete the boot process at this stage. Tip: If you are curious and want to see what the automatic configuration did, use the AS/400 CFGTCP command and specify option 1, Work with TCP/IP interfaces. There should now be an interface with the type of *TDLC. This is the TCP/IP interface of the workstation controller. Once the workstation controller interface is auto-configured, move on to the next step. 15.6.7.3 Configure the BOOTP/DHCP Relay Agent Once the system has automatically built the workstation controller TCP/IP interface, configure the BOOTP/DHCP Relay Agent. The relay agent forwards DHCP messages from the local workstation controller interface to the primary DHCP server on the main network. For detailed instructions on configuring and starting a BOOTP/DHCP Relay Agent, refer to Section 15.1.8, “Configure and Start the DHCP Server on AS2” on page 346. Refer to Figure 342 for a configuration example. Figure 342. As2.mycompany.com Relay Configuration Once you have saved the BOOTP/DHCP Relay Agent configuration, ensure that the server is started. 15.6.8 Configure As5.mycompany.com There are different tasks required to complete the configuration of As5.mycompany.com. 15.6.8.1 Build a DHCP Configuration File As5.mycompany.com is running the BOOTP/DHCP Relay Agent, but first you must build a DHCP server configuration file to complete the automatic setup of the workstation controller TCP/IP interface on the system. You do not need to start the DHCP server, but the DHCP configuration file must exist.
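The Proxy ARP behavior described in the note in 15.6.7.1 is determined entirely by the workstation controller interface address and its mask. The short sketch below (illustrative only, standard-library Python) shows how the AS2 interface settings translate into the block of addresses the AS/400 system answers ARP requests for; the same arithmetic applies to the two twinax interfaces you configure on As5.mycompany.com in this section.

```python
from ipaddress import ip_interface

# Workstation controller interface on As2.mycompany.com, as configured above.
wsc = ip_interface("192.168.1.193/255.255.255.224")

block = wsc.network              # 192.168.1.192/27
hosts = list(block.hosts())      # 192.168.1.193 through 192.168.1.222

print(f"WSC interface {wsc.ip}, mask {block.netmask}")
print(f"Proxy ARP is performed for {hosts[0]} through {hosts[-1]}")
```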
Refer to Section 15.4.3, “Configure the DHCP Server AS2 for Twinax Support” on page 366 for detailed instructions. The first configuration dialog is shown in Figure 343 on page 389. Figure 343. Twinax 2 Subnet (192.168.1.224) on AS5 Configuration Click the Address Pool tab. Reduce the address range down to 16 contiguous IP addresses by specifying the subnet mask of 255.255.255.240 (see Figure 344). Figure 344. AS5 DHCP Server Configuration -- Subnet Mask Settings for the Twinax 2 Address Range Click OK. Repeat the same steps for the second twinax subnet on this server (Twinax 3), 192.168.1.240. See Figure 345 and Figure 346 for configuration examples. Figure 345. Twinax 3 Subnet (192.168.1.240) on AS5 Configuration Click the Address Pool tab. Reduce the address range down to 16 contiguous IP addresses by specifying the subnet mask of 255.255.255.240. Refer to Figure 346. Figure 346. DHCP Server Configuration -- Subnet Mask Setting for the Twinax 3 Address Range Click OK. Because the DHCP server is not started, there is no need to configure any options or lease times on As5.mycompany.com for the twinax subnets. Note: Ensure that the DHCP server is in a stopped state once you have completed the configuration. Do not start the DHCP server. To be safe, disable the subnet on this system by right-clicking the subnet and clicking Disable. In the event of a failure on the primary DHCP server, however, you can end the BOOTP/DHCP Relay Agent on AS5 and start the DHCP server to service the local subnet. If you ever want to end the BOOTP/DHCP Relay Agent and start the DHCP server in an emergency, complete the entire configuration for the twinax subnet by specifying options, lease times, DNS servers, and kernel load source information. Note: The mask setting on the twinax interface determines the block of addresses for which the AS/400 system needs to Proxy ARP. In these two examples, the AS/400 system proxies for addresses 192.168.1.225 through 192.168.1.254. The associated local interface that is specified on the twinax interface (workstation controller TCP/IP interface) tells the IP stack which interface is proxying for the twinax subnet. 15.6.8.2 Start and Stop the IBM Network Station Once you have built the DHCP configuration file, start the IBM Network Station. The AS/400 system builds the TCP/IP interface and line description for the workstation controller. Power on the IBM Network Station that is located on the Twinax 2 subnet (192.168.1.224). The message NS0510 System 192.168.1.225 contacted on the IBM Network Station indicates that the system has completed building the TCP/IP interface for the workstation controller. The IBM Network Station sits on this message, so you can power it off again at this time. It does not have enough information to complete the boot process at this stage. Power on the IBM Network Station that is located on the Twinax 3 subnet (192.168.1.240). Once the workstation controller interface is auto-configured, move on to the next step. 15.6.8.3 Configure the BOOTP/DHCP Relay Agent Once the system automatically builds the workstation controller TCP/IP interface, configure the BOOTP/DHCP Relay Agent. The relay agent forwards DHCP messages from the local workstation controller interface to the primary DHCP server on the main network. For detailed instructions on configuring and starting a BOOTP/DHCP Relay Agent, refer to Section 15.5.8, “Configure and Start BOOTP/DHCP Relay Agent on Local AS/400 System (AS2)” on page 376.
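Before looking at the relay agent dialogs, it may help to see why a single remote DHCP server can serve all three twinax subnets. A relay agent stamps each forwarded request with the address of the interface it arrived on (the giaddr field defined in RFC 2131), and the server uses that address to choose an address pool. The sketch below is a simplified illustration of that selection rule, not the AS/400 implementation; the pool names are made up for the example, but the subnets are the ones configured in this scenario.

```python
from ipaddress import ip_address, ip_network

# Address pools defined on the primary DHCP server (As1.mycompany.com).
# The names are illustrative; the subnets match this chapter's scenario.
POOLS = {
    "Twinax 1 on As2": ip_network("192.168.1.192/27"),
    "Twinax 2 on As5": ip_network("192.168.1.224/28"),
    "Twinax 3 on As5": ip_network("192.168.1.240/28"),
}

def select_pool(giaddr: str) -> str:
    """Pick the pool whose subnet contains the relay agent address (giaddr).

    Per RFC 2131, a relayed request carries the address of the interface it
    arrived on; the server uses that address to choose the serving subnet.
    """
    addr = ip_address(giaddr)
    for name, subnet in POOLS.items():
        if addr in subnet:
            return name
    raise LookupError(f"no pool serves relay address {giaddr}")

# A request relayed by the Twinax 2 workstation controller interface on As5:
print(select_pool("192.168.1.225"))   # -> Twinax 2 on As5
```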
Figure 347 shows a BOOTP/DHCP Relay Agent configuration example for the twinax subnet 2 (192.168.1.224). 392 AS/400 TCP/IP DNS and DHCP Support Figure 347. As5.mycompany.com Relay Definition for Twinax 2 (192.168.1.224) Figure 348 shows a BOOTP/DHCP Relay Agent configuration example for the Twinax subnet 3 (192.168.1.240). Figure 348. As5.mycompany.com Relay Configuration for Twinax 3 (192.168.1.240) Once you have saved the BOOTP/DHCP Relay Agent configuration, ensure that the server is started. 15.6.9 Configure the DHCP Server on As1.mycompany.com When you configure the DHCP server As1.mycompany.com to service the entire network (including the three twinax subnets), you need to break up the main address range, 192.168.1.x, into separate pools. Define the address pool or range from which the DHCP clients in the main LAN are serviced. This is the pool that services clients or hosts that are directly connected to the token-ring LAN (excluding the twinax subnets). The best way to demonstrate this is to show you what not to do. Figure 349 on page 393 is an example of what one might expect the configuration to be. Specify a subnet of 192.168.1.0 and a mask of 255.255.255.0, then change the address range to start at 192.168.1.1 and finish at 192.168.1.190. This appears to leave the range from 192.168.1.192 through 192.168.1.254 out of the main pool to be 393 used for the twinax subnets, but this appearance is incorrect. This is because the twinax subnet address range falls within this address space. For example, although twinax subnet 1 (192.168.1.192) is outside of the specified range, it is part of the address space 192.168.1.0 with a mask of 255.255.255.0, as shown in Figure 349 on page 393. The server, which always tries its best to serve an address to the client, gives an IP address from this pool to the client. This occurs even though you have defined the address pools for the twinax subnets. Figure 349. An Example of What Not to Do in this Scenario The correct way to configure the main address pool (192.168.1.1 through 192.168.1.190) is to lie to the DHCP server by breaking down the address range into subnet groups and using more restrictive masks. You must break up the address range from 192.168.1.1 through 192.168.1.254 into two groups (in this case) by applying masks to the range within the DHCP configuration. You then need to group the two groups back together within the DHCP configuration to form one pool. You also need to use DHCP option 1 to specify and pass back to the client the correct mask to use on this subnet. To split the group into two pools, first apply the mask 255.255.255.128. This provides a range of 192.168.1.1 through 192.168.1.126. Next, apply the mask 255.255.255.192 to get the second group range of 192.168.1.129 through 192.168.1.190. Refer to Figure 350 on page 394 for a visual representation of the masking and grouping. Note: You cannot use the subnet boundary addresses. Therefore, you lose three IP addresses from this range. 394 AS/400 TCP/IP DNS and DHCP Support Figure 350. Applying Subnet Masks to Split Address Range 192.168.1.1 through 192.168.1.190 You must define the two pools in the DHCP configuration. Refer to Figure 351 for an example of the DHCP configuration of the first group. Refer to Figure 352 on page 395 for an example of the DHCP configuration of the second group. Figure 351. 
DHCP Configuration -- Dividing the Main LAN Address Pool with a Mask, Group #1 .128 .192 .224 .240 .248 .0 Mask Settings: 0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128 136 144 152 160 168 176 184 192 200 208 216 224 232 240 248 255 4 12 20 28 36 44 52 60 68 76 84 92 100 108 116 124 132 140 148 156 164 172 180 188 196 204 212 220 228 236 244 252 Mapping Subnet Mask Settings to Host Address Ranges 24 25 26 27 28 29 Number of contiguous bits: 395 Figure 352. DHCP Configuration -- Dividing the Main LAN Address Pool with a Mask, Group #2 For both of these groups, you need to select the Options tab and configure DHCP option 1 to pass the real mask to use on this network, which is 255.255.255.0. You also need to configure any other relevant options that clients on the main network require. The next step is to group the two address ranges together again to form one pool in the DHCP server configuration. To form a subnet group within the DHCP configuration, perform the following steps: 1. From the Operations Navigator DHCP configuration, right-click Global to open a context menu. 2. Select New Subnet Group. 3. Specify a valid description in the Name field. Blanks are not valid. 4. Highlight the first address group and click Add. Repeat this step for the second group. Note: Step 5 is optional. 5. Click the Address Order tab and click either In order or Balanced. In order is the default. Figure 353 on page 396 shows the subnet group in the DHCP server configuration. 396 AS/400 TCP/IP DNS and DHCP Support Figure 353. DHCP Server Configuration -- Subnet Group You now need to configure the Twinax subnets on the DHCP server. Because these subnets are not directly attached, you do not build a twinax subnet configuration. Instead, build a normal subnet pool. Figure 354 on page 397 shows the subnet ID and mask configuration that is used for the twinax subnet on As2.mycompany.com. Figure 355 on page 397 shows the subnet ID and mask configuration that is used for the twinax subnet on As5.mycompany.com. Figure 356 on page 398 shows the subnet ID and mask configuration that is used for the twinax subnet on As5.mycompany.com. Note: It is necessary to exclude the IP address of the workstation controller from the twinax subnet pool and to provide the following options: Option Value 1 Subnet Mask Twinax #1 255.255.255.224 Twinax #2 255.255.255.240 Twinax #3 255.255.255.240 3 Router This value is the IP address of the workstation controller Twinax #1 192.168.1.193 Twinax #2 192.168.1.225 Twinax #3 192.168.1.241 66 Server Name Twinax #1 192.168.1.193 Twinax #2 192.168.1.225 Twinax #3 192.168.1.241 67 Boot File name /QIBM/ProdData/NetworkStation/kernel Configuring option 66 in this manner requires running TFTP on all three of the AS400 systems. Therefore, the kernel file must be maintained on all three systems as well. This is better for performance reasons but you can use a single TFTP server for all three twinax subnets as well. Any TFTP server (ANY valid IP address) can be used. Note 397 It is also possible to load the terminal configurations setting for the IBM Network Stations from a central source, such as the DHCP server. Refer to 11.6, “Selecting the Bootstrap Host for the IBM Network Station” on page 252 for more information. Figure 354. DHCP Server Configuration Example for Twinax Subnet #1 on AS2 Figure 355. DHCP Server Configuration Example for Twinax Subnet #2 on AS5 398 AS/400 TCP/IP DNS and DHCP Support Figure 356. 
DHCP Server Configuration Example for Twinax Subnet #3 on AS5 15.6.10 Summary This scenario showed the techniques required to divide your address space into contiguous portions of IP addresses for twinax subnets. It described transparent subnet masking in detail. It also discussed how to split and to group IP address ranges in DHCP to support transparent subnetting and Proxy ARP, which allows connectivity for the twinax devices across your network. © Copyright IBM Corp. 1998 399 Chapter 16. Migrating BOOTP Servers to DHCP Bootstrap Protocol (BOOTP) provides a method for associating workstations with servers. It also provides a method for assigning workstation IP addresses and initial program load (IPL) sources. BOOTP is a TCP/IP protocol that allows a media-less workstation client (for example, an IBM Network Station) to request an IP address and the location of the initial code from a server on the network. PC-based clients, UNIX platforms, and others use the BOOTP protocol to gain an IP address and subnet mask to participate in the network. The BOOTP server listens on the well-known BOOTP server port 67. When a client sends a BOOTP request, it places its MAC address into the packet. The BOOTP server compares this MAC address against a preconfigured BOOTP table that has IP addresses and MAC address mapped together. If the server finds the MAC address of the client in the table, it replies to the client with the IP address and mask to use. To use the BOOTP boot method, you must record the MAC address of all the IBM Network Stations, PCs, and hosts that are using BOOTP. You must then assign each of them an IP address and specify those assignments in a BOOTP table. When you need to change the IP addresses, you can make the changes centrally on the table in the boot server. You do not need to make them individually on each client or host. Refer to IBM Network Station Manager Installation and Use, SC41-0664, for information on installing and configuring the IBM Network Station. 16.1 Considerations Prior to planning your migration from BOOTP to DHCP, consider if you truly need BOOTP support in your network at all. It makes sense to change the BOOT clients in your network to DHCP, discard the BOOTP table, and use a new DHCP configuration that suits your network. BOOTP clients do not lease IP addresses the same as DHCP clients do. Instead, the DHCP server assigns an infinite lease time to the IP address for the client. With DHCP, the client gives up the IP address when the lease time expires, and the server returns it to the pool for the next DHCP client that requests an IP address. This effectively lets the DHCP server support more physical devices than you have IP addresses for, although not all at the same time. It is possible that you have configured BOOTP to serve IBM Network Stations with information regarding the source from which to load its kernel and from which server it downloads the terminal configuration data. You must identify and reconfigure these options on the DHCP server to continue to support those devices with special needs. In the DHCP configuration, you can add options at a You must have *IOSYSCFG special authority to make changes to the BOOTP server. Note 400 AS/400 TCP/IP DNS and DHCP Support global, subnet, class, or client level. This allows you to pass an option to every IBM Network Station that is part of the class, such as IBMNSM2.0.0. This is simpler than the BOOTP configuration that requires you to add a single entry in the BOOTP table for every IBM Network Station. 
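As a reminder of how little BOOTP does compared with DHCP, the sketch below shows the kind of static lookup a BOOTP server performs: a fixed MAC-address-to-IP-address table with no leases and no pools. This is a toy illustration in Python, not the AS/400 implementation; the sample entries and boot file path mirror the existing environment shown later in this chapter (Figure 358 and Figure 359).

```python
# Toy illustration of a BOOTP lookup keyed by the client's MAC address.
# Entries mirror the sample BOOTP table shown later in Figure 358.
BOOTP_TABLE = {
    "00.00.E5.68.37.96": ("NS01.mycompany.com", "10.1.9.11"),
    "00.00.E7.95.35.11": ("NS02.mycompany.com", "10.1.9.12"),
}

def bootp_reply(mac: str, subnet_mask: str = "255.255.255.0"):
    """Return the fixed IP address and boot information for a known MAC."""
    entry = BOOTP_TABLE.get(mac)
    if entry is None:
        return None                      # unknown clients get no reply
    host, ip = entry
    return {"host": host, "ip": ip, "mask": subnet_mask,
            "bootfile": "/QIBM/ProdData/NetworkStation/kernel"}

print(bootp_reply("00.00.E5.68.37.96"))
```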
Operations Navigator detects the presence of a BOOTP table in your system when you start the DHCP server configuration. If the BOOTP table is found, the user is presented with a window to migrate the table to a DHCP server configuration. If you choose to migrate the BOOTP table, a migration program reads each record in the file, bypassing comment records (which start with a # character) and empty or blank records. Actual entry records are parsed apart through their BOOTP tags, and converted to valid DHCP configuration file client and option keyword values. Table 32 shows the mapping of BOOTP tags to DHCP configuration file keywords/options. Table 32. BOOTP Tags to DHCP Configuration Keywords/Options BOOTP tag and Description DHCP Configuration Keyword or Option ht= Hardware Type 1st parm of client keyword ha= Hardware Address 2nd parm of client keyword ip= IP Address 3rd parm of client keyword sa= Boot Server bootStrapServer % hd= Home Directory option 67 % bf= BOOTFILE option 67 Note: If both hd and bf exist, only one option 67 is created, and its value is the hd value with the bf value appended to it (if the hd value did NOT end with /, a / is appended to it prior to appending the bf value). sm= subnet mask option 1 to= time offset option 2 gw= Gateway or Router option 3 ts= Time Server option 4 ns= Name Server option 5 ds= Domain Server option 6 lg= Log Server option 7 cs= Cookie Server option 8 lp= LPR Server option 9 rl= Resource Locator Server option 11 bs= Boot File Size option 13 bt= BOOT_TYPE nothing comparable - ignored ignored Migrating BOOTP Servers to DHCP 401 16.2 Scenario 1: Migrating Existing BOOTP to a New DHCP Configuration The following two methods are available to gain network startup information with OS/400 V4R2: • BOOTP • DHCP You can use the DHCP server even if you currently use the BOOTP server on your AS/400 system. However, the BOOTP server and the DHCP Server cannot be active at the same time. The DHCP server recognizes BOOTP requests and services BOOTP clients if the server is configured to support BOOTP clients. You have the option to migrate from BOOTP to DHCP. This lets you take advantage of the more advanced and dynamic features of DHCP. This section describes how to migrate from the AS/400 BOOTP server to the DHCP server. 16.2.1 Scenario Objectives The objectives of this scenario are to: 1. Show how to migrate BOOTP client data to a new DHCP configuration. 2. Show how to migrate BOOTP client data to an existing DHCP configuration. 16.2.2 Existing Environment An example of an existing environment with IBM Network Stations attached to the BOOTP server is shown in Figure 357 on page 402. To check or change the parameter to support BOOTP clients from the DHCP server configuration display, right-click DHCP Server -- As1.mycompany.com to open a context menu and select Properties. Click the Client Support tab and ensure that BOOTP clients is checked. Note 402 AS/400 TCP/IP DNS and DHCP Support Figure 357. Example of an Existing Network with IBM Network Stations Attached Use the Work with BOOTP Table (WRKBPTBL) command to display the existing BOOTP table entries (see Figure 358). Figure 358. BOOTP Table Entries of an Existing Network Enter 5, Display (see Figure 358), to display the details for the selected BOOTP table entry. BOOTP Server Subnet A As1.mycompany.com 10.1.9.0 .2 255.255.255.0 Work with BOOTP Table System: AS1 Type options, press Enter. 
1=Add 2=Change 4=Remove 5=Display Client Host MAC IP Opt Name Address Address ns04.mycompany.com 00.00.A3.78.56.41 10.1.9.14 ns05.mycompany.com 00.00.A5.88.45.23 10.1.9.15 ns06.mycompany.com 00.00.A7.33.23.12 10.1.9.16 ns07.mycompany.com 00.00.C1.42.51.17 10.1.9.17 5 NS01.mycompany.com 00.00.E5.68.37.96 10.1.9.11 NS02.mycompany.com 00.00.E7.95.35.11 10.1.9.12 NS03.mycampany.com 00.00.E9.73.10.90 10.1.9.13 Bottom F3=Exit F6=Print list F11=Set BOOTP Table Defaults F12=Cancel F17=Top F18=Bottom Migrating BOOTP Servers to DHCP 403 Figure 359. Display BOOTP Table Entry Details The migration of the client configuration data depends on whether you are migrating BOOTP support to a new DHCP configuration or to an existing DHCP server. 16.2.3 Migrating BOOTP to a New DHCP Configuration When you configure DHCP on a system without an existing configuration, Operations Navigator automatically starts the DHCP Configuration Wizard. Operations Navigator also supports the migration of the BOOTP configuration data into the new DHCP configuration. To start the DHCP configuration wizard, perform the following steps: 1. Start Operations Navigator. 2. Click As1.mycompany.com to select the system. 3. Double-click Network. 4. Double-click Server. 5. Double-click OS/400. 6. Double-click DHCP. This starts the DHCP configuration wizard. 7. The DHCP configuration wizard is displayed. If not, it is likely that a DHCP configuration already exists. To start the wizard and replace the existing configuration, see Chapter 11.4.3, “Configure DHCP Server through Operations Navigator” on page 243. 8. Click Next. 9. Select Yes on the question “Do you want to disable the BOOTP server now?” Display BOOTP Table Entry System: AS1 Network device: Client host name . . : NS01.mycompany.com MAC address . . . . . : 00.00.E5.68.37.96 IP address . . . . . : 10.1.9.11 Hardware type . . . . : 6 Network routing: Gateway IP address . : Subnet mask . . . . . : Boot: Type . . . . . . . . : ibmnsm File name . . . . . . : kernel File path . . . . . . : /QIBM/ProdData/NetworkStation Press Enter to continue. F3=Exit F12=Cancel 404 AS/400 TCP/IP DNS and DHCP Support Figure 360. Disabling the BOOTP Server 10.Answer No to the question “Do you want to add a new subnet to the DHCP server?” 11.Click Next. The DHCP Configuration Summary is displayed. Figure 361. New DHCP Configuration Summary 12.Click Finish to display the results of the migration (see Figure 362 on page 405). Migrating BOOTP Servers to DHCP 405 Figure 362. New DHCP Server Configuration after BOOTP Migration 16.2.4 Migrating BOOTP to an Existing DHCP Configuration You can migrate BOOTP configuration data to an AS/400 system that you have already configured to act as a DHCP server but that you have not yet started. The BOOTP server and the DHCP server cannot be active at the same time on the same system. You can disable the BOOTP server at any time after you have done the migration. The DHCP server can serve all the clients that the BOOTP server previously served. The DHCP server provides some additional functions as well. Through Operations Navigator, perform the following the steps: 1. Click As1.mycompany.com to select the system. 2. Double-click Network. 3. Double-click Server. 4. Double-click OS/400. 5. Double- click DHCP. This shows the existing DHCP configuration. 6. Select File. 7. Select Migrate BOOTP. 8. If your system has a BOOTP configuration file, the Migrate BOOTP Configuration Dialog display in Figure 363 on page 406 appears. 
It is possible that you have configured subnets within the DHCP server in which the address range you have specified includes currently active BOOTP clients. If so, the addresses used by the BOOTP clients have the lease time set to infinite or never expire when you migrate from BOOTP to DHCP. This means that the DHCP server does not hand out IP addresses that are in use by BOOTP clients after the migration from BOOTP to DHCP is complete. Note 406 AS/400 TCP/IP DNS and DHCP Support Figure 363. Migrate BOOTP Configuration Dialog 9. Specify the Bootstrap server IP address for the migrated clients to use. 10.Click OK. You see the DHCP Server configuration dialog with the client statements and an infinite lease time added for each client migrated, as shown here: Figure 364. DHCP Server Configuration with Client Statements Added 16.2.5 Summary This scenario demonstrated how to migrate an existing BOOTP table into a new DHCP server configuration. It demonstrated how to migrate a BOOTP configuration into an existing DHCP server. You cannot run both the DHCP server and BOOTP server at the same time. The DHCP configuration was built prior to the migration, but the DHCP server was never activated until the migration was complete. © Copyright IBM Corp. 1998 407 Chapter 17. DHCP Problem Determination This chapter provides as much information as possible about the various types of problems you might have with the IBM AS/400 DHCP support. It also provides some guidelines and suggestions for solving these problems. The problems and suggested solutions are divided into the following categories: • Performing basic troubleshooting • Starting and reading the DHCP logging utility • Starting and decoding the AS/400 communication trace • Resolving DHCP setup and installation problems • Resolving error messages • Resolving DHCP configuration problems • Resolving DHCP client problems 17.1 Performing Basic Troubleshooting Whenever you have a problem with your AS/400 DHCP server, perform the basic troubleshooting solutions that this section provides before you attempt more sophisticated solutions. Doing so can prevent you from creating larger problems with your AS/400 DHCP server. 17.1.1 Program Temporary Fixes (PTFs) Always make sure that you have the latest PTFs installed. Because a code defect can cause the problem you are having with your AS/400 DHCP server, you can save yourself time and aggravation by ensuring you have installed the latest PTFs. 17.2 Starting and Reading the DHCP Logging Utility The DHCP server has a logging feature that is helpful for problem determination. This section describes how to start, stop, and read the DHCP log. 17.2.1 Starting the DHCP Logging Utility To start the DHCP server logging facility, perform the following steps: 1. Start the DHCP Configuration utility window through AS/400 Operations Navigator by double-clicking the DHCP server. 2. From the pull-down menu, select File>Properties. 3. Click the Logging tab. To stop DHCP logging, uncheck the relevant check boxes. 408 AS/400 TCP/IP DNS and DHCP Support Figure 365. The Server Properties -- DHCP Server Logging Configuration in Operations Navigator 4. Ensure that the logging dialog looks the same as Figure 365. Check the Enable logging check box at the top of the dialog. At this stage, do not worry about the Accounting information and the Statistics check boxes. 5. Click OK to return to the DHCP Server Configuration window. 17.2.2 Reading the DHCP Log This section explains how to access the DHCP log. 
It also shows you the cycle and steps of the DHCP protocol as it serves network information to the DHCP clients. To locate the DHCP log, perform the following steps: 1. From the AS/400 Operations Navigator, select File Systems>Root>QIBM>UserData>OS400>DHCP. 2. The log file is called dhcpsd.log. There can be several DHCP logs, depending on the options you set (see Figure 365). The DHCP server automatically closes the log when it is full, appends a sequenced number to it, and opens a fresh log file. You can open the file by using a simple editor, such as Notepad. There are four basic steps that the client and server go through to request and bind a TCP/IP address. These steps are shown in Figure 366 on page 409. DHCP Problem Determination 409 Figure 366. The Four Steps of the DHCP Client/Server Protocol Table 33. Hexadecimal Value, DHCP Option Conversion Table Hex Value DHCP Option Number DHCP Option Description ’01’ 1 Subnet Mask. ’02’ 2 Time off set of the clients subnet in seconds from Coordinated Universal time. ’03’ 3 Router or gateway IP address listed in order of preference. ’04’ 4 IP addresses (in order of preference) of the time servers available to the client. ’05’ 5 IP addresses (in order of preference) of the IEN 116 name servers available to the client. ’06’ 6 IP addresses (in order of preference) of the Domain Name System servers available to the client. ’07’ 7 IP addresses (in order of preference) of the MIT-LCS UDP Log servers available to the client. ’08’ 8 IP addresses (in order of preference) of the Cookie, or quote-of-the-day servers available to the client. ’09’ 9 IP addresses in order of preference of line printer servers. ’0A’ 10 IP addresses (in order of preference) of the Imagen Impress servers available to the client. Step 1 Client sends a DHCPDISCOVER broadcast packet. DHCP Server is listening on port 67 for DHCP/BOOTP requests. Step 2 DHCPOFFER is broadcast to the client. DHCP client is listening on port 68 for an offer. Once the offer arrives the client must request the network options. Step 3 The client broadcasts a DHCPREQUEST. DHCPREQUEST received and address is bound to client. Step 4 DHCPACK is broadcast back to client. Client uses network information to start up. 410 AS/400 TCP/IP DNS and DHCP Support Table 34. Hexadecimal Value, DHCP Option Conversion Table, Continued Hex Value DHCP Option Number DHCP option description ’0B’ 11 IP addresses (in order of preference) of the Resource Location (RLP) servers available to the client. ’0C’ 12 Host name of the client (which may include the local domain name). ’0D’ 13 The length (in 512-octet blocks) of the default boot configuration file for the client. ’0E’ 14 The path name of the merit dump file in which the client's core image is stored if the client crashes. ’0F’ 15 Domain name that the client uses when resolving host names using the Domain Name System. ’10’ 16 IP address of the client's swap server. ’11’ 17 Path that contains the client's root disk. ’12’ 18 The extensions path option allows you to specify a string that can be used to identify a file that is retrievable using Trivial File Transfer Protocol (TFTP). ’13’ 19 Enable or disable forwarding by the client of its IP layer packets. ’14’ 20 Enable or disable forwarding by the client of its IP layer datagrams with non-local source routes. ’15’ 21 IP address-net mask pair used to filter datagrams with non-local source routes. ’16’ 22 Maximum size datagram the client will reassemble. The minimum value is 576. 
’17’ 23 Default time-to-live (TTL) the client uses on outgoing datagrams. ’18’ 24 Timeout used to age Path Maximum Transmission Unit (MTU) values discovered by the mechanism described in RFC 1191. ’19’ 25 Table of MTU sizes to use in Path MTU discover as defined in RFC 1191. The minimum MTU value is 68. ’1A’ 26 Maximum Transmission Unit (MTU) to use on this interface. The minimum MTU value is 68. ’1B’ 27 Client assumes all subnets use the same Maximum Transmission Unit (MTU). A value of disabled means the client assumes some subnets have smaller MTUs. ’1C’ 28 Broadcast address used on the client's subnet. ’1D’ 29 Client performs subnet mask discovery using Internet Control Message Protocol (ICMP). ’1E’ 30 Client responds to subnet mask requests using Internet Control Message Protocol (ICMP). DHCP Problem Determination 411 Table 35. Hexadecimal Value, DHCP Option Conversion Table, Continued Hex Value DHCP option number DHCP option description ’1F’ 31 Client solicits routers using router discovery as defined in RFC 1256. ’20’ 32 Address to which a client transmits router solicitation requests. ’21’ 33 Destination address-router pairs (in order of preference) the client installs in its routing cache. The first address is the destination address; the second address is the router for the destination. ’22’ 34 Client negotiates the use of trailers when using Address Resolution Protocol (ARP). For more information, see RFC 893. ’23’ 35 Timeout for Address Resolution Protocol (ARP) cache entries. ’24’ 36 For an Ethernet interface, client uses IEEE 802.3 Ethernet encapsulation described in RFC 1042 or Ethernet V2 encapsulation described in RFC 894. ’25’ 37 Default time-to-live (TTL) the client uses for sending TCP segments. ’26’ 38 Interval the client waits before sending a keep-alive message on a TCP connection. 0 indicates the client does not send messages unless requested by the application. ’27’ 39 Client sends TCP keep-alive messages that contain an octet of garbage for compatibility with previous implementations. ’28’ 40 The client's Network Information Service (NIS) domain. ’29’ 41 IP addresses (in order of preference) of Network Information Service (NIS) servers available to the client. ’2A’ 42 IP addresses (in order of preference) of Network Time Protocol (NTP) servers available to the client. ’2B’ 43 Vendor specific information. See RFC2132 for more information. ’2C’ 44 IP addresses (in order of preference) of NetBIOS name servers (NBNS) available to the client. ’2D’ 45 IP addresses (in order of preference) of NetBIOS datagram distribution (NBDD) name servers available to the client. ’2E’ 46 Node type used for NetBIOS over TCP/IP configurable clients as described in RFC 1001 and RFC 1002. ’2F’ 47 NetBIOS over TCP/IP scope parameter for the client, as specified in RFC 1001/1002. ’30’ 48 IP addresses (in order of preference) of X Window System font servers available to the client. ’31’ 49 IP addresses (in order of preference) of systems running X Window System Display Manager available to the client. ’32’ 50 Used in a DHCPDISCOVER to allow the client to request an IP address. ’33’ 51 IP address lease time used in the DHCPDISCOVER and DHCPREQUEST packets. 412 AS/400 TCP/IP DNS and DHCP Support Table 36. Hexadecimal Value, DHCP Option Conversion Table, Continued Note: RFC 2132 contains details for these options, which you can find on the Internet at http://ds.internic.net/rfc/rfc2132.txt. 
Hex value DHCP option number DHCP option description ’34’ 52 Option overload used to indicate that the DHCP ’sname’ or ’file’ fields are being used to carry DHCP options. ’35’ 53 DHCP message type (1=Discover,2=Offer, 3=Request, 4=Decline, 5=Ack, 6=Nak, 7=Release and 8=Inform). ’36’ 54 Server Identifier used by the DHCP client to distinguish between lease offers. ’37’ 55 Used by the DHCP client to request values for specified configuration parameters. ’38’ 56 Used to convey an error message to the DHCP client in a DHCPNAK packet. ’39’ 57 Maximum DHCP message size the client will accept. ’3A’ 58 Interval between the time the server assigns an address and the time the client transitions to the renewing state. ’3B’ 59 Interval between the time the server assigns an address and the time the client enters the rebinding state. ’3C’ 60 Vendor class identifier. ’3D’ 61 Client identifier. ’3E’ 62 Netware/IP Domain Name. ’3F’ 63 A general purpose option code used to convey all the NetWare/IP related information except for the NetWare/IP domain name. ’40’ 64 Network Information Service (NIS)+ V3 client domain name. ’41’ 65 IP addresses (in order of preference) of Network Information Service (NIS)+ V3 servers available to the client. ’42’ 66 Trivial File Transfer Protocol (TFTP) server name used when the 'sname' field in the DHCP header has been used for DHCP options. ’43’ 67 Name of the bootfile when the 'file' field in the DHCP header has been used for DHCP options. ’44’ 68 IP addresses (in order of preference) of the mobile IP home agents available to the client. ’45’ 69 IP addresses (in order of preference) of the Simple Mail Transfer Protocol (SMTP) servers available to the client. DHCP Problem Determination 413 Table 37. Hexadecimal Value, DHCP Option Conversion Table, Continued 17.2.3 Finding the Incoming DHCPDISCOVER Data in the Log If you look in the log file of the DHCP server, you see the following information for step 1 in Figure 366 on page 409, the incoming DHCPDISCOVER packet. Use the Notepad find option and search on DHCPDISCOVER. The following steps walk you through the log data: Hex value DHCP option number DHCP option description ’46’ 70 IP addresses (in order of preference) of the Post Office Protocol (POP) servers available to the client. ’47’ 71 IP addresses (in order of preference) of the Network News Transfer Protocol (NNTP) servers available to the client. ’48’ 72 IP addresses (in order of preference) of the World Wide Web (WWW) servers available to the client. ’49’ 73 IP addresses (in order of preference) of the Finger servers available to the client. ’4A’ 74 IP addresses (in order of preference) of the Internet Relay Chat (IRC) servers available to the client. ’4B’ 75 IP addresses (in order of preference) of the StreetTalk servers available to the client. ’4C’ 76 IP addresses (in order of preference) of the StreetTalk Directory Assistance servers available to the client. ’4D’ 77 Specified by the client to indicate to the server what class the client is from. ’4E’ 78 A framework for passing configuration information to hosts using the Service Location Protocol. ’4F’ 79 A scope used by a service agent responding to Service Request messages specified by the Service Location Protocol. ’50’ 80 A naming authority specifying the syntax for schemes used in URLs used by entities with the Service Location Protocol. ’51’ 81 A non-dynamic IP client allows the DHCP server to update the client’s ’A’ (1=true, 0=false). 
In either case, the client also sends its fully qualified domain name in the DHCPREQUEST. 414 AS/400 TCP/IP DNS and DHCP Support Figure 367. DHCP Log Data with a DHCPDISCOVER Request 1. On line #4, you see the size in bytes on the incoming DHCPDISCOVER packet. • The IBM Network Station and OS/2 clients typically send through a packet of 548 bytes. • Windows 95 typically sends through a packet size of around 300 bytes. 2. Line 7, 8, 9, and 10 are the primeOptions that the client needs to know to join the network and boot up. The options in this case are as follows: Option 53 The DHCP message type in this case is a DHCPDISCOVER. Option 57 The maximum DHCP message size that the client is willing to accept. Option 77 The user class (IBMNSM 1.0.0.0 in this case). Option 60 The vendor class identifier. Note: To view the values of the options, you must run and decode an AS/400 communication trace. Refer to Section 17.3, “Starting, Formatting, and Decoding an AS/400 Communication Trace” on page 419, for more information. 3. Line #14 shows the message type, a DHCPDISCOVER. 4. Line #16 gives you the MAC address of the client. The number preceding the MAC address is the physical network layer code, as shown in the following examples: Type 1 Ethernet Type 6 Token ring Type 26 TwinAxial Note: PCs with a twinax card report hardware type 1. #1 16:21:35 : TRACE: .. receiveMailbox: DHCP comm descriptor selected #2 16:21:35 : TRACE: .. receiveMailbox: recvfrom got 548 bytes. #3 16:21:35 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE #4 16:21:35 : TRACE: Size of incoming packet is: 548 #5 16:21:35 : TRACE: .. process_bootrequest: function entered #6 16:21:35 : TRACE: .. process_bootrequest: received packet xid = 627 #7 16:21:35 : INFO: .... primeOptions: Option: 53, length:1 #8 16:21:35 : INFO: .... primeOptions: Option: 57, length:2 #9 16:21:35 : INFO: .... primeOptions: Option: 77, length:12 #10 16:21:35 : INFO: .... primeOptions: Option: 60, length:19 #11 16:21:35 : TRACE: .... identifiableClient: function entered #12 16:21:35 : TRACE: .... identifiableClient: Using htype, hlen and chaddr to id client #13 16:21:35 : TRACE: .... legibleRequest: function entered #14 16:21:35 : TRACE: .... legibleRequest: DHCP msg type DHCPDISCOVER #15 16:21:35 : TRACE: .. process_bootrequest: Request is self-consistent #16 16:21:35 : TRACE: Packet from client 6-0x0000e5683796 was accepted by user exit verification processing. #17 16:21:35 : TRACE: .. reply_generator: function entered #18 16:21:35 : TRACE: .... processDISCOVER: function entered DHCP Problem Determination 415 5. Line #18 shows that the DHCPDISCOVER function has been entered. 17.2.4 Finding and Reading the DHCPOFFER Information in the Log The DHCPOFFER packet is sent by the DHCP server in response to a DHCPDISCOVER packet that arrives on port 67. If the DHCP server has a valid subnet range of IP addresses defined for the network from which the DHCPDISCOVER packet originated, the DHCPOFFER is sent out. Figure 368 on page 416 details the DHCPOFFER being generated in the DHCP log. 416 AS/400 TCP/IP DNS and DHCP Support Figure 368. DHCP Log Data Showing the DHCPOFFER being Generated #1 16:23:56 : TRACE: .......... locateClientRecord: Located client 6-0x0000e5683796 in client records #2 16:21:35 : TRACE: ............ locateConfiguredClient: function entered #3 16:21:35 : TRACE: .............. pr_queryAddr: netaddr = 10.0.0.0 #4 16:21:35 : TRACE: .............. pr_queryAddr: hostaddr = 0.1.1.3 #5 16:21:35 : TRACE: ............ 
locateConfiguredClient: look for client match in this subnet #6 16:21:35 : TRACE: ............ locateConfiguredClient: look for client match in global clients #7 16:21:35 : TRACE: ........ am_queryClient: Client 6-0x0000e5683796 is known to address mapper, status=4 #8 16:21:35 : TRACE: .... processDISCOVER: AM_STATUS_BOUND #9 16:21:35 : WARNING:.... processDISCOVER: DISCOVER from client 6-0x0000e5683796 already bound with 10.1.1.3 #10 16:21:35 : TRACE: ............ pr_check_subnet_movement: Comparing requested ip 10.1.1.3 & subnetmask 255.255.255.0 against subnet 10.1.1.0 #11 16:21:35 : TRACE: ............ isAddressInUse: Function Entered #12 16:21:37 : TRACE: ............ isAddressInUse: IP address 10.1.1.3, not in use. rc=-26758468 #13 16:21:37 : TRACE: ............ locateAddressRecord: function Entered #14 16:21:37 : INFO: .......... am_addressClient: Client 6-0x0000e5683796 suggested 10.1.1.3 is in range #15 16:21:37 : INFO: .......... am_addressClient: Client 6-0x0000e5683796 had 10.1.1.3 mapped previously #16 16:21:37 : TRACE: .......... indexAddressRecord: function Entered #17 16:21:37 : ACTION: .... processDISCOVER: Address 10.1.1.3 has been reserved #18 16:21:37 : TRACE: .......... pr_queryAddr: netaddr = 10.0.0.0 #19 16:21:37 : TRACE: .......... pr_queryAddr: hostaddr = 0.1.1.3 #20 16:21:37 : TRACE: ........ locateConfiguredClient: look for client match in this subnet #21 16:21:37 : TRACE: ........ locateConfiguredClient: look for client match in global clients #22 16:21:37 : TRACE: ........ pr_queryAddr: function entered #23 16:21:37 : TRACE: ........ pr_queryAddr: clue = [0x0a010103], 167837955 #24 16:21:37 : TRACE: ........ pr_queryAddr: netaddr = 10.0.0.0 #25 16:21:37 : TRACE: ........ pr_queryAddr: hostaddr = 0.1.1.3 #26 16:21:37 : TRACE: ........ locateAddressRecord: function Entered #27 16:21:37 : TRACE: .. generate_bootreply: function entered #28 16:21:37 : INFO: .. generate_bootreply: Generating a DHCPOFFER reply #29 16:21:37 : TRACE: .... locateConfiguredClient: function entered #30 16:21:37 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 1. #31 16:21:37 : TRACE: .. transmitMailbox: transmitting to (10.1.1.3 #68) DHCP Problem Determination 417 • Line #1 in the log, locateClientRecord, is an internal function of the DHCP server. The server compares the MAC address of the client to see if it already knows about the client. • Line #7, am_queryClient, has found the address in the mapper records. This client has queried this DHCP server before. • Line #9 tells you that the client has requested the same address it used last time. • Line #10 is the comparison of the requested IP address from the client against the configured subnet pool on the DHCP server. • Line #12, isAddressInUse, checks if the requested address has already been leased to another client. • Lines #14 and #15 tell you that everything is currently satisfactory, that the address is in the subnet pool range, and that the internal address mapper remembers the client had this address last time. • Line #17, processDISCOVER, reserves the address so it can offer it to the client. • Line #28, generate_bootreply, is the generation of the offer. • Line #31, transmitMailbox, sends the offer to the client or relay agent. The IP address and port ID are listed here. 17.2.5 Finding and Reading the DHCPREQUEST and DHCPACK Information The client sends the DHCPREQUEST after it receives a DHCPOFFER, and the client requests the information that the DHCP server supplied. 
The client can also query the DHCP server for additional options. Once the DHCP server receives the DHCPREQUEST, it issues a DHCPACK to tell the client to use the supplied IP address and network options. If there are multiple DHCP servers, it is this request broadcasted back to the selected server that tells the other DHCP servers to release the address they had offered and reserved for the client because they have not been selected. Figure 369 on page 418 shows the log data for the incoming DHCPREQUEST. The DHCPACK is then generated. Once again, the log has been cut to improve clarity. Use the Notepad find option and search on DHCPREQUEST. The following steps walk you through the sequence in the log file: 418 AS/400 TCP/IP DNS and DHCP Support Figure 369. DHCP LOG Data with the Incoming DHCPREQUEST and a DHCPACK being Generated #1 16:23:56 : TRACE: .. receiveMailbox: DHCP comm descriptor selected #2 16:23:56 : TRACE: .. receiveMailbox: recvfrom got 548 bytes. #3 16:23:56 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE #4 16:23:56 : TRACE: Size of incoming packet is: 548 #5 16:23:56 : TRACE: .. process_bootrequest: function entered #6 16:23:56 : TRACE: .. process_bootrequest: received packet xid = 627 #7 16:23:56 : INFO: .... primeOptions: Option: 53, length:1 #8 16:23:56 : INFO: .... primeOptions: Option: 50, length:4 value: 167837955 (0x0a010103) #9 16:23:56 : INFO: .... primeOptions: Option: 54, length:4 value: 167837953 (0x0a010101) #10 16:23:56 : INFO: .... primeOptions: Option: 57, length:2 #11 16:23:56 : INFO: .... primeOptions: Option: Parameter Request List, length:12 #12 16:23:56 : INFO: .... primeOptions: Option 66 requested #13 16:23:56 : INFO: .... primeOptions: Option 67 requested #14 16:23:56 : INFO: .... primeOptions: Option 3 requested #15 16:23:56 : INFO: .... primeOptions: Option 6 requested #16 16:23:56 : INFO: .... primeOptions: Option 2 requested #17 16:23:56 : INFO: .... primeOptions: Option 4 requested #18 16:23:56 : INFO: .... primeOptions: Option 12 requested #19 16:23:56 : INFO: .... primeOptions: Option 28 requested #20 16:23:56 : INFO: .... primeOptions: Option 31 requested #21 16:23:56 : INFO: .... primeOptions: Option 49 requested #22 16:23:56 : INFO: .... primeOptions: Option 48 requested #23 16:23:56 : INFO: .... primeOptions: Option 15 requested #24 16:23:56 : INFO: .... primeOptions: Option: 77, length:12 #25 16:23:56 : INFO: .... primeOptions: Option: 60, length:19 #26 16:23:56 : TRACE: .... identifiableClient: function entered #27 16:23:56 : TRACE: .... identifiableClient: Using htype, hlen and chaddr to id client #28 16:23:56 : TRACE: .... legibleRequest: function entered #29 16:23:56 : TRACE: .... legibleRequest: DHCP msg type DHCPREQUEST #30 16:23:56 : TRACE: .. process_bootrequest: Request is self-consistent #31 16:23:56 : TRACE: Packet from client 6-0x0000e5683796 was accepted by user exit verification processing. #32 16:23:56 : TRACE: ...... locateExchange: function entered #33 16:23:56 : TRACE: ...... locateExchange: Client id matches an active exchange #34 16:23:56 : TRACE: ...... pr_check_subnet_movement: Comparing requested ip 10.1.1.3 & subnetmask 255.255.255.0 against subnet 10.1.1.0 #35 16:23:56 : TRACE: ...... locateConfiguredClient: function entered #36 16:23:56 : TRACE: ...... locateConfiguredClient: look for client match in this subnet #37 16:23:56 : TRACE: ...... locateConfiguredClient: look for client match in global clients #38 16:23:56 : TRACE: .... 
processREQUEST: Offer was selected by client 6-0x0000e5683796 #39 16:23:56 : TRACE: ...... addressManager: Function entered #40 16:23:56 : TRACE: .... processREQUEST: Address 10.1.1.3 has been bound to 6-0x0000e5683796 #41 16:23:57 : TRACE: .. generate_bootreply: function entered #42 16:23:57 : INFO: .. generate_bootreply: Generating a DHCPACK reply DHCP Problem Determination 419 1. Line #4 shows the incoming packet and size. 2. Line #6 shows the transaction ID. 3. Lines #7 to #25 are the incoming options that the client wants to resolve. The first four options, #53, #50, #54, and #57 must be completed. The client wants the rest of the options but can attach to the network without them. 4. Line #33, locateExchange, compares the transaction ID and verifies that there is an active exchange between this server and the client. 5. Line #40, processREQUEST, binds the IP address to the client’s MAC address and sets the address to In-use. 6. The last line, generate_bootreply, generates the DHCPACK packet and sends it to the client or relay agent. 17.3 Starting, Formatting, and Decoding an AS/400 Communication Trace This section contains instruction on how to start, stop, and format trace data that is collected by the AS/400 communication trace facility. This section also shows the method to decode the data within the trace if you want to see what options are being passed to and from the DHCP clients and servers. 17.3.1 Start the AS/400 Communication Trace To start the AS/400 communications trace, perform the following steps: 1. From the AS/400 command line, enter the STRSST command. This starts the System Service Tools. 2. Enter option 1 to start a service tool. 3. Enter option 3 to work with a communications trace. 4. Press F6 to start a communication trace. Unfortunately, the DHCP log does not show which options the DHCP server supplied to the client. You must interrogate the AS/400 communications trace to find that information. See Section 17.3.3, “Reading and Decoding the AS/400 Communications Trace Data” on page 421 for more information. You can also use the QIBM_QTOB_DHCP_ABND user exit program to retrieve the options. Note 420 AS/400 TCP/IP DNS and DHCP Support 5. Fill in the name of your line, the buffer size, and a description. Leave the rest of the options as defaults. 6. Press Enter. 7. On the Trace Options display, select option 1 to gather all the data without filtering. 8. Press Enter. The trace is now active. 17.3.2 Stopping the AS/400 Communication Trace Once the problem has been recreated, stop the AS/400 communications trace as soon as possible. To end the communications trace, perform the following steps: 1. From the AS/400 command line, enter the command STRSST. This starts the System Service tools. 2. Enter option 1 to start a service tool. 3. Enter option 3 to work with a communications trace. 4. Enter a 2 to stop the trace. 5. Enter 6 to format and print the trace. 6. You are now prompted for the format information. Change the data format to ASCII and format the broadcast data and TCP/IP data only, as shown in the following display: Start Trace Type choices, press Enter. Configuration object . . . . . . . TRNLINE1 Type . . . . . . . . . . . . . . 1 1=Line, 2=Network interface 3=Network server Trace description . . . . . . . . Tracing DHCP Buffer size . . . . . . . . . . . 4 1=128K, 2=256K, 3=2M, 4=4M 5=6M, 6=8M, 7=16M, 8=32M 9=64M Stop on buffer full . . . . . . . N Y=Yes, N=No Data direction . . . . . . . . . 
3 1=Sent, 2=Received, 3=Both Number of bytes to trace: Beginning bytes . . . . . . . . *CALC Value, *CALC Ending bytes . . . . . . . . . *CALC Value, *CALC F3=Exit F5=Refresh F12=Cancel DHCP Problem Determination 421 7. Press Enter. 8. You are now asked if you want to filter out TCP/IP addresses. This is unnecessary in this example because you have a controlled environment. If you have a busy network and you know the clients IP address, then specify it here. 9. Press Enter. The trace is being formatted. This can take some time, depending on how large the sample is. 10.Once the trace is formatted, you are returned to the Work with Communications trace display, and the line trace is in a stopped state. It is okay to leave the trace stopped as long as it is not running then. You can always return to this display and format the trace again if you desire to do so. However, if you delete it, you must start over to gather the data. 11.Exit System Service Tools by following the prompts. 12.To find the formatted trace data, enter the WRKSPLF command. 17.3.3 Reading and Decoding the AS/400 Communications Trace Data This section discusses the AS/400 communications trace data that you captured and formatted in the previous steps. This section concentrates only on reading the first discovery packet that the client sent. The method used to decode the data and find the options being sent or received is the same. To located and decode the DHCP line trace data, follow these steps: 1. Locate the spooled file and view the data online. Online viewing allows you to search for keywords more quickly. 2. Perform a find on BOOTPS, and press F16 (Shift + PF4) to search. Figure 370 on page 422 shows the result. Format Trace Data Configuration object . . . . : TRNLINE1 Type . . . . . . . . . . . . : LINE Type choices, press Enter. Controller . . . . . . . . . . *ALL *ALL, name Data representation . . . . . 1 1=ASCII, 2=EBCDIC, 3=*CALC Format RR, RNR commands . . . N Y=Yes, N=No Format Broadcast data . . . . Y Y=Yes, N=No Format MAC or SMT data only . N Y=Yes, N=No Format UI data only . . . . . N Y=Yes, N=No Format SNA data only . . . . . N Y=Yes, N=No Format TCP/IP data only . . . Y Y=Yes, N=No Format IPX data only . . . . N Y=Yes, N=No F3=Exit F5=Refresh F12=Cancel 422 AS/400 TCP/IP DNS and DHCP Support Note: It is common for the DHCP client to send out multiple DHCPDISCOVER packets before it receives a DHCPOFFER. Figure 370. DHCP Boot Request in the Communication Trace 3. The information required for problem determination in the DHCP boot request is described in detail, as follows: • The first byte in the data, ’01’, states that this is a request. 02 is a reply. • The source and destination ports are shown. This packet is from the DHCP client. The server always listens on port 67, and the client always listens on port 68. • The magic cookie in hex ’63 82 53 63’ signifies the start of the DHCP options. This is defined in RFC 2132. • The byte ’FF’ in hex signals the end of the DHCP options. All options that are defined in RFC 2132 start with the options code in hex. The next byte states the length of the data. 4. The first three bytes after the magic cookie are as follows: hex ’35 01 01’, which is converted to decimal; hex ’35’, which is option number 53 (refer to Table 33 on page 409); and ’01’, which is the length of the following data that contains the value ’01’. The following list shows that ’01’ is a DHCPDISCOVER. RFC2132 states that this option must have a length of one byte and that the type is also one byte. 
The value of the type byte for option 53 is as follows:
01 DHCPDISCOVER
02 DHCPOFFER
03 DHCPREQUEST
04 DHCPDECLINE
05 DHCPACK
06 DHCPNAK
07 DHCPRELEASE
08 DHCPINFORM
5. The next four bytes in hex are ’39 02 02 40’. Hex ’39’ in decimal is 57. The RFC states that option 57 is the maximum DHCP message size that the client accepts. The second byte (hex ’02’) is the length of the option, and the last two bytes are expressed as an unsigned, 16-bit integer. Converting hex ’02 40’ to decimal produces 576 bytes, which is the maximum length that the client accepts. The RFC also states that 576 is the minimum value allowed for this option.
6. The next group of bytes is hex ’4D 0C 49 42 4D 4E 53 4D 20 31 2E 30 2E 30’. The second byte always states the length, so you must take the next 12 bytes. The first byte, hex ’4D’, is 77 in decimal. Option 77 is the user class that clients use to indicate to DHCP servers the class of which they are a member.
• The remaining 12 bytes converted to decimal are as follows: 73 66 77 78 83 77 32 49 46 48 46 48
• Convert them to ASCII (the value 32 is the space character), and you get the following value: I B M N S M 1 . 0 . 0
The trace data on the right has been converted to ASCII. This is also one of the user classes that is defined automatically when you build the DHCP configuration.
7. The next group of bytes in hex is ’3C 13 49 42 4D 20 4E 65 74 77 6F 72 6B 20 53 74 61 74 69 6F 6E’. Hex ’3C’ converted to decimal is option 60, the vendor class identifier. The next byte (hex ’13’) converted to decimal states that the length is 19 bytes. Once the rest of the string has been converted and applied to the ASCII code table, you can see that the vendor information being passed has the value IBM Network Station. This is also shown on the right in ASCII in the trace data.
8. The next byte in hex is ’FF’. This indicates the end of the DHCP options.
17.4 Symptoms, Problems, and Resolutions
17.4.0.1 Symptom: DHCP client cannot ping hosts on the network.
A Windows 95 DHCP client has loaded its IP stack without error but cannot ping other hosts on the network. A ping to the loopback address works, confirming that the IP stack is functioning.
Possible cause: No subnet mask was supplied to the client, or an unacceptable TCP/IP address was given.
Verify: On the Windows 95 client, run the executable WINIPCFG.EXE. This displays the IP address and the subnet mask that the client is using.
Solution: If the address or the mask is not valid, use Operations Navigator to check the DHCP server configuration. To further resolve the problem, view the DHCP logging information as described in Section 17.2, “Starting and Reading the DHCP Logging Utility” on page 407. It might then be necessary to view the options that are being passed to the DHCP client by using the AS/400 communication tracing facility. This is described in Section 17.3, “Starting, Formatting, and Decoding an AS/400 Communication Trace” on page 419.
17.4.0.2 Symptom: DHCP client cannot ping hosts on a different subnet.
The DHCP client can ping clients on the local subnet but fails to ping clients on a remote subnet.
Possible cause: No router information was supplied, or the wrong router IP address was configured.
Solution: Verify that option 3 (router) is supplied to the client and configured properly.
17.4.0.3 Symptom: DHCP server replies not reaching the DHCP relay.
The DHCP relay appears to be forwarding DHCP messages to the DHCP server.
Further, the DHCP server generates replies and sends them back to the BOOTP/DHCP Relay Agent. However, the BOOTP/DHCP Relay Agent never gets the message or sends it to the client. Diagnostics: Check the BOOTP/DHCP Relay Agent log file first to determine if the relay agent is performing the forwarding to and from the server correctly. The log can be found in the directory /As5.mycompany.com/QIBM/UserData/OS400/DHCP/dhcprd.log Figure 371 is an example from the BOOTP/DHCP Relay Agent log. It shows broadcasted DHCP messages from subnet 10.1.2.0 being sent to the DHCP server at address 10.1.0.2 on port 67. Figure 371. BOOTP/DHCP Relay Agent Forwarding to DHCP Server Log File Extract Figure 372 is a working example of the BOOTP/DHCP Relay Agent log showing the returned DHCP message from the DHCP server being forwarded to the client. 02/12 10:42:49 : TRACE: Size of incoming packet is: 548 02/12 10:42:49 : TRACE: .. process_incoming_msg: function entered 02/12 10:42:49 : TRACE: .... relay_to_server: function entered 02/12 10:42:49 : INFO: .... relay_to_server: assign giaddr as 167838211 02/12 10:42:49 : ACTION: .... relay_to_server: Relay packet from interface 10.1.2.3 to server to at 10.1.0.2 02/12 10:42:49 : TRACE: ...... transmitMailbox: transmitting to (10.1.0.2 #67) 02/12 10:42:49 : TRACE: ........ setSendWithoutARP: Entering setSendWithoutARP, value 0. DHCP Problem Determination 425 Figure 372. BOOTP/DHCP Relay Agent Forwarding to DHCP Client Log File Extract If the line with the statement relay_to_client does not appear in the log file, this indicates that the BOOTP/DHCP Relay Agent is not receiving a reply from the DHCP server. Verify that the DHCP server log is receiving messages from the BOOTP/DHCP Relay Agent. Also verify that the DHCP server is transmitting the DHCP messages back to the BOOTP/DHCP Relay Agent. Figure 373 is an extract from the DHCP server log. It shows the incoming DHCPDISCOVER message from the BOOTP/DHCP Relay Agent. Figure 373. DHCP Server Log Extract with Incoming Transmission from the BOOTP/DHCP Relay Agent The line that reads Relay agent on 10.1.2.3 is 1 from the client tells you that the client’s DHCP message has travelled through one hop, or one BOOTP/DHCP Relay Agent. Figure 374 is another extract from the DHCP server log that shows the response being transmitted to the BOOTP/DHCP Relay Agent. 02/12 10:42:54 : TRACE: Size of incoming packet is: 548 02/12 10:42:54 : TRACE: .. process_incoming_msg: function entered 02/12 10:42:54 : TRACE: .... relay_to_client: function entered 02/12 10:42:54 : TRACE: ........ setSendWithoutARP: Entering setSendWithoutARP, value 1. 02/12 10:42:54 : ACTION: .... relay_to_client: unicast reply to client 02/12 10:42:54 : TRACE: ...... transmitMailbox: transmitting to (10.1.2.4 #68) 02/12 10:42:54 : TRACE: ........ setSendWithoutARP: Entering setSendWithoutARP, value 0. 02/25 09:26:35 : TRACE: .. receiveMailbox: DHCP comm descriptor selected 02/25 09:26:35 : TRACE: .. receiveMailbox: recvfrom got 548 bytes. 02/25 09:26:35 : TRACE: .. receiveMailbox: SELECT_SEMAPHORE 02/25 09:26:35 : TRACE: Size of incoming packet is: 548 02/25 09:26:35 : TRACE: .. process_bootrequest: function entered 02/25 09:26:35 : TRACE: .. process_bootrequest: received packet xid = b03 02/25 09:26:35 : INFO: .... primeOptions: Option: 53, length:1 02/25 09:26:35 : INFO: .... primeOptions: Option: 57, length:2 02/25 09:26:35 : INFO: .... primeOptions: Option: 77, length:12 02/25 09:26:35 : INFO: .... primeOptions: Option: 60, length:19 02/25 09:26:35 : TRACE: .... 
identifiableClient: function entered 02/25 09:26:35 : TRACE: .... identifiableClient: Using htype, hlen and chaddr to id client 02/25 09:26:35 : TRACE: .... legibleRequest: function entered 02/25 09:26:35 : INFO: .... legibleRequest: Relay agent on 10.1.2.3 is 1 from the client 02/25 09:26:35 : TRACE: .... legibleRequest: DHCP msg type DHCPDISCOVER 426 AS/400 TCP/IP DNS and DHCP Support Figure 374. DHCP Server Log Extract with Outgoing Transmission to the BOOTP/DHCP Relay Agent Possible Cause: No route is configured on the DHCP server to return the DHCP messages to the relay agent. The DHCP server sends the DHCP messages back to the interface on which the DHCP relay is listening for DHCP broadcasts. The DHCP server log informs you that the message has been sent, but the BOOTP/DHCP Relay Agent does not have a log entry, as shown in Figure 372. Refer to Figure 375 on page 426, which shows the message flow. The BOOTP/DHCP Relay Agent intercepts the broadcasted DHCP messages and forwards them directly to the DHCP server through interface 10.1.0.4. The replies from the DHCP server are sent to interface 10.1.2.3 on the BOOTP/DHCP Relay Agent. This is because the BOOTP/DHCP Relay Agent places its IP address from the interface on the subnet that received the DHCP broadcast. It does this so that the DHCP server can tell which subnet the client is on and serve the correct IP address. Figure 375. BOOTP/DHCP Relay Agent Message Flow. 02/25 09:26:35 : INFO: .. generate_bootreply: Generating a DHCPOFFER reply 02/25 09:26:35 : TRACE: .. transmitMailbox: transmitting to (10.1.2.3 #67) 02/25 09:26:35 : TRACE: .... setSendWithoutARP: Entering setSendWithoutARP, value 0. DHCP Server A 10.1.0.0 255.255.254.0 10.1.2.0 255.255.255.0 BOOTP/DHCP relay agent Always relays to DHCP server A .4 .2 .3 DHCP Broadcast message DHCP broadcast message forwarded to server as unicast. DHCP reply from server sent to 10.1.2.3 DHCP Problem Determination 427 Verify: To verify that there is no route to subnet 10.1.2.0, attempt to ping interface 10.1.2.3 from the DHCP server. A negative response indicates that the DHCP server does not have visibility to the subnet. Solution: If the ping failed, you need to add routing information on the DHCP server so that it can access subnet 10.1.2.0. Alternatively, you can enable RIP on the AS/400 systems to advertise the route to the subnet. To configure a TCP/IP route, perform the following steps: 1. From the AS/400 command line, specify CFGTCP and press Enter. 2. Select option 2, Work with TCP/IP routes, and press Enter. 3. Enter a 1 to add routing information. See the following display for details: 17.4.0.4 Symptom: IBM Network Stations not starting through DHCP. The twinax-attached IBM Network Stations do not load the kernel and complete a boot up. Possible cause: There are many factors that can hinder the startup of a twinax-attached IBM Network Station. The DHCP server can be configured incorrectly, or it might not be started. The options passed to the client might be Once the DHCP client has accepted an offer from the DHCP server, lease renewals for the client’s IP address are sent directly to the DHCP server’s IP address. Lease renewals are not broadcast and, therefore, not forwarded by a BOOTP/DHCP Relay Agent. Valid routing information must exist within a subnetted network. Note Work with TCP/IP Routes System:As1.mycompany.com Type options, press Enter. 
1=Add 2=Change 4=Remove 5=Display Route Subnet Next Preferred Opt Destination Mask Hop Interface _ _______________ _______________ _______________ _ 10.1.2.0 255.255.255.0 10.1.0.4 10.1.0.2 Bottom F3=Exit F5=Refresh F6=Print list F11=Display type of service F12=Cancel F17=Top F18=Bottom 428 AS/400 TCP/IP DNS and DHCP Support incorrect, or if a BOOTP/DHCP Relay Agent is involved, this might also be configured incorrectly or be in need of starting. Verify: Check the DHCP server log (as described in Section 17.2.2, “Reading the DHCP Log” on page 408) and verify that the DHCP discover message is being heard. Also verify that an offer is being sent to the IBM Network Station. Check the IP address to which the offer is being sent. The offer should be sent to the IP address of the client. If the offer being made has the IP address of the workstation controller and if it looks as though the offer is, in fact, being sent to the workstation controller’s address, this is incorrect. Solution: Somehow the lease data in the DHCP server might have become corrupted. The DHCP server must never give out the workstation controller’s IP address to a client. In this situation, clearing out the existing twinax leases solves the problem. You can accomplish this easily through Operations Navigator. To reset the lease information for any subnet that is configured in the DHCP server, perform the following steps: 1. In the DHCP configuration window from the left-most window, right-click the twinax subnet to open a context menu and select Disable. 2. Click OK on the informational pop-up window that reads The Subnet will not be disabled until the DHCP server is updated. 3. On the tool bar, click the following Update server icon: 4. Right-click the twinax subnet again to open another context menu and select Enable. 5. Click OK on the informational pop-up window that reads The Subnet will not be disabled until the DHCP server is updated. 6. On the tool bar, click the Update server icon previously shown. The whole range of addresses in the twinax subnet is now free. 17.5 DHCP Server Performance Considerations The following factors negatively affect DHCP processing server run-time performance: • User exit programs. The magnitude of degradation increases for each exit program registered. • pingTime configuration parameter. The higher the value is, the worse the overall response time per request. • leaseExpireInterval. This configuration parameter can negatively impact performance if set extremely low (one minute or lower). DHCP Problem Determination 429 • Logging. The number of logItem value enabled and type. TRACE is the most verbose. • Startup or restart time is proportional to the number of items configured to be managed, the size of the items stored in the non-volatile storage, and how drastic the changes in the configuration are from what was stored in non-volatile storage. The following factors negatively affect BOOTP/DHCP Relay Agent run-time performance: • Transmission delay configuration parameter. • Logging. The number of logItem value enabled and type. TRACE is the most verbose. 430 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 431 Appendix A. Mail Concepts This appendix intends to summarize some concepts and functions of the AS/400 mail implementation that you need to understand to follow the examples in Chapter 6, “Split DNS: Hiding Your Internal DNS Behind a Firewall” on page 125. If you are already familiar with the mail implementation on the AS/400 system, please skip this appendix. 
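Before moving on to the configuration details, note that once the servers described in Section A.1 are configured and started, you can confirm the whole SMTP-to-POP3 path from any workstation with a short script. The following sketch is only an illustration and is not part of OS/400; it is written in Python, and the host name as1.mycompany.com, the recipient user1@as1.mycompany.com, and the POP3 password are the example values used in this appendix and must be replaced with values from your own installation.

# Illustrative end-to-end mail test; not part of OS/400. Assumes the *SMTP and
# *POP servers on as1.mycompany.com are started and USER1 is enrolled in the
# system distribution directory as shown in Section A.1. Replace the host name,
# user ID, and password (example values) with your own.
import poplib
import smtplib
import time

HOST = "as1.mycompany.com"                     # AS/400 mail server (example value)
SENDER = "tester@as1.mycompany.com"            # any syntactically valid origin address
RECIPIENT = "user1@as1.mycompany.com"          # POP3 user configured in Section A.1
POP_USER, POP_PASSWORD = "USER1", "secret"     # example credentials

# 1. Hand a test message to the AS/400 SMTP server (port 25).
message = ("From: %s\r\nTo: %s\r\nSubject: POP3 delivery test\r\n\r\n"
           "If a POP3 client can retrieve this note, basic mail delivery works.\r\n"
           % (SENDER, RECIPIENT))
smtp = smtplib.SMTP(HOST, 25)
smtp.sendmail(SENDER, [RECIPIENT], message)
smtp.quit()

time.sleep(10)   # give the Mail Server Framework a moment to deliver the note

# 2. Check the recipient's mailbox through the POP3 server (port 110).
pop = poplib.POP3(HOST, 110)
pop.user(POP_USER)
pop.pass_(POP_PASSWORD)
count, size = pop.stat()
print("%d message(s), %d bytes waiting for %s" % (count, size, POP_USER))
pop.quit()

If the message count printed at the end increases after the script runs, the SMTP server, the Mail Server Framework, and the POP3 server are all doing their part.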
A.1 Basic Mail Configuration The basic configuration that you need to perform to deliver mail from / to POP3 clients follows: 1. Configure the AS/400 SMTP server. To do that, use the following the steps: • Configure the host name and domain name using the Change TCP Domain (CHGTCPDMN) command or CFGTCP option 12: Figure 376. Configuring Host and Domain Names • Verify that there is an IP address associated with the host name for the system either in the DNS server configuration or local host table. Add an A record in the DNS server configuration for the SMTP mail server host: DNS as1.mycompany.com IN A 10.5.69.222 If you are not using a DNS server, use the Add TCP Host Table Entry (ADDTCPHTE) command or CFGTCP option 10 to add the host’s IP address to the host table. The host table entry should look similar to this: Internet Host Address Name 10.5.69.222 AS1.MYCOMPANY.COM 2. Add an entry in the system distribution directory for the user. The following displays show only the relevant parameters. Change TCP/IP Domain (CHGTCPDMN) Type choices, press Enter. Host name . . . . . . . . . . . 'as1' Domain name . . . . . . . . . . 'mycompany.COM' Host name search priority . . . *REMOTE *REMOTE, *LOCAL, *SAME Internet address . . . . . . . '10.5.69.222' 432 AS/400 TCP/IP DNS and DHCP Support Figure 377. Directory Entry for Pop User - General Information To get to the next display, page down four times. Figure 378. Mail Service Level = System Message Storage - Preferred Address = SMTP Name Press F19 to configure the SMTP name for the user. Change Directory Entry User ID/Address . . . . : USER1 AS1 Type changes, press Enter. Description . . . . . . Pop user System name/Group . . . AS1 F4 for list User profile . . . . . USER1 F4 for list Network user ID . . . . USER1 AS1 More... Change Directory Entry User ID/Address . . . . : USER1 AS1 Type changes, press Enter. Mail service level . . 2 1=User index 2=System message store 4=Lotus Domino 9=Other mail service For choice 9=Other mail service: Field name . . . . F4 for list Preferred address . . . 3 1=User ID/Address 2=O/R name 3=SMTP name 9=Other preferred address Address type . . . . F4 for list For choice 9=Other preferred address: Field name . . . . F4 for list More... Mail Concepts 433 Figure 379. User’s SMTP Name 3. Start the mail servers: 1. Start the SMTP server STRTCPSVR SERVER(*SMTP) 2. Start the POP3 server: STRTCPSVR SERVER(*POP) 3. Start the Mail Server Framework: STRMSF A.2 Mail Forwarding Assume user1@as1.mycompany.com moves to user1@research.mycompany.com. We want to have all the SMTP/MIME mail sent to user1 at the old address automatically forwarded to the new address. Likewise, if your company’s internal network is connected to the Internet through a firewall, all the incoming mail is passed by the firewall to the system configured as the secure mail server. If there is more than one mail server in your internal network, you need a forwarding function in the secure mail server that forwards the piece of mail to the mail server where the To: user resides. Figure 380 on page 434 illustrates this concept. 1. Mail from the Internet is sent to user@mycompany.com. In our example, two pieces of mail arrive at mycompany.com’s firewall’s mail relay: one destined to userx@mycompany.com; the other one to user5@mycompany.com. Note: In this scenario, the internal and external domain names are the same: mycompany.com. 2. The firewall changes the domain name in the piece of mail to user@"secure_mail_server.private_domain_name". 
In our example, this is user5@as1.mycompany.com and userX@as1.mycompany.com. The mail relay in the firewall forwards all the inbound mail to the configured secure mail server (AS1 in our example). Change Name for SMTP System: AS1 User ID/Address . . . . . : USER1 AS1 Type choices, press Enter. SMTP user ID . . . . . . user1 SMTP domain . . . . . . . as1.mycompany.com SMTP route . . . . . . . 434 AS/400 TCP/IP DNS and DHCP Support 3. The forwarding function in AS1 (the mail hub) decides that user5 resides in internal mail server AS3 and that userX resides in internal mail server AS2 and forwards the mail to the corresponding mail server. Figure 380. Forwarding Mail From the Secure Mail Server to the Destination Internal Mail Server A.2.1 Implementing Mail Forwarding To implement the mail forwarding function, you need to perform two main configuration tasks at the mail hub (the system that receives the piece of mail and decides if it is for this mail server or must be forwarded): 1. Add two “user-defined” fields to the system distribution directory. 2. Add an entry in the system distribution directory for every single user in the entire network protected by the firewall. This is how the AS/400 mail hub (secure mail server) knows what real SMTP address to use to forward the mail for the user. A.2.1.1 Adding User-Defined Fields to System Distribution Directory Create two user-defined fields in the system distribution directory using the Change System Directory Attributes (CHGSYSDIRA) command. 1. Enter the CHGSYSDIRA command and press F4. Internal Mail Server AS2 Internal Mail Server AS3 Secure Mail Server AS1 Firewall Mail Relay mycompany.com Internet 1 user5@mycompany.com user5@AS1.mycompany.com 2 3 user5@AS3.mycompany.com FORWARDING FORWARDING userX@AS2.mycompany.com userX@AS1.mycompany.com userX@mycompany.com To perform the mail forwarding function through user-defined fields, the following fixes are required: V3R2: 5763-SS1 PTF SF43715 and 5763-TC1 PTF SF43699 V3R7: 5716-SS1 PTF SF43803 and 5716-TC1 PTF SF43799 Note Mail Concepts 435 2. Page down until the User-defined field parameters are displayed. 3. Fill in the information as shown in Figure 381. Figure 381. Adding User-Defined Fields to the System Distribution Directory A.2.1.2 Adding Directory Entries to Perform the Forwarding Function For each user in your internal network, you must add an entry in the system distribution directory at the mail hub (secure mail server or old mail server if you are implementing the function to redirect mail). 1. From an AS/400 command entry display, enter the command: WRKDIRE Press Enter. 2. Select option 1, Add. 3. Enter the following information. Notice that IUSER5 and INTERNET are values that we chose arbitrarily; they do not match any other configuration value. 4. Page down until the display in Figure 382 is shown. Fill in the information as indicated in Figure 382. Change System Dir Attributes (CHGSYSDIRA) Type choices, press Enter. User-defined fields: Field name . . . . . . . . . . FORWARDING Character value, *SAME Product ID . . . . . . . . . . *NONE Character value, *NONE Function . . . . . . . . . . . > *ADD *ADD, *RMV, *CHG, *KEEP Field type . . . . . . . . . . *ADDRESS *DATA, *MSFSRVLVL, *ADDRESS Maximum field length . . . . . 256 1-512 Field name . . . . . . . . . . FWDSRVLVL Character value Product ID . . . . . . . . . . *NONE Character value, *NONE Function . . . . . . . . . . . > *ADD *ADD, *RMV, *CHG, *KEEP Field type . . . . . . . . . . 
*MSFSRVLVL *DATA, *MSFSRVLVL, *ADDRESS Maximum field length . . . . . 001 1-512 More... F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display Add Directory Entry Type choices, press Enter. User ID/Address . . . . IUSER5 AS1 Description . . . . . . Forward Mail to user5@as3.mycompany.com System name/Group . . . INTERNET F4 for list User profile . . . . . F4 for list Network user ID . . . . 436 AS/400 TCP/IP DNS and DHCP Support Figure 382. Adding Directory Entry to Forward SMTP/MIME Mail Note: Address type MIME is equivalent to ATMIME. If the ATMIME option does not show in the F4 list on your system, select MIME. 5. Press F19 to enter the SMTP user ID and SMTP domain in the incoming mail to the mail hub. This must match the user ID and domain in the piece of mail relayed by the firewall to the secure mail server (step 2 in Figure 380 on page 434.) Figure 383. Specify SMTP User ID and SMTP Domain as Received by the Mail Hub Press Enter. 6. Press F20 to specify the forwarding information as shown in Figure 384. Add Directory Entry Type choices, press Enter. Mail service level . . 9 1=User index 2=System message store 4=Lotus Domino 9=Other mail service For choice 9=Other mail service: Field name . . . . FWDSRVLVL F4 for list Preferred address . . . 9 1=User ID/Address 2=O/R name 3=SMTP name 9=Other preferred address Address type . . . . ATMIME F4 for list For choice 9=Other preferred address: Field name . . . . FORWARDING F4 for list Specify User-Defined Fields Type choices, press Enter. SMTPAUSRID SMTP user5 SMTPDMN SMTP as1.mycompany.com Mail Concepts 437 Figure 384. Specifying Mail Forwarding Information Press Enter to add the directory entry to the system distribution directory. Figure 385 shows the relationship between parameters in the directory entry at the mail hub (AS1) and the directory entry for the user at the real mail server (AS3). Figure 385. Relationship Between Directory Entries in Mail Hub and User’s Mail Server A.3 Processing Inbound Mail Now that we have discussed the configuration needed to process inbound mail on an AS/400 SMTP server, let’s put everything together. Figure 386 shows a high level overview of how the AS/400 SMTP server processes inbound SMTP/MIME mail. Notice that in all our examples, we are always assuming that the recipient is a POP user. Specify User-Defined Fields Type choices, press Enter. FORWARDING user5@as3.mycompany.com FWDSRVLVL System Directory Entry IUSER5 AS1 User ID/Address . . . . : IUSER1 AS1 Description . . . . . . : Internet user for USER5 System name/Group . . . : INTERNET User profile . . . . . : Network user ID . . . . : IUSER1 AS1 Mail service level . . : FWDSRVLVL Preferred address . . . : FORWARDING Address type . . . . : ATMIME (Press F19) SMTPAUSRID SMTP : user5 SMTPDMN SMTP : AS1.MYCOMPANY.COM (Press F20) FORWARDING : user5@as3.mycompany.com FWDSRVLVL : System Directory Entry USER5 AS3 User ID/Address . . . . : USER5 AS3 Description . . . . . . : Local User - USER5 System name/Group . . . : AS3 User profile . . . . . : USER5 Network user ID . . . . : USER5 AS3 Mail service level . . : System message store Preferred address . . . : SMTP name Address type . . . . : (Press F19) SMTPAUSRID SMTP : user5 SMTPDMN SMTP : AS3.MYCOMPANY.COM (Press F20) FORWARDING : FWDSRVLVL : 438 AS/400 TCP/IP DNS and DHCP Support Figure 386. 
Processing Inbound Mail in an AS/400 SMTP Server A.4 Processing Outbound Mail The way an AS/400 SMTP server processes outbound mail varies slightly depending on the firewall configuration in the SMTP attributes. Figure 387 shows the high level overview of how outbound mail is processed by an AS/400 SMTP server when no firewall is installed on the system. host.company.com = Host+Domain in CFGTCP op 12 user@host.company.com host.company.com = Alias in Host Table or CNAME in DNS for local IP interface User in SDD MSGSRVLVL + PRFADDR Go to A.4 "Processing Outbound Mail" YES NO YES NO YES A FWDSRVLVL MIME/ATMIME 9 9 Forward mail to desired address 2=System Message Store + 3= SMTP Put mail on AS/400 POP mailbox /QTCPTMM/MAIL/user/JWxx.not AS/400 POP Server gets mail, deletes mail,.. POP protocol POP Client Nondeliverable note to Mail Concepts 439 Figure 387. Processing Outbound Mail in an AS/400 SMTP Server - CHGSMTPA Firewall(*NO) If you have a firewall installed in your AS/400 system, you must specify Firewall(*YES) in the Change SMTP Attributes (CHGSMTPA) command. CHGSMTPA MAILROUTER(FIREWALL.MYCOMPANY.COM) FIREWALL(*YES) Outbound mail is processed as shown in Figure 388. host.mycompany.com = Domain in CFGTCP op 12 user@host.mycompany.com mycompany.comn = Alias in Host Table or CNAME in DNS for local IP interface MX query host.mycompany.com FOUND? Check PTY of MX entries A query host.mycompany.com FOUND? Send mail to mail exchanger FAIL Yes No No No No Yes Yes CHGSMTPA FIREWALL *NO Local Yes Yes Go To A.3 "Processing Inbound Mail" A A query for Mail Exchanger Host 440 AS/400 TCP/IP DNS and DHCP Support Figure 388. Processing Outbound Mail in an AS/400 SMTP Server - CHGSMTPA Firewall(*YES) host.mycompany.com = Domain in CFGTCP op 12 mycompany.com = Domain in CFGTCP op 12 Go to Outbound Mail, no Firewall CHGSMTPA FIREWALL *YES user@host.mycompany.com No No Send mail to Firewall Yes Yes © Copyright IBM Corp. 1998 441 Appendix B. Special Notices This publication is intended to help AS/400 system and network administrators to install, configure, tailor, and troubleshoot the DNS and DHCP support available in OS/400 V4R2. The information in this publication is not intended as the specification of any programming interfaces that are provided by IBM Operating System/400. See the PUBLICATIONS section of the IBM Programming Announcement for OS/400 V4R2 for more information about what publications are considered to be product documentation. References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service. Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels. IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, 500 Columbus Avenue, Thornwood, NY 10594 USA. 
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The information about non-IBM ("vendor") products in this manual has been supplied by the vendor and IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment. The following document contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples contain the names of individuals, companies, brands, and products. All of these 442 AS/400 TCP/IP DNS and DHCP Support names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. Reference to PTF numbers that have not been released through the normal distribution process does not imply general availability. The purpose of including these reference numbers is to alert IBM customers to specific information relative to the implementation of the PTF when it becomes available to each customer according to the normal IBM PTF distribution process. The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries: The following terms are trademarks of other companies: C-bus is a trademark of Corollary, Inc. Java and HotJava are trademarks of Sun Microsystems, Incorporated. Microsoft, Windows, Windows NT, and the Windows 95 logo are trademarks or registered trademarks of Microsoft Corporation. PC Direct is a trademark of Ziff Communications Company and is used by IBM Corporation under license. Pentium, MMX, ProShare, LANDesk, and ActionMedia are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries. UNIX is a registered trademark in the United States and other countries licensed exclusively through X/Open Company Limited. Other company, product, and service names may be trademarks or service marks of others. IBM  AS/400 OS/400 Client Access Client Access/400 IBM Firewall for AS/400 400 OS/2 © Copyright IBM Corp. 1998 443 Appendix C. Related Publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook. 
C.1 International Technical Support Organization Publications For information on ordering these ITSO publications see “How To Get ITSO Redbooks” on page 445. • AS/400 Internet Security: IBM Firewall for AS/400 , SG24-2162 (available at a later date) • TCP/IP Tutorial and Technical Overview, GG24-3376-04 • The Basics of IP Network Design, SG24-2580 C.2 Redbooks on CD-ROMs Redbooks are also available on CD-ROMs. Order a subscription and receive updates 2-4 times a year at significant savings. C.3 Other Publications These publications are also relevant as further information sources: • DNS and BIND by Albitz & Liu • Internetworking with TCP/IP by Douglas Comer • TCP/IP Addressing by Buck Graham • TCP/IP Configuration and Reference, SC41-5420-01 • IBM Network Station Manager Installation and Use, SC41-0664 (available at a later date) • System API Programming, SC41-5800 • IBM Firewall for AS/400, SC41-5424-00 C.4 Web Resources These Web sites are also relevant as further information sources: • www.redbooks.ibm.com and select Additional Redbook Materials CD-ROM Title Subscription Number Collection Kit Number System/390 Redbooks Collection SBOF-7201 SK2T-2177 Networking and Systems Management Redbooks Collection SBOF-7370 SK2T-6022 Transaction Processing and Data Management Redbook SBOF-7240 SK2T-8038 AS/400 Redbooks Collection SBOF-7270 SK2T-2849 RS/6000 Redbooks Collection (HTML, BkMgr) SBOF-7230 SK2T-8040 RS/6000 Redbooks Collection (PostScript) SBOF-7205 SK2T-8041 Application Development Redbooks Collection SBOF-7290 SK2T-8037 Personal Systems Redbooks Collection SBOF-7250 SK2T-8042 444 AS/400 TCP/IP DNS and DHCP Support • www.as400.ibm.com/firewall. • Use a search engine to find the following RFCs: Table 38. DNS RFC Information Table 39. DHCP RFC Information RFC number RFC Title RFC920 Domain Requirements RFC974 Mail Routing and Domain System RFC1032 Domain Administrator’s Guide RFC1033 Domain Administrator’s Operations Guide RFC1034 Domain Names: Concepts and Facilities RFC1035 Domain Names: Implementation and Specification RFC1101 DNS Encoding of Network Names and Other Types RFC1183 New DNS RR Definitions RFC1535 Security Problems in DNS Software RFC1537 Common DNS Data File Configuration File Errors RFC1713 Tools for DNS Debugging RFC1912 Common DNS Operational and Configuration Errors RFC1982 Serial Number Arithmetic RFC number RFC Title RFC2131 Dynamic Host Configuration Protocol RFC2132 DHCP Options and BOOTP Vendor Extensions RFC951 Bootstrap Protocol RFC1542 Clarifications and Extensions to the Bootstrap Protocol RFC1027 Using ARP to Implement Transparent Subnet Gateways RFC826 An Ethernet Address Resolution Protocol © Copyright IBM Corp. 1998 445 How To Get ITSO Redbooks This section explains how both customers and IBM employees can find out about ITSO redbooks, CD-ROMs, workshops, and residencies. A form for ordering books and CD-ROMs is also provided. This information was current at the time of publication, but is continually subject to change. The latest information may be found at http://www.redbooks.ibm.com. 
How IBM Employees Can Get ITSO Redbooks Employees may request ITSO deliverables (redbooks, BookManager BOOKs, and CD-ROMs) and information about redbooks, workshops, and residencies in the following ways: • PUBORDER – to order hardcopies in United States • GOPHER link to the Internet – type GOPHER WTSCPOK.ITSO.IBM.COM • Tools disks To get LIST3820s of redbooks, type one of the following commands: TOOLS SENDTO EHONE4 TOOLS2 REDPRINT GET SG24xxxx PACKAGE TOOLS SENDTO CANVM2 TOOLS REDPRINT GET SG24xxxx PACKAGE (Canadian users only) To get lists of redbooks: TOOLS SENDTO USDIST MKTTOOLS MKTTOOLS GET ITSOCAT TXT To register for information on workshops, residencies, and redbooks: TOOLS SENDTO WTSCPOK TOOLS ZDISK GET ITSOREGI 1996 For a list of product area specialists in the ITSO: TOOLS SENDTO WTSCPOK TOOLS ZDISK GET ORGCARD PACKAGE • Redbooks Web Site on the World Wide Web http://w3.itso.ibm.com/redbooks • IBM Direct Publications Catalog on the World Wide Web http://www.elink.ibmlink.ibm.com/pbl/pbl IBM employees may obtain LIST3820s of redbooks from this page. • REDBOOKS category on INEWS • Online – send orders to: USIB6FPL at IBMMAIL or DKIBMBSH at IBMMAIL • Internet Listserver With an Internet E-mail address, anyone can subscribe to an IBM Announcement Listserver. To initiate the service, send an E-mail note to announce@webster.ibmlink.ibm.com with the keyword subscribe in the body of the note (leave the subject line blank). A category form and detailed instructions will be sent to you. For information so current it is still in the process of being written, look at "Redpieces" on the Redbooks Web Site (http://www.redbooks.ibm.com/redpieces.html). Redpieces are redbooks in progress; not all redbooks become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the information out much quicker than the formal publishing process allows. Redpieces 446 AS/400 TCP/IP DNS and DHCP Support How Customers Can Get ITSO Redbooks Customers may request ITSO deliverables (redbooks, BookManager BOOKs, and CD-ROMs) and information about redbooks, workshops, and residencies in the following ways: • Online Orders (Do not send credit card information over the Internet) – send orders to: • Telephone orders • Mail Orders – send orders to: • Fax – send orders to: • 1-800-IBM-4FAX (United States) or (+1) 408 256 5422 (Outside USA) – ask for: Index # 4421 Abstracts of new redbooks Index # 4422 IBM redbooks Index # 4420 Redbooks for last six months • Direct Services – send note to softwareshop@vnet.ibm.com • On the World Wide Web • Internet Listserver With an Internet E-mail address, anyone can subscribe to an IBM Announcement Listserver. To initiate the service, send an E-mail note to announce@webster.ibmlink.ibm.com with the keyword subscribe in the body of the note (leave the subject line blank). In United States In Canada Outside North America IBMMAIL usib6fpl at ibmmail caibmbkz at ibmmail dkibmbsh at ibmmail Internet usib6fpl@ibmmail.com lmannix@vnet.ibm.com bookshop@dk.ibm.com United States (toll free) Canada (toll free) 1-800-879-2755 1-800-IBM-4YOU Outside North America (+45) 4810-1320 - Danish (+45) 4810-1420 - Dutch (+45) 4810-1540 - English (+45) 4810-1670 - Finnish (+45) 4810-1220 - French (long distance charges apply) (+45) 4810-1020 - German (+45) 4810-1620 - Italian (+45) 4810-1270 - Norwegian (+45) 4810-1120 - Spanish (+45) 4810-1170 - Swedish IBM Publications Publications Customer Support P.O. 
Box 29570 Raleigh, NC 27626-0570 USA IBM Publications 144-4th Avenue, S.W. Calgary, Alberta T2P 3N5 Canada IBM Direct Services Sortemosevej 21 DK-3450 Allerød Denmark United States (toll free) Canada Outside North America 1-800-445-9269 1-800-267-4455 (+45) 48 14 2207 (long distance charge) Redbooks Web Site IBM Direct Publications Catalog http://www.redbooks.ibm.com http://www.elink.ibmlink.ibm.com/pbl/pbl For information so current it is still in the process of being written, look at "Redpieces" on the Redbooks Web Site (http://www.redbooks.ibm.com/redpieces.html). Redpieces are redbooks in progress; not all redbooks become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the information out much quicker than the formal publishing process allows. Redpieces 447 IBM Redbook Order Form Please send me the following: We accept American Express, Diners, Eurocard, Master Card, and Visa. Payment by credit card not available in all countries. Signature mandatory for credit card payment. Title Order Number Quantity First name Last name Company Address City Postal code Telephone number Telefax number VAT number Invoice to customer number Country Credit card number Credit card expiration date Card issued to Signature 448 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 449 Index Symbols "green screen" 31 *ALLOBJ special authority 31 .DB extension file 186 .db file 19 Numerics 0.0.127.in-addr.arpa reverse mapping file 44 69.5.10.in-addr.arpa secondary domain file 58 A A record 41 absolute domain name 84 Add Directory Entry (ADDDIRE) command 45, 131 ADDDIRE command 45 adding additional subnet of 10.1.1.0 83 new host 92 subdomain 90 address file loopback 19 address pool 221 changing the DHCP server configuration 378 dividing across DHCP servers 263 enlarging 272 reducing 264 address record 41 Address Resolution Protocol (ARP) 353 address sorting 15 addressing scheme 317, 362 advantage of keeping centralized control 87 alias using 202 AS/400 communication trace reading and decoding 421 starting 419 stopping 420 AS/400 job log 188 ATTRIBUTES file 22, 33 authoritative 6 authoritative answer 114 authoritative name server 10, 11, 86, 184 authoritative server 109 authority maintaining 92 autostart 205 AUTOSTART attribute 23 B backing up firewall DNS server 182 parent server 115 primary domain file 184 basic IP over twinax 343 bibliography 443 BOOT file 18, 19, 44, 79 Boot file 13 BOOTP migrating to a new DHCP configuration 401, 403 migrating to an existing DHCP configuration 405 overview 217 BOOTP/DHCP Relay Agent 219, 224, 313 configuring 318, 336, 376, 388, 391 configuring for Win NT 338 starting 340, 376 browser proxy 143 C cache file 13, 44, 87 cached authoritative response 202 caching-only name server 10 CC field 167 list 167 Problem 168 CCSID (coded character set ID) 31 CFGTCP command 51 Change DNS Attributes (CHGDNSA) command 23 Change SMTP Attributes (CHGSMTPA) command 50 Change System Directory Attributes (CHGSYSDIRA) command 157, 434 Change TCP Domain (CHGTCPDMN) command 431 changing domain name 30 checking mail queue 205 CHGDNSA command 23 CHGSMTPA command 50, 118 child DNS name server 89 child server 99 configuring 106 CNAME record 14 coded character set ID (CCSID) 31 command Add Directory Entry (ADDDIRE) 131 ADDDIRE 45 CFGTCP 51 Change System Directory Attributes (CHGSYSDIRA) 157, 434 Change TCP Domain (CHGTCPDMN) 431 CHGDNSA 23 CHGSMTPA 50, 118 Configure TCP/IP (CFGTCP) 110, 129, 204 ENDTCPSVR SERVER(*DNS) 23 SAVLICPGM 24 Start Host 
Server (STRHOSTSVR) 23 STRTCP 23 STRTCPSVR 23 STRTCPSVR SERVER(*DNS 23 Work with Directory Entry (WRKDIRE) 131 Work with Spooled File (WRKSPLF) 18 WRKLNK 34 450 AS/400 TCP/IP DNS and DHCP Support common mistake 207 communication trace reading and decoding 421 starting 419 stopping 420 Complete the Firewall Installation page 133 concept zone of authority 85 configuration wizard 36, 76 Configure TCP/IP (CFGTCP) command 51, 110, 129, 204 configuring adding a subnet to a DHCP server configuration 302 backup DHCP server 332 BOOTP/DHCP Relay Agent 336, 388, 391 BOOTP/DHCP Relay Agent for Win NT 338 child server 106 DHCP clients 296 DHCP on IBM Network Station 251 DHCP on Win 95 clients 249 DHCP server 243, 326 DHCP server support 291 domain mail server 48 firewall 134 forwarder 140 forwarders 78 IBM Network Stations with DHCP 343, 359, 366, 369, 373 local BOOTP/DHCP Relay Agent 376 local DHCP configuration file 376 mail exchanger 164 mail server 44 POP3 client 28, 47 POP3 user 28, 45 primary name server 93 root name server 179 root server 107, 174 routes on DHCP servers 334 secondary DNS server 79 secondary name server 81, 95 TCP/IP interface 240, 280, 290 transparent subnetting 383 twinax 343, 359, 366, 373 twinax subnet address pool 380 creating A record 99 DNS primary name server 28 primary domain 179 primary name server 29 reverse mapping entry 41 secondary domain 183 secondary domain server 57 user-defined field 434 Creating new zone 81 D data DHCPACK 417 DHCPDISCOVER 413 DHCPOFFER 415 DHCPREQUEST 417 debug level 21, 23 debug method 188 debug problem 185 debugging mail 202 mail delivery problems 189 default administrator’s e-mail address 185 default cache time 62 default domain name 177 default Internet root name server list 180 default secondary server refresh interval 95 default TTL (time to live) value 186 defining zone of authority 91 delegate reverse mapping file 102 delegating authority 85, 88 subdomain 101 the workload 89 delegation 5 deleting reverse mapping entry 41 deleting primary name server configuration 80 deliver mail 116, 431 DHCP acquiring configuration information 220 BOOTP/DHCP Relay Agent 219, 224, 313 clients connected to multiple LANs 277, 313, 316 concepts 217 configuring a BOOTP/DHCP Relay Agent 336 configuring a BOOTP/DHCP Relay Agent for Win NT 338 configuring clients 296 configuring IBM Network Stations 343, 359, 366, 373 configuring local configuration file 376 configuring on IBM Network Station 251 configuring on Win 95 clients 249 configuring the BOOTP Relay Agent 376, 388, 391 configuring twinax aubnet address pool 380 full client support 271 host clients 218 implementing changes 224 log 297, 413, 415, 417 logging utility 407, 408 migrating BOOTP to a new DHCP configuration 401, 403 migrating BOOTP to an existing DHCP configuration 405 multiple servers 261 network components 218 overview 217, 218 problem determination 407 Program Temporary Fixes (PTFs) 407 renewing leases 223 server 219 simple network scenario 237 starting a BOOTP/DHCP Relay Agent 340 starting server support 295 starting servers 270, 274 451 symptoms, problems, and resolutions 423 two-server scenario 261 DHCP server 313 adding a subnet to an existing configuration 302 adding IP addresses to backup 266, 273 changing configuration of an address pool 378 changing the lease time 269 changing the number of options 266 configuring 318, 326 configuring a backup server 332 configuring information 241 configuring routing information 334 configuring support 291 configuring through Operations Navigator 
243 dividing an address pool 263 enlarging the address pool 272 minimizing failures 261 multiple servers 261 multiple subnets 277 reducing the primary address pool 264 remote 373 starting 270, 274, 340 starting support 295 DHCPACK data 417 DHCPDISCOVER data 413 DHCPOFFER data 415 DHCPREQUEST data 417 diagnostic tools 185 disadvantage of keeping centralize control 88 distributed database 3 DNS administrator 177 DNS configuration 90, 117 file 18 graphical interface 40 verifying 161 Windows 95 client 142 DNS Configuration Wizard 23 DNS configuration wizard 44, 106, 177 DNS directory 33 DNS filter 137 DNS job log 53 DNS name space 4 DNS server backup 24 cache 185 configuration 22 configuration wizard 36 firewall 181 implementing primary 25 implementing secondary 25 job 18 recovery 24 starting 52 statistics information 187 user interface 22 DNS support installing 17 DNS0417 message 32 domain 5 domain file primary 12 secondary 12 domain mapping file 18 domain name system 3, 5 concepts 3 dump file 194 DUMPDB file 20, 201 dumping server statistics 194 E -e option 32 ENDTCPSVR SERVER(*DNS) command 23 error message DNS00E9 209 example statistics dump 194 expire interval 61 expire timer 187 external name server 128 F file cache 13 local 13 firewall 11 configuration 134 DNS 125, 173 DNS server 181 installation 133 mail relay 137 name server 125, 128 network server description 138, 205 parameter 50, 118 problem determination 206 forward mapping 41 forward mapping file 12, 18 forward mapping secondary domain file 57 forward resolution file 161 forwarder 11 configuring 140 forwarders configuration 73 verifying 162 forwarding function 153, 433 full domain name 3 full-DHCP client support 271 G grow the network 83 H hardware problem 210 hierarchical partitioning 238, 271 history log 210 host client multihomed 278 host clients DHCP 218 host domain name 452 AS/400 TCP/IP DNS and DHCP Support updating 110 host name 29 host name search priority 66 I IBM Network Station configuring 369 configuring DHCP 251 configuring with DHCP 343, 359, 366, 373, 376 powering on 376 starting 351, 369, 387, 391 startup sequence 371 stopping 387, 391 testing connectivity 373 using transparent subnetting 383 IBM Network Stations NVRAM 370 IFS directory 180 IFS directory file 120 implementing DNS server 25 mail forwarding 156 mail forwarding function 434 Import Domain Data 40, 71 importing domain data 76 inbound SMTP/MIME mail processing 437 Incoming Mail Server 203 increase debug level 197 individual resource record 187 installing DNS support 17 firewall 133 Integrated PC Server 144 LAN connections 144 interface configuring 240, 280, 290 internal DNS 125 server configuration 148 internal domain name server 125 Internal name server name server internal 128 internal root 11, 96 Internet domain name space 176 root name server 176 root server 4 service provider (ISP) DNS server 173 InterNIC registration 177 IP address 4, 29, 184 adding to backup DHCP server 273 adding to backup DHCP servers 266 IP interface verifying 203 ISP DNS IP address 136 ISP DNS server 128 iterative query 8 K keeping centralized control advantage 87 disadvantage 88 L LAN adapter 129 lease changing lease time on DHCP servers 269 renewing 223 Load Defaults box 180 local file 13 local host alias 169 table 51 localhost 38 localhost host 178, 186 log DHCP 297, 407, 408 DHCPACK data 417 DHCPDISCOVER data 413 DHCPOFFER data 415 DHCPREQUEST data 417 Loopback address file 19 M MAC address 217, 399 mail 117 configuration 117 debugging 202 delivery 431 hub 435 
implementation 431 router parameter 50 routing 4 server framework job 52 service level 46 mail exchanger configuring 164 mail forwarding implementing 156 mail forwarding function 156 implementing 434 mail queue checking 205 mail relay firewall 137 mail server 117 Mail Service Level parameter 203 maintaining authority 92 manually configure forwarders 78 mapping file domain 18 forward 12, 18 reverse 13, 19 master name server 9 master server 86 message DNS0417 32 453 migrate host name table entry 24 migrating AS1 host table 28 DNS formatted file 28 migrating BOOTP to DHCP 399 to a new DHCP configuration 401, 403 to an existing DHCP configuration 405 migrating from DNS server 71 multihomed host 278 MX query 50, 55, 166 MX record 14, 32, 49 MX record query 55 mycompany.com.db forward mapping file 79 N name resolution 7 name server 5, 7 authoritative 10 caching-only 10 external 128 firewall 128 forwarder 11 lookup (nslookup) program 23 master 9 parent and child 10 primary 9 root 10 secondary 9 statistics 194 Netscape browser mail preference 143 network addressing 238, 271 network configuration 27 network server description firewall 138 new host adding 92 non-authoritative answer 114 non-zero global number 196 NS record 14, 100, 115 NS resource record 58 NSLOOKUP 53 nslookup 111, 203 interactive tool 188 program 23, 190 query 192 NVRAM 370 O Operations Navigator configuring DHCP server 243 DNS configuration 28, 177, 186 DNS configuration import domain function 36 options changing on DHCP servers 266 outbound mail processing 438 P parameter firewall 50 mail router 50 Mail Service Level 203 Preferred address 203 search first 202 parent and child name server 10 parent server 96 partitioning 238, 271 PID file 22 ping 188, 203 planning secondary name server 177 zone of authority 176 planning phase 29 POP mailbox 206 POP3 directory entry 45, 117, 202 postmaster 185 POP3 server 143 POP3 system directory entry 185 POP3 user configuring 45 postmaster POP3 directory entry 185 preferred address 46 Preferred address parameter 203 preventing problems 185 primary DNS server 71 primary domain creating 179 primary domain file 12, 86 primary name server 8, 9, 86 configuring 93 primary name server configuration deleting 80 probable error causes 207 problem determination communication trace 419 DHCP 407 DHCP symptoms, problems, and resolutions 423 Program Temporary Fixes (PTFs) for DHCP 407 problem symptom 207 processing inbound SMTP/MIME mail 437 outbound mail 438 Program Temporary Fixes (PTFs) 407 Proxy ARP 343, 345, 353, 355 proxy server 137 PTR record 14 Q QMSF job 205 QSYSWRK subsystem 18, 118, 189, 204 QTCP user profile 190 QTOBDNS job 189, 203 QTOBDNS job log 53, 60, 61, 188, 190, 210 QTOBDNS server job 18 QTOBH2N migation program 31 454 AS/400 TCP/IP DNS and DHCP Support QTOBH2N program 32, 36 QTOBXFER job 59, 190 QTOBXFER job log 210 QTOBXFER secondary server zone transfer job 18 QTOBXMI transfer job 18 QTOBXMIT job 190 query DNS server 66 iterative 8 recursive 8 reverse look up 42 type 114 QUERYLOG file 19, 166, 188, 199 QUERYLOG file example 200 R reconfigure client 66 record CNAME 14 file 14 MX 14 NS 14 SOA 14 recursive query 8 refresh timer 187 regular backup plan 24 Relay Agent 219, 224, 313 configuring 318, 336, 376, 388, 391 configuring for Win NT 338 starting 340, 376 remote DHCP server 373 twinax subnet address pool 380 remote.com domain 30 resolver 7 retry interval 61 retry timer 187 reverse look up query 42 reverse mapping file 13, 19 primary domain file 95 reverse mapping entry creating 41 
deleting 41 Review Configuration page 135 RFC 1537 186 RFC 1912 186 root name server 10 configuring 179 root server 37 configuring 107, 174 internal 96 ROOT.FILE list 180 round robin 15 round robin function 187 route configuring on a DHCP server 334 RUNDBG file 21 RUNDEBUG file 22, 198 S SAVLICPGM command 24 scenarios DHCP clients connected to multiple LANs 277, 313, 316 network with two DHCP servers 261 simple DHCP network 237 scope planning 238, 271 search first parameter 202 secondary DNS server 162 configuring 79 secondary domain 58 back-up file 12 creating 183 file 86 secondary domain server creating 57 secondary name server 9, 86 configuring 81, 95 planning 177 secondary server expire interval 61 retry interval 61 secure mail server 156, 433 secure zone record 65 security consideration 63 zone transfer 63 server configuration 17, 71 DHCP 219 firewall DNS 125 implementation 17 server statistics 20 dumping 194 service file 19 SET TYPE=MX command 191 SET TYPE=PTR command 192 SMTP domain name 45, 117, 171 SMTP mail server 143 SMTP Outgoing Mail Server 203 SMTP server 119, 171 SMTP system alias table 202 SMTP system alias table entry 185 SOA cache time 62 SOA record 14, 61, 115, 185, 186, 187 SOA resource record 61 SOCKS configuration 143 software prerequisite 17 split DNS 12 Start Host Server (STRHOSTSVR) command 23 starting DNS Server 52 secondary name server 59 the IBM Network Station 351 statistics dump example 194 STATISTICS log file 20 STATS information 188 STRHOSTSVR command 23 STRMSF command 205 455 STRTCP command 23 STRTCPSVR command 23 STRTCPSVR SERVER(*DNS) command 23 STRTCPSVR SERVER(*DNS) RESTART(*DNS) command 186 subdomain 3, 5 adding 90 delegating 101 subnet adding a subnet to a DHCP server configuration 302 multiple subnets 313 multiple subnets and DHCP servers 277 subnetting transparent 383 system concepts domain name 3 system distribution directory entry 156, 169 T TCP/IP configuration 128, 145, 170 host table entries 146 interface 145 TCP/IP configuration value 155 terminology 128 TMP directory 33 traces 185 transparent subnetting 343, 345, 352, 383 twinax 356 troubleshooting DNS problems 185, 188 twinax configuration 343, 359, 366, 373 basic IP 343 local DHCP configuration file 376 remote DHCP server 380 transparent subnetting 356, 383 U UDP packet 138 unique domain name 4 unrelated domain 173 update server smart icon 185 updating host domain name 110 user@public_domain 168 user-defined field creating 434 using alias 202 V verify TCP/IP domain information 118 verifying DNS configuration 161 forwarders configuration 162 IP interface 203 mail-related configuration option 130 SMTP configuration 50 TCP/IP configuration 50 TCP/IP interface 129 W wildcard MX entry 117 wildcard MX record 48, 55 Windows 95 clients configuring DHCP 249 Windows NT configuring a BOOTP/DHCP Relay Agent 338 wizard configuration 36 window 37 Work with Directory Entry (WRKDIRE) command 131 Work with Spooled File (WRKSPLF) command 18 WRKACTJOB SBS(QSYSWRK) command 205 WRKCFGSTS *NWS command 205 WRKLNK command 34 WRKSPLF command 18 WRKSPLF QMSF command 205 X XFRNETS directive 64 Z zone of authority 5, 96, 116 concept 85 defining 91 planning 176 zone transfer 9, 59, 81, 86 frequency 60 security 63 456 AS/400 TCP/IP DNS and DHCP Support © Copyright IBM Corp. 1998 457 ITSO Redbook Evaluation AS/400 TCP/IP Autoconfiguration: DNS and DHCP Support SG24-5147-00 Your feedback is very important to help us maintain the quality of ITSO redbooks. 
Please complete this questionnaire and return it using one of the following methods:
• Use the online evaluation form found at http://www.redbooks.com
• Fax this form to: USA International Access Code + 1 914 432 8264
• Send your comments in an Internet note to redbook@vnet.ibm.com

Please rate your overall satisfaction with this book using the scale:
(1 = very good, 2 = good, 3 = average, 4 = poor, 5 = very poor)

Overall Satisfaction __________

Please answer the following questions:
Was this redbook published in time for your needs? Yes___ No___
If no, please explain:

What other redbooks would you like to see published?

Comments/Suggestions:
(THANK YOU FOR YOUR FEEDBACK!)

Printed in U.S.A.

ibm.com/redbooks Backup Recovery and Media Services for OS/400 A Practical Approach Susan Powers Scott Buttel Amit Dave Rolf Hahn Derek McBryde Edelgard Schittko Tony Storry Gunnar Svensson Mervyn Venter Concepts and tasks to implement BRMS for OS/400 on AS/400e servers Tips and techniques to make your BRMS implementation run smoother Best practices for media and tape management International Technical Support Organization SG24-4840-01 Backup Recovery and Media Services for OS/400: A Practical Approach February 2001 © Copyright International Business Machines Corporation 1997, 2001. All rights reserved Note to U.S Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp. Second Edition (February 2001) This edition applies to Version 3, Release 2 of Backup Recovery Media Services for OS/400, 5769-BR1, for use with V4R5 of OS/400. Comments may be addressed to: IBM Corporation, International Technical Support Organization Dept. JLU Building 107-2 3605 Highway 52N Rochester, Minnesota 55901-7829 When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. Before using this information and the product it supports, be sure to read the general information in Appendix I, “Special notices” on page 317. Take Note! iii Contents Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii How this redbook is organized . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xvii The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Chapter 1. Backup Recovery and Media Services/400 introduction . . . . . .1 1.1 Overview of BRMS/400 functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 1.2 Policies and control groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 1.3 Functional enhancements with BRMS/400 releases . . . . . . . . . . . . . . . . . .3 1.4 Scope of this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Chapter 2. Installation planning for BRMS/400 . . . . . . . . . . . . . . . . . . . . . . .7 2.1 Before you begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 2.1.1 AS/400 systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 2.1.2 Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 2.1.3 Media naming convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 2.1.4 Storage locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 2.1.5 Tape drives and media types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 2.2 Installing BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11 2.2.1 Updating BRMS/400 license information . . . . . . . . . . . . 
. . . . . . . . . .13 2.2.2 Initializing the BRMS/400 environment . . . . . . . . . . . . . . . . . . . . . . .14 2.3 BRMS/400 menus and commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15 Chapter 3. Implementing BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17 3.1 Getting started with BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17 3.2 The building blocks of BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18 3.3 Storage locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18 3.4 Media devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21 3.5 Media library device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23 3.6 Media classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .24 3.7 Container classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25 3.8 Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26 3.9 Move policy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26 3.10 Media policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28 3.11 BRMS/400 policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31 3.11.1 System and backup policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31 3.11.2 Libraries to omit from backups . . . . . . . . . . . . . . . . . . . . . . . . . . . .35 3.12 Backup control groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36 3.12.1 Default backup control groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38 3.12.2 Job queue processing from control group . . . . . . . . . . . . . . . . . . . .40 3.12.3 Subsystem processing from control groups . . . . . . . . . . . . . . . . . . .41 3.13 Enrolling and initializing media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43 3.13.1 Appending to media rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44 3.13.2 Media security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45 3.13.3 Extracting media information from non-BRMS saves . . . . . . . . . . . .45 3.14 Backing up using BRMS/400 control groups . . . . . . . . . . . . . . . . . . . . . .49 3.15 Reviewing BRMS/400 log and media status . . . . . . . . . . . . . . . . . . . . . .50 iv Backup Recovery and Media Services for OS/400 3.16 BRMS/400 reports and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 3.17 Current status of media and save activity . . . . . . . . . . . . . . . . . . . . . . . 53 3.18 Restoring data using BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Chapter 4. Managing BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.1 BRMS/400 operational tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.1.1 Checking for media availability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.1.2 Performing BRMS/400 backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.1.3 Saving save files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.1.4 Performing daily checks. . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . 58 4.1.5 Moving media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 4.1.6 Media management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 4.1.7 Daily housekeeping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 4.2 Setting up your own control groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 4.2.1 Considerations for libraries that affect BRMS/400 . . . . . . . . . . . . . . 65 4.2.2 Control group to save QGPL, QUSRSYS, and QUSRBRM. . . . . . . . 65 4.2.3 User exits and control groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 4.2.4 Omitting libraries from a control group . . . . . . . . . . . . . . . . . . . . . . . 68 4.2.5 Control group to save QMLD and QUSRMLD . . . . . . . . . . . . . . . . . 68 4.2.6 Backup control group attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 4.3 Save-while-active and BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 4.3.1 Save-while-active implementation in BRMS/400 . . . . . . . . . . . . . . . 73 4.3.2 Save-while-active parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 4.3.3 Using the MONSWABRM command . . . . . . . . . . . . . . . . . . . . . . . . 75 4.3.4 Synchronizing blocks of libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 4.3.5 Examples of using save while active with BRMS/400 . . . . . . . . . . . . 78 4.4 Saving spooled files using BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 4.5 BRMS/400 console monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 4.5.1 Console monitor function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 4.5.2 Securing the console monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 4.5.3 Monitoring the console monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 4.5.4 Canceling the console monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 4.6 Job scheduling and BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 4.6.1 Using the OS/400 job scheduler. . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 4.6.2 Submitting jobs to the OS/400 job scheduler . . . . . . . . . . . . . . . . . . 92 4.6.3 Working with scheduled jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 4.6.4 Using BRMS/400 commands in job scheduler for OS/400 . . . . . . . . 93 4.6.5 Weekly activity and job scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . 95 Chapter 5. BRMS/400 networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 5.1 Overview of BRMS/400 network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 5.2 How shared media inventory synchronization works . . . . . . . . . . . . . . . . 98 5.3 Network communications for BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . 101 5.3.1 Network security considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . 101 5.4 Adding systems to a network group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 5.4.1 Receiving media information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 5.5 Removing a system from the network group . . . . . . . . . . . . . . . . . . . . . 111 5.6 Changing the system name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 5.6.1 Changing the system name on V3R1 . . . . . . . 
. . . . . . . . . . . . . . . . 113 5.6.2 Changing the system name on V3R2, V3R6, or V3R7 . . . . . . . . . . 115 5.6.3 Other scenarios that involve a system name change . . . . . . . . . . . 116 5.7 Joining two BRMS/400 networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 5.8 Copying control groups between networked AS/400 systems . . . . . . . . 119 v 5.9 Verifying the BRMS/400 network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120 Chapter 6. Saving and restoring the integrated file system . . . . . . . . . . .123 6.1 Overview of IFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123 6.2 Planning for saving IFS directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124 6.2.1 Storage spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124 6.2.2 LAN Server/400 structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125 6.2.3 Memory requirements for save and restore . . . . . . . . . . . . . . . . . . .127 6.2.4 Authority to save IFS directories . . . . . . . . . . . . . . . . . . . . . . . . . . .127 6.2.5 Restricted state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132 6.2.6 Integrated PC Server on or off?. . . . . . . . . . . . . . . . . . . . . . . . . . . .134 6.3 Save and restore strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135 6.3.1 Performance impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135 6.3.2 Saving regularly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136 6.4 Saving IFS using BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137 6.4.1 Setting up BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .138 6.4.2 Managing IFS saves with BRMS/400. . . . . . . . . . . . . . . . . . . . . . . .140 6.5 Restoring IFS directories with BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . .142 6.5.1 Restoring objects to /QLANSrv with BRMS/400. . . . . . . . . . . . . . . .142 6.5.2 Restoring a storage space with BRMS/400 . . . . . . . . . . . . . . . . . . .145 6.6 Saving and restoring V3R1 IFS data with BRMS/400 . . . . . . . . . . . . . . .146 6.6.1 Disaster recovery for LAN Server/400 environment with BRMS/400 147 6.7 Save and restore hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .147 6.7.1 Save and restore options for LAN Server/400 . . . . . . . . . . . . . . . . .148 Chapter 7. AS/400 hardware support for automated tape libraries . . . . .151 7.1 3494 Automated Tape Library Data Server . . . . . . . . . . . . . . . . . . . . . . .151 7.1.1 3494 Automated Tape Library Data Server system attachment . . . .151 7.1.2 Connection considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151 7.1.3 3494 Automated Tape Library Data Server: Multiple systems . . . . .152 7.1.4 Alternate IPL support for the 3494. . . . . . . . . . . . . . . . . . . . . . . . . .153 7.2 9427 tape library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153 7.2.1 Alternate IPL support for the 9427. . . . . . . . . . . . . . . . . . . . . . . . . .154 7.3 3590 with automated cartridge facility . . . . . . . . . . . . . . . . . . . . . . . . . . .154 7.3.1 Alternate IPL for the 3590 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .154 7.4 3570 Magstar MP tape library . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . .155 7.4.1 Managing cassettes and magazines for the 3570 . . . . . . . . . . . . . .155 7.4.2 Alternate IPL support for the 3570. . . . . . . . . . . . . . . . . . . . . . . . . .156 Chapter 8. AS/400 software support for automated tape libraries . . . . . .157 8.1 Software support for automated tape libraries . . . . . . . . . . . . . . . . . . . . .157 8.2 AS/400 with IMPI technology (CISC). . . . . . . . . . . . . . . . . . . . . . . . . . . .159 8.3 AS/400 with 64-Bit PowerPC technology (RISC) . . . . . . . . . . . . . . . . . . .159 8.4 Library Manager for the 3494 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .160 8.4.1 Mounting a single volume from the 3494 . . . . . . . . . . . . . . . . . . . . .160 8.4.2 Demounting a single volume from the 3494. . . . . . . . . . . . . . . . . . .162 8.4.3 Mounting a cartridge from the convenience I/O station . . . . . . . . . .162 8.4.4 Resetting the stand-alone mode . . . . . . . . . . . . . . . . . . . . . . . . . . .164 Chapter 9. Implementing automated tape libraries . . . . . . . . . . . . . . . . . .165 9.1 Configuring the 3494 Automated Tape Library Data Server for CISC . . .165 9.2 Configuring other media library devices for CISC . . . . . . . . . . . . . . . . . .166 9.3 Configuring media library devices for RISC . . . . . . . . . . . . . . . . . . . . . . .167 9.3.1 Determining resource names. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167 vi Backup Recovery and Media Services for OS/400 9.3.2 Creating media library device descriptions. . . . . . . . . . . . . . . . . . . 168 9.3.3 Creating a Robot Device Description (ROBOTDEV) for the 3494 . . 170 9.3.4 Changing media library device descriptions . . . . . . . . . . . . . . . . . . 172 9.3.5 Allocating resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 9.3.6 Managing multiple devices in a single 3494 . . . . . . . . . . . . . . . . . . 177 9.3.7 Selecting and varying on devices. . . . . . . . . . . . . . . . . . . . . . . . . . 179 9.4 Updating BRMS/400 device information. . . . . . . . . . . . . . . . . . . . . . . . . 181 9.4.1 Device location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 9.5 Managing cartridges in the media library device . . . . . . . . . . . . . . . . . . 183 9.5.1 Special cartridge identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 9.5.2 VOL(*MOUNTED) usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 9.5.3 End option (ENDOPT) setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 9.5.4 Importing cartridges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 9.5.5 Exporting cartridges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 9.6 Restricted state automation for the 3494 . . . . . . . . . . . . . . . . . . . . . . . . 189 9.7 Using a tape resource as a stand-alone unit (RISC) . . . . . . . . . . . . . . . 190 Chapter 10. Recovery using BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . 191 10.1 Overview of BRMS/400 recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 10.1.1 Synchronizing maintenance, movement, and recovery reports. . . 193 10.1.2 Recovery from a central point . . . . . . . . . . . . . . . . . . . . . . . . . . . 194 10.2 Recovering an entire system (starting with lIcensed Internal Code) . . . 195 10.2.1 Preparation for the recovery process . . . . . . . . . . . . . . . . 
. . . . . . 195 10.2.2 Setting up the tape device for SAVSYS recovery . . . . . . . . . . . . . 197 10.2.3 Recovering the Licensed Internal Code and operating system . . . 197 10.2.4 Recovering BRMS/400 and system information . . . . . . . . . . . . . . 199 10.2.5 Completing the recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 10.3 Recovering specific objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 10.3.1 Recovering individual user profiles. . . . . . . . . . . . . . . . . . . . . . . . 207 10.4 Restoring the integrated file system. . . . . . . . . . . . . . . . . . . . . . . . . . . 208 Chapter 11. Planning for upgrades to PowerPC AS. . . . . . . . . . . . . . . . . 209 11.1 Preparing BRMS/400 on your source system. . . . . . . . . . . . . . . . . . . . 209 11.2 BRMS considerations for saving user information . . . . . . . . . . . . . . . . 211 11.3 Preparing BRMS/400 on your target system . . . . . . . . . . . . . . . . . . . . 212 11.4 Re-synchronizing BRMS/400 after an upgrade . . . . . . . . . . . . . . . . . . 215 11.5 Deleting the libraries for the media library device driver. . . . . . . . . . . . 216 Chapter 12. Planning for the hierarchical storage management archiving solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 12.1 Archiving considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 12.1.1 How archiving is done by BRMS/400 . . . . . . . . . . . . . . . . . . . . . . 217 12.1.2 The BRMS/400 double save for archiving . . . . . . . . . . . . . . . . . . 221 12.2 Normal-aged file member archiving . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 12.2.1 Database file members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 12.2.2 Source file members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 12.3 Application swapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 12.4 Logical files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 12.5 Duplicating your archive tapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 12.5.1 Archive tape duplication process . . . . . . . . . . . . . . . . . . . . . . . . . 227 12.6 Re-archiving retrieved objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 12.7 Retrieval considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230 12.7.1 How BRMS/400 does Dynamic Retrieval . . . . . . . . . . . . . . . . . . . 230 12.8 Retrieval methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 vii 12.9 Operations that invoke retrieval. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233 12.10 Operations that do not invoke retrieval . . . . . . . . . . . . . . . . . . . . . . . .234 12.11 Applying journal changes to archived data files . . . . . . . . . . . . . . . . . .235 12.12 Member level changes to files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .236 12.13 Retrieval performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .237 12.13.1 Saving access paths when archiving . . . . . . . . . . . . . . . . . . . . . .237 12.13.2 File size. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .237 12.13.3 Multiple physical files behind a logical file . . . . . . . . . . . . . . . . . 
.237 12.13.4 Which retrieve mode to use for interactive applications . . . . . . . .238 12.13.5 Using the *VERIFY retrieve mode for batch jobs . . . . . . . . . . . . .239 12.14 Managing your disk space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240 12.14.1 Predicting which objects are retrieved . . . . . . . . . . . . . . . . . . . . .241 12.14.2 Predicting the size of objects to retrieve . . . . . . . . . . . . . . . . . . .241 12.14.3 Predicting the time to retrieve objects . . . . . . . . . . . . . . . . . . . . .242 12.14.4 Can an ASP overflow occur? . . . . . . . . . . . . . . . . . . . . . . . . . . . .242 12.15 Renaming and moving objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .243 12.15.1 Renaming file members. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .243 12.15.2 Renaming files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .243 12.15.3 Renaming libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .244 12.15.4 Moving a file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .245 12.15.5 Creating a duplicate file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .245 12.16 Application design considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . .245 12.16.1 Member-level archiving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .245 12.16.2 Work-around for less suitable applications . . . . . . . . . . . . . . . . .248 12.17 Pseudo record-level archiving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .254 12.17.1 Moving records to an archive file member . . . . . . . . . . . . . . . . . .254 12.17.2 Retrieving records and integrating into the main file . . . . . . . . . .257 12.17.3 Application changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .258 12.17.4 Running queries over archived records . . . . . . . . . . . . . . . . . . . .258 12.17.5 Time stamping every record . . . . . . . . . . . . . . . . . . . . . . . . . . . .259 Chapter 13. Practical implementation of hierarchical storage management archiving capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .261 13.1 What to archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .261 13.1.1 Types of objects to archive for Dynamic Retrieval . . . . . . . . . . . . .261 13.2 Suggested implementations of Dynamic Retrieval. . . . . . . . . . . . . . . . .264 13.3 Using BRMS/400 for hierarchical storage management. . . . . . . . . . . . .267 13.3.1 Review of the BRMS/400 structure . . . . . . . . . . . . . . . . . . . . . . . .267 13.4 Setting up BRMS/400 for archive with Dynamic Retrieval . . . . . . . . . . .268 13.4.1 Archive lists. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .268 13.5 Media classes for archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .270 13.5.1 Move policies for archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .270 13.5.2 Archive media policies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .271 13.5.3 Archive policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .273 13.6 Archive control groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .275 13.6.1 Scheduling the archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
.276 13.7 Using BRMS/400 for Dynamic Retrieval . . . . . . . . . . . . . . . . . . . . . . . .277 13.7.1 Setting retrieve policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .277 13.7.2 Responding to a retrieve operation . . . . . . . . . . . . . . . . . . . . . . . .280 13.7.3 Failed retrieve operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283 13.7.4 Using the BRMS/400 log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283 13.8 Controlling retrieve operations using the RSMRTVBRM command . . . .283 13.8.1 Using the RSMRTVBRM command . . . . . . . . . . . . . . . . . . . . . . . .284 viii Backup Recovery and Media Services for OS/400 13.9 Administration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 13.9.1 Retrieve authority. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 13.9.2 Restore options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287 13.9.3 Securing the retrieve policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287 Appendix A. Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .289 A.1 Summary of changes for V3R6 to V3R7 . . . . . . . . . . . . . . . . . . . . . . . . . . . .289 A.1.1 Backup/recovery enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .289 A.1.2 Media management enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . .290 A.1.3 Command enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .290 A.1.4 Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .291 A.1.5 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .291 A.2 Summary of changes between V3R1 and V3R6 . . . . . . . . . . . . . . . . . . . . . .292 A.2.1 Backup enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .292 A.2.2 Media management enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . .293 A.2.3 Command enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .294 A.2.4 Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .296 A.2.5 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .296 A.3 Summary of changes from V3R1 to V3R2 . . . . . . . . . . . . . . . . . . . . . . . . . . .296 A.3.1 Backup enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .296 A.3.2 Media management enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . .297 A.3.3 Command enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .298 A.3.4 Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .300 A.3.5 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .300 Appendix B. Save and restore tips for better performance . . . . . . . . . . . .301 B.1 Data compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .301 B.2 Load balancing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .302 B.3 Using the USEOPTBLK parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .302 B.4 Additional hints and tips . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . .302 Appendix C. Example LAN configuration for 3494 . . . . . . . . . . . . . . . . . . .303 C.1 Line description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .303 C.2 Controller description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .303 C.3 Device description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .304 Appendix D. Performing restricted saves to a 3494 on CISC. . . . . . . . . . .305 Appendix E. Media missing from the 3494 . . . . . . . . . . . . . . . . . . . . . . . . . .309 Appendix F. The QUSRBRM library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .311 Appendix G. QUSRBRM/QA1AMM file specifications: V3R1 . . . . . . . . . . .313 Appendix H. QUSRBRM/QA1AMM file specifications: V3R2/V3R6/V3R7 .315 Appendix I. Special notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .317 Appendix J. Related publications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319 J.1 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319 J.2 IBM Redbooks collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319 J.3 Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319 J.4 Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .320 ix How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321 IBM Redbooks fax order form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .323 IBM Redbooks review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335 x Backup Recovery and Media Services for OS/400 Figures xi Figures 1. Overview of BRMS/400 operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 2. Changing BRMS/400 license information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3. BRMS/400 main menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 4. BRMS/400 functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 5. BRMS/400 commands by functional areas . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 6. Add Storage Location example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 7. Change Storage Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 8. Changing device using BRM for V3R7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 9. Add Media Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 10. Add Media Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 11. Container class for ¼-inch cartridges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 12. Adding a container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 13. Change Container showing a move policy. . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 14. User-created move policy . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . 27 15. Media management summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 16. Change Media Policy example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 17. Expiring media using the STREXPBRM command . . . . . . . . . . . . . . . . . . . . . 31 18. Changing defaults for the BRMS/400 system policy . . . . . . . . . . . . . . . . . . . . 32 19. Change Presentation Controls display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 20. Change Backup Policy display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 21. Adding and removing libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 22. Backup control group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 23. Work with Backup Control Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 24. Backup control group *SYSGRP for backing up IBM data. . . . . . . . . . . . . . . . 38 25. Default backup control group *BKUGRP for saving all user data . . . . . . . . . . 39 26. Job Queues to Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 27. Ending subsystems in the EDELM09 control group. . . . . . . . . . . . . . . . . . . . . 42 28. Restarting ended subsystems in the SAVIFS control group . . . . . . . . . . . . . . 42 29. Change Backup Control Group Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 30. Adding media using the ADDMEDBRM command . . . . . . . . . . . . . . . . . . . . . 44 31. Add Media Information to BRM display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 32. Backing up the SETUPTEST control group . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 33. BRMS/400 log information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 34. Start Maintenance for BRM example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 35. Verify Media Moves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 36. Work with Media Information example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 37. Work with Media example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 38. Display Backup Plan example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 39. Checking for expired media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 40. The message indicating that the request was successful . . . . . . . . . . . . . . . . 64 41. Sample backup control group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 42. User Exit Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 43. Omitting libraries from backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 44. Edit Backup Control Group Entries display: Creating an *EXIT . . . . . . . . . . . . 73 45. User Exit Maintenance display: Completed MONSWABRM command . . . . . . 76 46. Synchronizing multiple libraries with save while active . . . . . . . . . . . . . . . . . . 77 47. Save-while-active example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 48. Save-while-active example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . 79 49. Save-while-active example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 50. Save-while-active example 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 xii Backup Recovery and Media Services for OS/400 51. Save-while-active example 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82 52. Including and excluding spooled file entries in backup list . . . . . . . . . . . . . . . .84 53. Backup list SAVESPLF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85 54. Work with Saved Spooled Files (WRKSPLFBRM) . . . . . . . . . . . . . . . . . . . . . .85 55. Select Recovery Items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86 56. Start console monitor option on the Backup menu . . . . . . . . . . . . . . . . . . . . . .88 57. Console Monitor active . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .88 58. Submitting a system save to batch using the console monitor . . . . . . . . . . . . .89 59. Command line access from the console monitor . . . . . . . . . . . . . . . . . . . . . . .89 60. Initial program to secure the console monitor . . . . . . . . . . . . . . . . . . . . . . . . . .90 61. Console Monitor Exit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91 62. Work with Backup Control Groups display . . . . . . . . . . . . . . . . . . . . . . . . . . . .91 63. Add Job Schedule Entry display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92 64. Work with BRM Job Schedule Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92 65. Changing job scheduler in BRMS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93 66. Change Job Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94 67. Work with Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95 68. Adding a BRMS application to job scheduler for OS/400 . . . . . . . . . . . . . . . . .95 69. Sample backup control group entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95 70. BRMS/400 synchronization process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99 71. Work with Configuration Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102 72. WRKACTJOB display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102 73. Additional Message Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103 74. Additional Message Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103 75. Overview of establishing a BRMS/400 network . . . . . . . . . . . . . . . . . . . . . . .105 76. Adding a new system to the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107 77. SYSTEM05 added to the network group. . . . . . . . . . . . . . . . . . . . . . . . . . . . .107 78. Running INZBRM *NETSYS on SYSTEM05 . . . . . . . . . . . . . . . . . . . . . . . . .108 79. Network group entry on SYSTEM05 for SYSTEM09 . . . . . . . . . . . . . . . . . . .110 80. BRMS/400 networking subsystem: Q1ABRMNET . . . . . . . . . . . . . . . . . . . . .110 81. Change Network Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112 82. Removing systems from a network group. . . . . . 
. . . . . . . . . . . . . . . . . . . . . .112 83. Removing the old system name from the network . . . . . . . . . . . . . . . . . . . . .115 84. Confirm Remove of Network Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116 85. Incorrect way of joining two BRMS/400 networks . . . . . . . . . . . . . . . . . . . . . .118 86. Correct way to join the BRMS/400 network . . . . . . . . . . . . . . . . . . . . . . . . . .118 87. Media update to check the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121 88. No update for SYS04 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121 89. LAN Server/400 Integrated PC Server objects . . . . . . . . . . . . . . . . . . . . . . . .126 90. Change Link List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .128 91. Examples of authority issues with IFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129 92. Displaying the monitor job example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133 93. Ending the monitor job example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .134 94. Work with Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .138 95. Change Link List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .139 96. Example of an *LNK list in a control group . . . . . . . . . . . . . . . . . . . . . . . . . . .139 97. Example of the *LINK list in the V3R7 control group. . . . . . . . . . . . . . . . . . . .140 98. Display BRM Log Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141 99. Work with Media Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141 100.Work with Link Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141 101.Work with Directory Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142 102.Work with Link Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143 103.Work with Directory Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143 Figures xiii 104.Work with Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 105.Select Recovery Items. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 106.Additional Message Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 107.Work with Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 108.Select Recovery Items. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 109.Work with Network Server Storage Spaces. . . . . . . . . . . . . . . . . . . . . . . . . . 146 110.Overview of the automated tape library components . . . . . . . . . . . . . . . . . . 158 111.V3R1 or V3R2: OS/400 splits the command into MOUNT and SAVLIB . . . . 159 112.V3R6 LIC processes the MOUNT command instead of MLDD . . . . . . . . . . . 160 113.Commands pull-down window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 114.Setup Stand-alone Device window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 115.Mount complete window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
162 116.Stand-alone Device Status window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 117.Setup transient mode on the Setup Stand-alone Device window . . . . . . . . . 163 118.Mount from Input Station window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 119.Mount complete window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 120.Reset Stand-alone Device window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 121.Work with Active Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 122.Create Device Media Library: V3R1 and V3R2 . . . . . . . . . . . . . . . . . . . . . . . 166 123.Display Storage Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 124.Display Associated Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 125.Creating a device media library: V3R6 and V3R7 . . . . . . . . . . . . . . . . . . . . . 169 126.Configure Device Media Library - RS232 . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 127.Display LAN Media Library Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 128.Locating and selectomg resources associated with the IOP . . . . . . . . . . . . . 174 129.Work Media Library Status: V3R6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 130.Work with Media Library Status: V3R7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 131.Work with Media Library Status: Resource allocation . . . . . . . . . . . . . . . . . . 177 132.WRKMLBSTS prior to applying PTFs in a shared environment for the 3494 178 133.WRKMLBSTS after applying PTFs in a shared environment for the 3494 . . 179 134.Work with Media Library Status: TAPMLB01 and TAPMLB02 varied on . . . 180 135.Work with Media Library Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 136.Add MLB Media using BRM display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188 137.Start Recovery using BRM (STRRCYBRM) . . . . . . . . . . . . . . . . . . . . . . . . . 192 138.Receive media information on SYSTEM05 . . . . . . . . . . . . . . . . . . . . . . . . . . 194 139.Selecting Recovery Items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202 140.Selecting Recovery Items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 141.Work with Media Information (WRKMEDIBRM) . . . . . . . . . . . . . . . . . . . . . . 206 142.Work with Media Information display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 143.Select Recovery Items display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 144.Recovering individual objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 145.AS/400 objects before and after save with storage freed . . . . . . . . . . . . . . . 218 146.Vertical data splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 147.Horizontal data splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 148.Horizontal data splitting by primary key. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 149.Horizontal data splitting by all keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252 150.Adding objects to an archive list . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . 269 151.Add Media Class display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 152.Create Move Policy display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 153.Edit Archive Control Group Entries display . . . . . . . . . . . . . . . . . . . . . . . . . . 275 154.Add Job Schedule Entry display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276 155.Selecting a device for your retrieve policy . . . . . . . . . . . . . . . . . . . . . . . . . . . 278 156.Set Retrieve Controls for BRM display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280 xiv Backup Recovery and Media Services for OS/400 157.Retrieve *VERIFY messages (Part 1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . .281 158.Retrieve *VERIFY messages (Part 2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . .281 159.Retrieve *VERIFY messages (Part 3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . .281 160.Retrieve *NOTIFY message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282 161.Confirm Retrieve display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .284 162.Program for restricted save processing with the 3494 . . . . . . . . . . . . . . . . . .306 163.CL program to create a tape category and add volumes (Part 1 of 2) . . . . . .307 164.CL program to create a tape category and add volumes (Part 2 of 2) . . . . . .308 165.Example program to identify volume mismatches . . . . . . . . . . . . . . . . . . . . .309 166.Example query to identify volume mismatches . . . . . . . . . . . . . . . . . . . . . . .310 Tables xv Tables 1. Media scratch pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2. List of Q libraries saved by *ALLUSR or *ALLPROD in BRMS/400. . . . . . . . . 40 3. Summary of save and restore options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 4. AS/400 recovery steps (using BRMS/400 and the 3494). . . . . . . . . . . . . . . . 196 5. Dynamic Retrieval of records into main file of database records . . . . . . . . . . 257 6. BRMS/400 Dynamic Retrieval guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 xvi Backup Recovery and Media Services for OS/400 © Copyright IBM Corp. 1997, 2001 xvii Preface This IBM Redbook preserves the valuable information from the first edition of A Practical Approach to Managing Backup Recovery and Media Services for OS/400, SG24-4840, which is based on CISC implementations. The updates in this edition were made to reflect the documentation and URL values that were available at the time of publication. This publication is unique in its detailed coverage of using BRMS/400 with tape libraries within a single AS/400 CISC system, or within multiple AS/400 CISC configurations across multiple levels of OS/400 ranging from OS/400 V3R1 to and through OS/400 V3R7. Coverage for BRMS for OS/400 for RISC and iSeries systems will be found in a redpaper that is planned for publication later in 2001. Note: At the time this redbook was written, V4R2 and earlier releases of OS/400 and BRMS were no longer supported by IBM. This redbook focuses on the installation and management of BRMS/400 using tape libraries such as IBM 9427, IBM 3494, IBM 3570, and IBM 3590. It provides implementation guidelines for using BRMS/400 to automate your save, restore, archive, and retrieve operations. 
It also contains practical examples of managing your media inventory across multiple AS/400 CISC systems. This redbook also identifies functional differences between BRMS/400 and OS/400 CISC releases, where appropriate. This redbook is written for customers who are familiar with the basic functions of BRMS/400 and are in the process of implementing media management and tape management solutions. This publication is also intended for IBM Business Partners, marketing specialists, availability specialists, and support personnel. Prior to reading this redbook, you must be familiar with the native OS/400 save and restore command interfaces and their options. How this redbook is organized The redbook is organized as follows: • Chapter 1, “Backup Recovery and Media Services/400 introduction” on page 1 This chapter provides an overview of BRMS/400 components and sets your expectations on the scope of this book. • Chapter 2, “Installation planning for BRMS/400” on page 7 This chapter takes you through the planning considerations when implementing BRMS/400. It takes you through the importance of naming conventions and introduces the concepts of media and media management, followed by instructions on how to install BRMS/400 on your AS/400 system. • Chapter 3, “Implementing BRMS/400” on page 17 This chapter provides information on the initial configuration and setup of BRMS/400 to become productive immediately. It provides an overview of the defaults that BRMS/400 uses for media class, media policy, backup control groups, enrolling and initializing media, and restoring saved data. xviii Backup Recovery and Media Services for OS/400 • Chapter 4, “Managing BRMS/400” on page 57 This chapter provides information on how you can tailor BRMS/400 to use additional functions and features such as saving spooled files, using the save-while-active function, and using the job scheduler through BRMS/400. It also takes you through the tasks that need to be completed to manage BRMS/400. • Chapter 5, “BRMS/400 networking” on page 97 This chapter provides an overview of managing your media inventory across multiple AS/400 systems and provides instructions on how to configure a BRMS/400 network, remove systems from a network, and merge systems within a network. It also explains how you can change the system name and media information for a system within the BRMS/400 network. • Chapter 6, “Saving and restoring the integrated file system” on page 123 This chapter starts by providing an introduction of the integrated file system, using LAN Server/400 as an example. It covers authority issues related to saving LAN Server/400 data and the considerations for saving and restoring the integrated file system data from the Integrated PC Server (FSIOP). • Chapter 7, “AS/400 hardware support for automated tape libraries” on page 151 This chapter provides an overview of the hardware configuration for certain automated tape libraries that are supported on the AS/400 CISC systems. • Chapter 8, “AS/400 software support for automated tape libraries” on page 157 This chapter discusses the software support requirements for supporting tape automation on the AS/400 system, particularly aimed at the IBM 3494 Automated Tape Library Data Server. • Chapter 9, “Implementing automated tape libraries” on page 165 This chapter discusses some of the actions required to set up automated tape libraries in BRMS/400. It also covers the functional differences between CISC and RISC releases of OS/400, in the area of automated tape library management. 
• Chapter 10, “Recovery using BRMS/400” on page 191 This chapter deals with the most important function of BRMS/400 – recovery. The objective of this chapter is to describe the recovery of a complete system and identify the key differences the CISC and RISC BRMS/400 releases so that you can plan accordingly. • Chapter 11, “Planning for upgrades to PowerPC AS” on page 209 This chapter lists the BRMS/400 planning considerations when upgrading your IMPI processor to PowerPC AS processor (CISC to RISC). It lists the steps you need to perform on the source (CISC) system and the target (RISC) system during the upgrade process. • Chapter 12, “Planning for the hierarchical storage management archiving solution” on page 217 This chapter provides a description of how archiving is implemented with BRMS/400 and how your data can be retrieved dynamically. It also discusses Preface xix various application design considerations to be aware of to aid the planning and design of your archive solution. • Chapter 13, “Practical implementation of hierarchical storage management archiving capabilities” on page 261 This chapter lists the type of objects that you may consider for archiving. Then, it explains how to set up BRMS/400 to produce an operational dynamic retrieval solution. • Appendix A, “Summary of changes” on page 289 This appendix provides a summary of the functional enhancements that have been made to BRMS/400 beginning with V3R1 to and through V3R7. It can help you understand the enhancements that are available for each of the releases available for CISC systems. • Appendix B, “Save and restore tips for better performance” on page 301 This appendix provides some of the hints and tips on improving your save and restore performance. • Appendix C, “Example LAN configuration for 3494” on page 303. This appendix provides sample line, controller, and device configuration for attaching the 3494 through a token-ring. • Appendix D, “Performing restricted saves to a 3494 on CISC” on page 305 This appendix provides a sample CL program that shows how you can use the 3494 for restricted state processing on CISC operating systems. • Appendix E, “Media missing from the 3494” on page 309 This appendix provides a sample query that can be used to identify volume mismatches between the BRMS/400 media inventory and the 3494 tape library inventory. • Appendix F, “The QUSRBRM library” on page 311 This appendix provides information on the BRMS/400 files in the QUSRBRM library. • Appendix G, “QUSRBRM/QA1AMM file specifications: V3R1” on page 313 This appendix provides file field specifications for the QA1AMM media management file for V3R1. • Appendix H, “QUSRBRM/QA1AMM file specifications: V3R2/V3R6/V3R7” on page 315 This appendix provides file field specifications for the QA1AMM media management file for V3R2, V3R6, and V3R7. xx Backup Recovery and Media Services for OS/400 The team that wrote this redbook The second edition of this redbook preserves the content for those customers maintaining CISC systems. 
The team who updated this redbook for the second edition includes: Susan Powers Senior I/T Specialist for the ITSO, Rochester Center Scott Buttel AS/400 Technical Specialist, in IBM Global Services Australia Gunnar Svensson IT Specialist in Sweden Mervyn Venter Technical Support Representative at IBM Rochester The first edition of this redbook was produced by a team of specialists from around the world working at the International Technical Support Organization Rochester Center: Amit Dave iSeries Segment Manager - Enterprise Technologies, Rochester, MN, and team leader for the first edition of this redbook Rolf Hahn from IBM Global Services, Australia Derek McBryde from IBM Svenska AB Edelgard Schittko from IBM Rochester Support Center Tony Storry IBM UK Thanks to the following development and support personnel for their invaluable contributions to this project: David Bhaskaran Swinder Dhillon Tim Fynskov Paul (Hoovey) Halverson Steve Hank Dennis Huffman Ann Johnson Neil Jones Greg Krietemeyer Scott Maxson Genyphyr Novak Debbie Saugen Bill Soranno IBM Rochester Laboratory Joy Cheek Bruce Reynolds Brian Younger Merch Bacher and Associates, Oklahoma, USA The ITSO also thanks the participants of the BRMS/400 Forum for sharing their experiences with the BRMS/400 product and for providing valuable hints and tips to the BRMS/400 community. Preface xxi Comments welcome Your comments are important to us! We want our Redbooks to be as helpful as possible. Please send us your comments about this or other Redbooks in one of the following ways: • Fax the evaluation form found in “IBM Redbooks review” on page 335 to the fax number shown on the form. • Use the online evaluation form found at ibm.com/redbooks • Send your comments in an Internet note to redbook@us.ibm.com xxii Backup Recovery and Media Services for OS/400 © Copyright IBM Corp. 1997, 2001 1 Chapter 1. Backup Recovery and Media Services/400 introduction You can plan, control and automate the backup, recovery, and media management services for your AS/400 systems with Backup Recovery and Media Services for OS/400 (BRMS/400). BRMS/400 contains default values so you can begin using it immediately. It allows you to define policies for backup, recovery, archive, retrieve, and media and to tailor a backup recovery and media strategy that precisely meets your business requirements. BRMS/400 can be implemented on a single AS/400 system or on multiple AS/400 systems that are in a shared network. Proper planning is the key to success, and skills are available to help you plan the hardware, media, and administrative resources needed for successful implementation and operation. This includes recovery planning, particularly disaster recovery planning, where you identify and document your critical resources and your plans to recover them. Contact your local IBM representative for more information on how IBM can help you with your planning. 1.1 Overview of BRMS/400 functions Figure 1 shows how the elements of BRMS/400 interact to provide your backup and recovery solution. Figure 1. 
Overview of BRMS/400 operations
(The original diagram is not reproduced in this text version. It shows policies, control groups, and the job scheduler driving backup and archive operations through the BRMS/400 media inventory, with interim save files, backup copies, and archived copies as the outputs. The media inventory side of the diagram lists media management and tracking, an online media inventory, version control, media storage management, and media move management; the policy side lists media management requirements, user-tailored backup, archive, retrieval, and recovery operations, library and object-level operations, expiration/retention, devices used, volume movement, and the interface to the OS/400 job scheduler.)
Five basic services are provided with a provision for customizing each to your specific process needs:
• Backup: A service for defining, processing, monitoring, and reporting backup operations for libraries, objects, members, folders, and spooled files. Backup control groups provide a simple way of grouping together libraries, objects, folders and documents, and directories that share common characteristics, such as:
– Type of save (full or incremental)
– Job queues to process
– Subsystems to process
– Media movement and media retention
• Archive: A service for analyzing direct access storage usage, based on user-defined criteria, and offloading aged objects, folders, or spooled files to tape. The retrieve function provides for dynamic online location and restoration of data, when required. Typical types of objects you may want to archive are:
– History files
– Period-end data
– Non-current data kept for legal reasons
– Query definitions
– Folders, documents, and office mail
– Performance data
– Spooled printer output
• Recovery: A service for implementing your recovery plan. You can restore individual items or groups of saved items by date, by control group, or by auxiliary storage pool (ASP). Through single or phased recovery operations, you can restore your entire system. As well as a detailed report showing all steps required for recovery, BRMS/400 provides you with a concise report of all tape volumes needed for the recovery, including their current location.
• Retrieve: A service for the automatic retrieval of archived files. This is a dynamic retrieval that is totally transparent to the user trying to access the file.
• Media: A service for managing media usage on your AS/400 system. With media management, you can:
– Enroll and initialize new media.
– Manage media sets.
– Display media contents.
– Move media.
– Expire media.
– Duplicate media.
Media management interfaces with backup, recovery, archive, and retrieve services to record and update media usage in the media inventory. For AS/400 systems in a network, you can coordinate enrollment and manage a common pool of tape volumes (scratch pool) across all systems.
BRMS/400 also provides a comprehensive set of reports to assist you in your backup and recovery management tasks.
1.2 Policies and control groups
The backup, archive, retrieve, and recovery functions are managed and controlled by policies and control groups. Policies establish the actions and assumptions used during processing. BRMS/400 is delivered with predefined policies that you can review and change as necessary to meet your system processing requirements. Control groups define logical groups of libraries and objects that possess similar backup, retention, and recovery requirements.
In addition to allowing you to define the order in which backup, archive, and recovery processing occurs, control groups also provide for special related actions such as tape loads, processing subsystems, and job queues. Control groups provide exits for user-defined processing during the backup cycle. During installation, BRMS/400 can retrieve information from your AS/400 internal configuration tables and configure defaults for your environment. For example, it automatically creates BRMS/400 device information for the tape drives that you configured on your system. You must review the default options that are selected by BRMS/400 for further changes. Chapter 2, “Installation planning for BRMS/400” on page 7, and Chapter 3, “Implementing BRMS/400” on page 17, discuss the planning and implementation aspects of BRMS/400 in more detail. 1.3 Functional enhancements with BRMS/400 releases Each release of BRMS/400 has introduced functional enhancements. If you are upgrading from a previous release of BRMS/400, you need to be aware of the changes. If you use BRMS/400 commands in user control language programs, you should be particularly aware of new or changed commands, new or changed parameters, and any changes in defaults. See Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for information on this. Information is also available in Appendix A, “Summary of changes” on page 289. We strongly recommend that you also review Automated Tape Library Planning and Management, SC41-5309, for details on significant enhancements in the areas of tape automation. For example, Version 3 Release 1 (V3R1) of BRMS/400 for CISC processors saw enhancements on Dynamic Retrieval and improvements in BRMS/400 networking. It saw the introduction of the chargeable OS/400 Media and Storage Extensions (QMSE) feature for OS/400. Communications for 3494 Automated Tape Library Data Server was integrated into OS/400 and new media library commands were introduced. Other commands were changed. For example, Confirm Moves using BRM (CFMMOVBRM) was changed to Verify Moves using BRM (VFYMOVBRM); Save Recovery using BRM (SAVRCYBRM) was changed to Save Media Information using BRM (SAVMEDIBRM). Version 3 Release 6 (V3R6) of BRMS/400 for RISC processors represents the total integration of tape automation. Media library devices are now fully functional devices with configurations and resources. All of the OS/400 commands for tape and cartridges use the media library (MLB) device. The 3494 Media Library 4 Backup Recovery and Media Services for OS/400 Device Driver (MLDD) application and the corresponding subsystems are not required. Additional enhancements have also been made to BRMS/400 in the areas of backup functions and media management. Version 3 Release 2 (V3R2) of BRMS/400 for CISC processors has many of the features that were available in V3R6 but retains its identity with V3R1. The functions are equivalent to those provided with the BRMS/400 V3R1 release. Version 3 Release 7 (V3R7) of BRMS/400 for RISC processors includes the enhancements that were made with BRMS/400 V3R2. For example, BRMS/400 supports the enhancements made under OS/400 save and restore commands to use optimum block size for significantly improving the save and restore performance using an IBM 3590 tape drive. 
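As an illustration of the optimum block size support just mentioned: on the native OS/400 save commands the corresponding option is the Use optimum block size (USEOPTBLK) parameter. The following one-line sketch assumes a library named PAYROLL and a 3590 drive configured as TAP01; both names are examples only and are not taken from this redbook:
   SAVLIB LIB(PAYROLL) DEV(TAP01) USEOPTBLK(*YES)
The save can run noticeably faster on devices such as the 3590 that support the larger block size, but the resulting volume can only be processed by devices and releases that also support it; the same trade-off applies to the corresponding BRMS/400 device parameter discussed in 3.4.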
Version 4 Release 1 and Version 4 Release 2 (V4R1 and V4R2) of BRMS/400 for RISC processors includes enhancements to support generic folder names for backup and supports large tape file sequence numbers up to 16,777,215. It allows the ability to omit an ASP from a backup, an *ERR keyword on select commands to help identify the objects in error on a backup, and other command and menu improvements. Version 4 Release 3 (V4R3) of BRMS/400 for RISC processors includes support for Hierarchical Storage Management functions, such as migration, archiving, and retrieval across storage layers, and the ability to use the AS/400e as an ADSM/400 client. Version 4 Release 4 (V4R4) and Version 4 Release 5 (V4R5) of BRMS/400 for RISC processors includes a re-packaging of the product options, support for parallel save, support for online backup of Domino servers, and the introduction of functional usage models. A rich portfolio of functions is now available from the OS/400 and BRMS/400 combinations. It is a challenging portfolio for those in the process of migration, for those who have mixed levels of software in a network, and for those who are introducing new media types and having them coexist with the existing types. At times, it can be difficult to remember the enhancements made in every release. One way you can be certain of enhancements within a particular release of BRMS/400 is to understand the actual release cycle. For example, V3R7 provides functional equivalency with V3R2 and contains additional enhancements. Likewise, V3R2 provides functional equivalency with V3R6, and contains additional enhancements. You can draw similar comparisons for V3R6 and V3R1. We strongly recommend that you move to the latest BRMS/400 release to achieve the most benefits from the significant enhancements that BRMS/400 offers. Hint Chapter 1. Backup Recovery and Media Services/400 introduction 5 1.4 Scope of this book This redbook recognizes the challenges of having multiple BRMS/400 releases within a network and aims to provide pointers to areas where special focus is needed. The authors have made an attempt to pull together the threads of the overall picture. However, it is not the objective of this redbook to paint the picture itself. For more detail and for self-education, you are still asked to refer to the BRMS/400 manuals and other information that has already been published. We have taken BRMS/400 V3R1 as the starting level and assume that most people are already familiar with the BRMS/400 functions. We have not addressed V2R3 or V3R0M5 because these releases of OS/400 were no longer supported by IBM at the time this redbook was published. Most of the examples documented in this book are primarily based on V3R2, V3R6, or V3R7 releases of BRMS/400. Note: We intend to update all the relative information in this publication to V4R5 at a later date. In writing this book, we assume that you have a working knowledge of the basics of BRMS/400. The redbook attempts to focus on areas that are not so familiar such as automated tape libraries, managing BRMS/400, networking BRMS/400 for media synchronization, the integrated file system, and automated recovery. 6 Backup Recovery and Media Services for OS/400 © Copyright IBM Corp. 1997, 2001 7 Chapter 2. Installation planning for BRMS/400 Implementing an effective and practical backup, archive, recovery, and retrieval strategy requires considerable planning and management efforts. 
In general, the strategy that you develop and use for your backup is dictated by your plans for recovery. This chapter addresses the planning considerations for BRMS/400 along with details on how to install BRMS/400 on your AS/400 system. For additional planning information on backup on recovery functions, you should also consult Backup and Recovery - Basic, SC41-4304. You also need to be aware of the various functional enhancements that have been made to the BRMS/400 releases since V3R1. See Appendix A, “Summary of changes” on page 289, for additional information. 2.1 Before you begin Before you begin using BRMS/400, review your backup and recovery strategy. If you have not used BRMS/400 before, review your skills requirements and education and training opportunities available to you. Read the implementation considerations in the following sections of this redbook. 2.1.1 AS/400 systems Review where BRMS/400 is going to be installed. Even if you are planning to install BRMS/400 on a single system initially, we strongly recommend that you plan as if you were implementing a BRMS/400 solution across multiple AS/400 systems. Your machine type (that is, CISC or RISC processor) and your OS/400 release are also important for planning considerations. Some of the important tasks that you should consider are: • Is the system name going to change? Many installations retain the S44XXXXX system name that was shipped with their system. While this is a perfectly valid system name, it is less manageable than, for example, SYSTEM01, SYSTEM02, and so on. BRMS/400 caters to changes in a system name. However, updating the media information on every system in a large network to reflect the new name can be a significant task. We, therefore, recommend that if you intend to change your system names, make the change prior to loading BRMS/400. If you plan to have a network of AS/400 systems, ensure that the system names appropriately identify them within your organization. • If you are installing BRMS/400 on a new system, we recommend that you have the latest OS/400 release (V3R2 for CISC processors, V3R7 for RISC). These releases provide you with the latest BRMS/400 enhancements. See Appendix A, “Summary of changes” on page 289, for details on the enhancements. • If you have an automated tape library (ATL), understand how it will be shared between multiple systems. You also need to understand how the systems share tape media and make provisions to have sufficient media in the shared scratch pool. 8 Backup Recovery and Media Services for OS/400 • One of the strengths of BRMS/400 is its ability to manage media inventory on a single AS/400 system or multiple AS/400 systems. To achieve this, you must have unique volume identifiers in your media inventory. See 2.1.3, “Media naming convention”, for more information. 2.1.2 Media In addition to strategies for save and restore, you should have a strategy for media to use for your save and restore. This should include the number of copies of your saved objects that you keep, where you keep these copies, and which media to use. It ensures that, in the event of a backup being unavailable or unreadable, you can restore the system from another copy. You should consider keeping at least one of these backups off-site to protect your data in the event of a major disaster, such as fire or flood, at your main site. 
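Returning to the system naming recommendation in 2.1.1: the AS/400 system name is a network attribute, so it can be reviewed and changed from the command line before BRMS/400 is loaded. A minimal, hypothetical sequence (SYSTEM01 is only an example name) is:
   DSPNETA                       (display the current network attributes, including the system name)
   CHGNETA SYSNAME(SYSTEM01)     (change the system name; the new name takes effect at the next IPL)
Making any such change before you load BRMS/400 and enroll media avoids having to update the media information on every system in the network afterwards.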
2.1.3 Media naming convention
To successfully manage all of your media volumes either on a single AS/400 system or on multiple AS/400 systems, it is vital that you have some thoughts on how you are going to name your media. BRMS/400 tracks your media volumes by their volume identification, and duplicate media volumes within a BRMS/400 network can create problems. Even if you plan to install BRMS/400 on a single system initially, it is important that you allow for a potential networking of AS/400 systems using BRMS/400.
The following items will help you design standards for your media volumes:
• Scratch pool: With a scratch pool, tapes are not allocated to specific sets. When a tape is required for output, any available scratch tape can be used. This requires that you keep an inventory of all tapes so that available tapes can be identified. The advantage is that tapes do not need to be allocated in advance. If the inventory is well managed, tape usage can be balanced rather than some tapes being used more than others. You can control the retention periods down to the file level on the tape. A scratch pool is easily managed by BRMS/400, which is the preferred option. Table 1 shows an example of such a pool.
Table 1. Media scratch pool
A1001 A8276 A3456 A1223 A1234 A4356 A2376 A6453
A6778 A3450 A4390 A5697 A3432 A0976 A0124 A3211
A2144 A7666 A3323 A8909 A7366 A0343 A4432 A2390
A5466 A3345 A3333 A5444 A1111 A2232 A2222 A4443
A5678 A7654 A6543 A4321 A9876 A2109 A1098 A1087 ... Annnn
Note: Select any tape from the scratch pool.
• Numbered volume identifiers: Since customized tape labels are more expensive than standard numeric labels, you may assign a range of numbers based on the number of systems that you have in your enterprise as follows:
1000 through to 1999   SYSTEM01
2000 through to 2999   SYSTEM02
3000 through to 3999   SYSTEM03
4000 through to 4999   SYSTEM04
5000 through to 5999   SYSTEM05
6000 through to 6999   SYSTEM06
.... and so on
Note: This technique may not be suitable if there are plans to merge two enterprises that adopt the volume naming conventions described in the preceding example.
Note: If you already use this system, you can change to a scratch pool without renaming the media. With the scratch pool, any AS/400 system in the BRMS/400 network can use an expired volume so your volumes may not always get used by the system to which they were originally assigned. This should not concern you, since within a BRMS/400 network, the media information is shared across all of the AS/400 systems that are participating in the network. Most importantly, you have a unique volume in the media inventory that you can track and manage using BRMS/400.
• Alphanumeric volume identifiers: This approach allows you to prefix your volume identifiers with some alphabetic characters that are meaningful to the system or applications that run on it (for example, multiple warehouses running on multiple systems).
xx1000 through to xx1999   SYSTEM01
xx2000 through to xx2999   SYSTEM02
xx3000 through to xx3999   SYSTEM03
xx4000 through to xx4999   SYSTEM04
xx5000 through to xx5999   SYSTEM05
xx6000 through to xx6999   SYSTEM06
.... and so on
Here, xx identifies your system. With this approach, you may not have the same issues of duplicate volume identifiers, but labeling (for use in a tape library) may become expensive.
When you physically label cartridges for the 3494 Automated Tape Library Data Server, you add an E as the suffix (seventh character) to the enhanced capacity 3490 cartridges and a J to the 3590 cartridges. Within BRMS/400, you do not have to create special volume identifiers for these types of cartridges. BRMS/400 automatically adds the suffix during media enrollment.
2.1.4 Storage locations
Storage locations identify where your media resides throughout its life-cycle. One example is to have a storage location of OFFSITE. The purpose of taking an offline copy of your system and applications is to protect against a major failure. Save files in a user auxiliary storage pool (ASP) do not protect you if your entire AS/400 configuration is affected. Keeping your offline tapes in a rack next to the AS/400 system may be fine for retrieving them quickly, but a fire or flood in the computer room can affect these as well as your online data. Even a fire-proof safe or vault close to the computer room cannot guarantee a fully-protected environment in the event of an explosion or major fire. Therefore, you should plan to have at least one copy of your backups stored off-site. You should consider two off-site copies (in different locations) for your most critical objects.
Moving media between storage locations can be scheduled on a daily basis. However, if you use a specialist service to move your media, you may have agreed to a schedule other than the recommended daily schedule. In this case, use the Calendar for Move days in the BRMS/400 move policy to ensure that media moves are scheduled to correspond with the collection schedule.
Note: Do not forget to include a copy of your updated recovery report with the media. It is also a good idea to keep a copy of your recovery procedures off-site with the media. This ensures that you have procedures to follow even if your main site has been destroyed.
2.1.5 Tape drives and media types
There are many different media types and associated devices on an AS/400 system that can be used for storing offline copies. The most common media types are:
• A ¼-inch cartridge
• A ½-inch cartridge
Select the Support fast path from the options. 3. Select AS/400 under Integrated mid-market business servers. This takes you to the iSeries and AS/400 Technical Support home page. 4. Select the Technical Information and Databases fastpath. 5. Select the Authorized Problem Analysis Reports (APARS) fastpath. This takes you into a multiple selection display for APARs. 6. Select the All APARs by Component fastpath. This gives you a list of licensed program products by their release. 7. Select the 57XXBR1 - BRMS/400 component fastpath to review all the PTFs and APARs related to the product for your appropriate release of BRMS/400. Before you begin installing BRMS/400 on your AS/400 system, make sure you have: Your SAVSYS activity is restricted by your alternate IPL device. You must also consider whether you need to be able to read your offline backups on another system and what limitations that may impose. Hint 12 Backup Recovery and Media Services for OS/400 • Appropriate documentation. At a minimum, you should have the latest copy of: – Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). – Automated Tape Library Planning and Management, SC41-5309, if you have a library device. – AS/400 Road Map for Changing to PowerPC Technology, SA41-4150, if you are planning to upgrade from CISC to RISC. • 57xxSS1 Option 18 - Media and Storage Extensions (MSE) is installed on your system. Use GO LICPGM and option 10 (Display installed licensed programs) to verify. MSE is a prerequisite for using BRMS/400. It should be installed using option 1 on the LICPGM menu or by using the Restore Licensed Program (RSTLICPGM) command. If this feature is not installed, you receive messages in the job log (CPD3D91 and CPF9899) indicating that the save did not complete. Once BRMS/400 is successfully installed, it registers two exit programs in the registration information. If you install MSE after you install BRMS/400 licensed programs, it is necessary to issue the following command: INZBRM OPTION(*DATA) This automatically registers the exit programs. You can verify the registration by entering the Work with Registration Information (WRKREGINF) command. Then, check the following exit points and exit programs by selecting option 8 (Work with exit programs) for these entries: Exit Point Exit Program Library ---------- ------------ -------- QIBM_QTA_STOR_EX400 Q1ACSX QBRM QIBM_QTA_TAPE_TMS Q1ARTMS QBRM • BRMS/400 licensed program: Also the latest cumulative PTF package and the latest BRMS/400 PTFs. • Library QSYS2 in your system library list. Use the Work with System Values (WRKSYSVAL QSYSLIBL) command to check, and add the QSYS2 library to the system library list, if required. • The correct authorization to your user profile. You need QSECOFR special authority. • BRMS/400 user license details. You do not need this to install BRMS/400, but you do need it afterwards to enroll your media as “users”. You need to change license information before you can use any media through BRMS/400. • AS/400 Media Library Device Driver (MLDD - 5798RZH) installed with the latest PTFs for MLDD. MLDD is only required if you are using the 3494 Automated Tape Library Data Server with OS/400 V3R1 or V3R2 (CISC-based processors). It is not required for AS/400 with PowerPC technology with V3R6 or V3R7. For additional information about MLDD installation and setup on your AS/400 system, see IBM 3494 User’s Guide: Media Library Device Driver for Application System/400, GC35-0153. Note Chapter 2. 
Installation planning for BRMS/400 13 If BRMS/400 is not already installed on your system, enter GO LICPGM on the command line and select option 11 to install the Licensed Program Product. Alternatively, you can use the Restore License Program (RSTLICPGM) command to install BRMS/400. After the licensed program is successfully installed, you need to load the latest cumulative PTF package for BRMS/400 and any additional PTFs that you may have downloaded using Electronic Customer Support (ECS). This completes your BRMS/400 installation. BRMS/400 creates two libraries on your system: QBRM and QUSRBRM. The QBRM library contains BRMS/400 program objects. The installation program also copies all of the BRMS/400 commands into the QSYS library. The QUSRBRM library is used to store BRMS/400 database objects and logs, including a history of media information, user-defined control groups, policies, and other installation specific information. We strongly recommend that you include these two libraries in a backup control group to be saved for disaster recovery purposes. After you have installed BRMS/400, verify that the Allow user domain in user libraries (QALWUSRDMN) system value is set to *ALL, which is the default shipped value. This value allows user domain objects in libraries and determines which libraries on the system may contain the user domain objects *USRSPC (user space), *USRIDX (user index), and *USRQ (user queue). If this value is not set to *ALL, you must add QBRM and QUSRBRM libraries to the list of libraries specified for the QALWUSRDMN value. 2.2.1 Updating BRMS/400 license information Before you can use and manage any media through BRMS/400, you are required to update the licensing information. Use the Change License Information (CHGLICINF) command to change the license information as shown in Figure 2 on page 14. Beginning with V3R2 and V3R7, a default user profile QBRMS is shipped as part of OS/400 even if you do not install BRMS/400. This user profile QBRMS must not be deleted. The rationale behind shipping a QBRMS profile as part of OS/400 is to resolve security and authority related issues with BRMS/400 during a recovery, since BRMS/400 code is required to run before the rest of the user profiles are restored. Section 5.3.1, “Network security considerations” on page 101, discusses additional considerations related to QBRMS user profile and secured networks. Note 14 Backup Recovery and Media Services for OS/400 Figure 2. Changing BRMS/400 license information Although the BRMS/400 license is purchased in groups of 10 media, you have to enter the total number of media on this display. For example, if you have purchased a license for 20 media, you should enter 200 in the Usage limit parameter. Tape media licenses are ordered in blocks of 10, with a maximum charge for 500 tape media per basic license. If you purchased an unlimited license for BRMS/400, you should enter *NOMAX for the Usage limit parameter. Usage limit is monitored and controlled by the license management functions of OS/400. Note: If you are upgrading from a V2R3 system to a V3R1 or a later release, you must register your media using the INZBRM *REGMED command. This time stamps the media at the time the command is run. If you continued to update media on other systems in your network during this process, the updates may have an older time stamp and are ignored. Make sure that all network activity has completed before you register the media. 
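The license update shown in Figure 2 can also be entered directly as a command rather than through the prompt display. The following is a sketch only; 57xxBR1 and V3 are the placeholder product identifier and license term used in the figure, 5050 and *NOMAX are the feature and usage limit values shown there, and the parameter keywords are the usual CL keywords for those prompts:
   CHGLICINF PRDID(57xxBR1) LICTRM(V3) FEATURE(5050) USGLMT(*NOMAX)
If you are coming from a V2R3 system, the media registration command referred to in the note above would, consistent with the OPTION keyword used elsewhere in this chapter, be entered as:
   INZBRM OPTION(*REGMED)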
2.2.2 Initializing the BRMS/400 environment Although a default BRMS/400 environment is created after you install the product, we recommend that you use the Initialize BRM (INZBRM OPTION(*DATA)) command to update the BRMS/400 definitions. For example, the command checks all of your hardware changes in conjunction with media devices in between the installation of the BRMS/400 licensed program and the beginning of the setup of BRMS/400. The INZBRM command builds default control groups, BRMS/400 policies, and tables based on the characteristics of the system that is being initialized. If you are re-installing on a V3R6, V3R7, or a V3R2 system, you might choose to use INZBRM OPTION(*DEVICE). This performs the same functions as INZBRM OPTION(*DATA), as well as clearing the device and media library information. It re-initializes the BRMS/400 files only with information on the tape units that are currently configured on your system, resetting defaults as it does so. You should review these defaults if you have implemented your own specific environment for BRMS/400. Change License Information (CHGLICINF) Type choices, press Enter. Product identifier . . . . . . . > 57xxBR1 Identifier License term . . . . . . . . . . > V3 Vx, VxRy, VxRyMz, *ONLY Feature . . . . . . . . . . . . > 5050 5001-9999 Usage limit . . . . . . . . . . *NOMAX 0-999999, *SAME, *NOMAX Threshold . . . . . . . . . . . 0 0-999999, *SAME, *CALC... Message queue . . . . . . . . . *NONE Name, *SAME, *NONE, *OPSYS Library . . . . . . . . . . . Name, *LIBL, *CURLIB + for more values Log . . . . . . . . . . . . . . *NO *SAME, *NO, *YES Chapter 2. Installation planning for BRMS/400 15 You are now ready to use BRMS/400 on your AS/400 system. Before you start tailoring BRMS/400 to meet your requirements, we recommend that you become familiar with the BRMS/400 menu options, commands, and their parameters. 2.3 BRMS/400 menus and commands To start using BRMS/400, enter GO BRMS from any command line. This takes you to the BRMS main menu as shown in Figure 3. Figure 3. BRMS/400 main menu Beginning with V3R2 and V3R7, an additional option was added to the BRMS/400 main menu (option 12; Reports) as shown in Figure 3. These reports include: • Media expiration report (QP1AEP) • Media report (QP1AMM) • Media information report (QP1AHS) • Media movement report (QP1APVMS) • Media volume statistics report (QP1AVU) • Saved objects report (QP1AOD) • Link information report (QP1ADI) • Recovery activities report (QP1ARW) • Recovery analysis report (QP1ARCY) • BRMS/400 log report (QP1ALG) From this BRMS/400 main menu, you can “drill down” to the media management functions, backup, archive, recovery, retrieve, scheduling, and report analysis menus. If you select F13 from the BRMS/400 main menu, you go to some of the commonly used BRMS/400 functions as shown in Figure 4 on page 16. BRMS Backup Recovery and Media Services/400 System: SYSTEM09 Select one of the following: 1. Media management 2. Backup 3. Archive 4. Recovery 10. Scheduling 11. Policy administration 12. Reports 16 Backup Recovery and Media Services for OS/400 Figure 4. BRMS/400 functions Selecting F10 from the BRMS/400 main menu takes you to a list of all of the BRMS/400 commands grouped by functional area (Figure 5). This is the equivalent of typing GO CMDBRM on the command line. Figure 5. BRMS/400 commands by functional areas Alternatively, you can use the Select Command (SLTCMD QBRM/*ALL) command to list all of the commands in library QBRM in an alphabetical sequence. 
Finally, you can access BRMS/400 functions directly by explicitly entering the menu name. For example, you can enter GO BRMSYSPCY to access the System Policy Menu or the Work with Control Groups in the BRM (WRKCTLGBRM) command. BRMF Functions System: SYSTEM09 Select one of the following: 1. Move management 2. Display log 3. Work with expired media 4. Save BRM save files to tape 5. Schedule BRM maintenance 6. Restart subsystems 7. Work with job scheduler 8. Duplicate media 9. Work with active jobs 10. Work with spooled files 11. Work with system status 12. Display system operator messages CMDBRM BRMS/400 Commands System: SYSTEM09 Select one of the following: Media commands 1. Add media to BRM ADDMEDBRM 2. Add media information to BRM ADDMEDIBRM 3. Add media library media to BRM ADDMLMBRM 4. Change media using BRM CHGMEDBRM 5. Copy media information using BRM CPYMEDIBRM 6. Display duplicate media DSPDUPBRM 7. Duplicate media using BRM DUPMEDBRM 8. Initialize media using BRM INZMEDBRM 9. Move media using BRM MOVMEDBRM 10. Print labels using BRM PRTLBLBRM 11. Print media movement PRTMOVBRM 12. Print media exceptions for BRM PRTMEDBRM 13. Remove media volumes from BRM RMVMEDBRM More... © Copyright IBM Corp. 1997, 2001 17 Chapter 3. Implementing BRMS/400 This chapter describes the implementation of a BRMS/400 environment for a single AS/400 system. Special considerations about different releases of BRMS/400 and about automated tape libraries are also included. See the “BRMS/400 Overview and Installation” chapter in Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for additional information. The BRMS/400 functions for archive and retrieval are not covered here because they closely follow the functions provided by the backup and recovery policies. These are covered in 13.3, “Using BRMS/400 for hierarchical storage management” on page 267. The chapter is presented in order of implementation. With some exceptions (for example, the optional sections for containers), the information created in each section is required for subsequent sections. This chapter does not cover the actual installation and configuration instructions for using automated tape libraries with BRMS/400. See Chapter 7, “AS/400 hardware support for automated tape libraries” on page 151, and Chapter 9, “Implementing automated tape libraries” on page 165, for information on using automated tape libraries with BRMS/400. Where required, this chapter highlights the importance of setting some of the parameters correctly if you have a media library attached to your AS/400 systems. These parameters are discussed during the various implementation stages throughout this chapter. You should also review the BRMS/400 enhancements that are highlighted in Appendix A, “Summary of changes” on page 289. 3.1 Getting started with BRMS/400 The following list provides an overview of the tasks that you need to complete when setting up BRMS/400. 
All of these tasks are discussed in detail throughout this chapter: • Storage locations • Media devices • Media library devices • Media classes • Containers • Move policies • Media policies • Default system, archive, recovery, and retrieval policies • Backup policies • Backup control groups • Enrolling and initializing media • Performing a save operation • Review status of media • BRMS maintenance and report printing • Recovery test 18 Backup Recovery and Media Services for OS/400 3.2 The building blocks of BRMS/400 As discussed in Chapter 2, “Installation planning for BRMS/400” on page 7, defining your company's backup strategy involves making decisions that reflect your company's own business policies. These decisions are implemented in BRMS/400 as follows: • What: The first decision is what to back up. This information is held in the backup control group. The timing of the backup is determined by how often you schedule the backup of each backup control group. You also need to identify any dependencies. • How: Having determined what to backup, the next task is to choose the media. This is determined by media class, which is determined by the media policy. The media policy also specifies if the data should be “staged” through a save file before being committed to the media. The media policy is specified in the attributes of the backup control group. • Where: The next decision is what to do with the media that now contains the latest backup. Typically, media is moved into a fireproof safe, to another location, or to a combination of both. The journey that media makes after it has been used until it expires and returns to the home location is defined in a move policy. The move policy is specified in the media policy. • How long: The retention period of the data (that is, until it is no longer required) is the next piece of information. This period varies. Nightly backups may need to be retained for one week, where monthly backups may need to be retained for one or more years. The retention information is specified in the media policy. Before you start implementing BRMS/400, you should decide on the naming conventions that you will use for your media policies, media classes, move polices, volume identifiers, and control groups. The naming conventions become more and more important when you use automated tape libraries along with BRMS/400. See 2.1.3, “Media naming convention” on page 8, for more information. 3.3 Storage locations Storage locations define any place where media is stored. Two storage locations are provided as defaults with BRMS/400: • *HOME: The default on-site storage location • VAULT: The default off-site storage location We recommend that you leave these defaults unchanged and create additional storage location entries to match the additional locations that you want BRMS/400 to manage. BRMS/400 refers to storage locations in several places: • System Policy: “Home Location” • Media Policy: “Storage Location” • Device Description: “Device Location” • Move Policy: “Home Location” Chapter 3. Implementing BRMS/400 19 When BRMS/400 encounters a tape that has a location error (a rare occurrence), it assigns that tape to the “Home Location” in the system policy. You can create your own location to capture any errors such as DONOTUSE. The “Storage Location” in the media policy instructs BRMS/400 where to look for a tape to perform your backup. Normally this is the scratch pool or the automated tape library, but it can also be another location. 
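A brief, hypothetical illustration of the error-capture idea above (DONOTUSE is simply the example name used in this chapter; substitute one that fits your own conventions):
   WRKLOCBRM          (Work with Storage Locations)
From the resulting display, use the add option to create a location named DONOTUSE with a description such as "Lost, damaged, or retired volumes". You would probably leave Allow volumes to expire set to *NO for such a location, so that volumes tracked there never return to the scratch pool by accident. Lost or worn-out volumes can then be kept in the media inventory against this location instead of being removed and forgotten.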
The default for the storage location parameter in the media policy is *ANY. You should review this parameter, especially if you permit media to expire in a location other than the “home” location so that BRMS/400 does not request the mount of a tape that is not even on-site. If you have media libraries, you have to be careful how you specify the storage location to ensure it only indicates tapes that are “inside” of the library. If you have more than one library, or if you have stand-alone drives as well as a library (for example, 3590 devices inside and outside a 3494 Automated Tape Library Data Server), you need to ensure that neither requests the other's media. You also need to ensure that the device description is updated to indicate its location (for example, from *HOME to MLB01). The “Home Location” on the move policy tells BRMS/400 where it should put the tape when it completes the moves in the move policy. Typically, this is the computer room or the scratch tape rack. If you use media libraries, it may be returning from the vault to the library. Some examples of storage locations are: • COMPROOM: The main tape rack in the computer room, assuming that you do not have all of your tape media in the tape library. • MLB01: Media in a tape library. • MLB02: Media in another tape library. This tape library may be located in another building. • SCRATCH: Scratch tapes only. Tapes that have expired are stored here. • VAULT: Secure off-site storage. • DONOTUSE: Tapes that are lost or destroyed, or are past their useful life, can be “tracked” here. This location does not need to exist physically. For example, if a tape with volume ID of A10005 was damaged, it is moved to the DONOTUSE location. You can use the Work with Storage Locations (WRKLOCBRM) command to display the storage locations that are defined for BRMS/400. The WRKLOCBRM command can also be used to add, change, or remove storage locations. In addition, you can work with media or containers that are in the storage locations by selecting additional parameters when using the change option for a specific storage location. If you have different types of media, you need to ensure that your System Policy Home Location can accommodate all types. We recommend that you specify a location other than the media library for the home location. If the system identifies a mismatch on the media in the tape library, you want it to be ejected and not “returned” to the library device. Hint 20 Backup Recovery and Media Services for OS/400 Figure 6 shows an example of creating a storage location called COMPROOM. When you create a storage location, it is important that you provide the required details for name, address, contact name, contact telephone number, and so on. Figure 6. Add Storage Location example There are two important field parameters that you need to set correctly: • Allow volumes to expire: Should be set to *NO for your off-site location. You could select *YES for a storage location that is physically located near the system such as the computer room or a tape library. • Media slotting: If media is to be filed and tracked by individual slot numbers at storage locations, you must specify that you are using media slotting on the Add or Change Storage Location displays. The use of media slotting is optional and can be used for some storage locations and not for others, based on your specific storage procedures. Of the two default storage locations provided (*HOME and VAULT), *HOME is set to a media slotting value of *NO. 
VAULT is set to a media slotting value of *YES. You should change these values to match your storage procedures. Media can be assigned a slot number when it is added to the BRMS/400 media inventory using the Add Media to BRM (ADDMEDBRM) command. Slot numbers can be changed using the Change Media in BRM (CHGMEDBRM) command. Volumes moved to a storage location that allows media slotting are automatically updated with a volume slot number for the new location (beginning with the lowest available volume slot number) unless they have been assigned a slot number previously. Add Storage Location Storage location . . . . . . . . : COMPROOM Type choices, press Enter. Address line 1 . . . . . . . . . . Building 3 Address line 2 . . . . . . . . . . 1st Floor Address line 3 . . . . . . . . . . Computer Room Address line 4 . . . . . . . . . . Tape Rack near the fire safe Address line 5 . . . . . . . . . . Contact name . . . . . . . . . . . Kris Peterson Contact telephone number. . . . . . (555) 111-2222 Retrieval time . . . . . . . . . . .0 Hours Allow volumes to expire . . . . . . *YES *YES, *NO Media slotting . . . . . . . . . . *NO *YES, *NO Text . . . . . . . . . . . . . . . Onsite safe A choice of *NO indicates that volumes whose retention period has passed (as specified in the media policy) must be transferred to a location that allows tapes to expire before the media can become eligible for reuse (scratch). Note Chapter 3. Implementing BRMS/400 21 If you chose media to be stored in containers, containers processed through a move command resulting in movement to a storage location that allows media slotting are automatically updated with a container slot number for the new location (beginning with the lowest available container slot number). Media volumes assigned to containers are not assigned volume slot numbers. See Figure 7. Figure 7. Change Storage Location For additional information, see Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). 3.4 Media devices A BRMS/400 media device entry must exist for every tape unit that BRMS/400 uses. It specifies additional controls over what can be specified in the device description, for example, if the tape drive is shared between two systems. At the time of installation, BRMS/400 determines the media libraries and tape devices on your system and develops corresponding device information entries. You should review these entries for accuracy and make any necessary changes to reflect your device specifications as shown in Figure 8 on page 22. The Work with Devices using BRM (WRKDEVBRM) command shows all of the devices and their associated type and model that are defined to BRMS/400. This command also allows you to add, change, or remove a device from a list of devices that you want to use in BRMS/400 processing. If you are adding a device, it must already be defined to the system through the device description (CRTDEVD) function. Beginning with V3R6, a new function key (F8) has been added on the WRKDEVBRM display that allows you to access the Work with Configuration Status (WRKCFGSTS) display. When you add a device, you can specify both read and write densities for that device. Most devices have the same read and write densities. However, such devices as the 3490-B40 can read lower densities, but can only write in higher densities. Change Storage Location Storage location . . . . . . . . : COMPROOM Type choices, press Enter. Container count . . . . . . . . . . 0 Number Container threshold . . . . . . . . 
*NOMAX *NOMAX, Number Container maximum . . . . . . . . . *NOMAX *NOMAX, Number Volume count . . . . . . . . . . . 0 Number Volume threshold . . . . . . . . . *NOMAX *NOMAX, Number Volume maximum . . . . . . . . . . *NOMAX *NOMAX, Number 22 Backup Recovery and Media Services for OS/400 Figure 8. Changing device using BRM for V3R7 The reverse bold numbers that follow correspond to the reverse bold numbers shown in Figure 8: 1 If, for example, COMPROOM is a defined location, you should change the tape devices to be at the COMPROOM location rather than the default *HOME location. If you have a media library device, such as a 3494 Automated Tape Library Data Server, the Device location parameter should contain the same name as the media library unit. 2 The Next volume message parameter specifies whether you want BRMS/400 to notify you through messages to place another tape into the device. For media libraries (MLB), this parameter should be set to *NO. 3 The Auto enroll media parameter specifies if BRMS/400 should automatically add media used in output operations to the media inventory if the operation has been done using a BRMS/400 media class and is on this device. If you specify *YES, the number of media volumes to be registered to BRMS/400 is increased. This function is not available in V3R1. 4 The Shared device support parameter allows a tape device to be shared by multiple systems. When you specify *YES for shared devices, the device is varied on when the save or restore operation begins and is varied off when the save or restore operation ends. You should leave this parameter to *YES if you are planning to share a media library device with more than one AS/400 system. If the command that you are running specifies ENDOPT(*LEAVE), the device is left in a varied on state after your request to save or restore is complete. Change Device Information Device name . . . . . . . . . . . : TAP01 Type changes, press Enter. Type . . . . . . . . . . . . . . . 6369 2440, 3422, F4 for list Model . . . . . . . . . . . . . . . 001 001, 002, F4 for list Allow densities: Read . . . . . . . . . . . . . . *DEVTYPE *DEVTYPE, F4 for list Write . . . . . . . . . . . . . . *DEVTYPE *DEVTYPE, F4 for list Device location . . . . . . . . . 1 *HOME Name, F4 for list Next volume message . . . . . . . 2 *YES *YES, *NO Tape mount delay . . . . . . . . . *IMMED *IMMED, 1-999 Auto enroll media . . . . . . . . 3 *SYSPCY *SYSPCY,*NO, *YES Shared device . . . . . . . . . . 4 *NO *YES, *NO Shared device wait . . . . . . . . 30 Seconds Device uses IDRC . . . . . . . . . *NO *NO, *YES Use optimum block size . . . . . 5 *NO *NO, *YES Transfer rate per second . . . . . *DEVTYPE *DEVTYPE, Number nnnnn.nn Unit of measure . . . . . . . . . 1=MB, 2=GB Text . . . . . . . . . . . . . . . Entry created by BRM configuration - * QIC2GB Chapter 3. Implementing BRMS/400 23 5 The Use optimum block size parameter is available with V3R7 and can improve performance significantly. However, the tape volume produced is only compatible with devices that support the block size used (256 KB). Currently, the IBM 3570 and 3590 are the only tape devices that support the increased block size and, therefore, support this parameter. You should consider the following restrictions when you specify *YES for this parameter: • There are restrictions caused by the AS/400 operating system's inability to duplicate tape when the output tape device uses a block size that is smaller than the size of the blocks being read by the input tape device. 
• If the target release is prior to V3R7, the optimum block size is ignored because the AS/400 operating system supports this only in V3R7 and later releases of OS/400. • Choosing to use the optimum block size causes compression to be ignored. See Appendix B, “Save and restore tips for better performance” on page 301, for tips on save and restore performance. It also explains how you should set the Data compression and Data compaction parameters on the save commands when using various kinds of tape devices. 3.5 Media library device If you have a media library device (MLB), you can define the MLB to the AS/400 system through the Work with Media Libraries (WRKMLBBRM) command. You should select option 1 to add a new media library as shown in Figure 9. Figure 9. Add Media Library Library type *USRDFN permits you to define third-party media libraries. For information on third-party media libraries, refer to Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). All of the settings for devices (for example, shared device support, media library devices, vary on or vary off, allocate unprotected, and so on) depend on which libraries and which level of OS/400 are being used. See Chapter 9, “Implementing automated tape libraries” on page 165, for additional information. Note Add Media Library Type choices, press Enter. Media library . . . . . . . . . . . MLB01 Name Location . . . . . . . . . . . . . MLB01 Name, F4 for list Library type . . . . . . . . . . . *SYSTEM *SYSTEM, *USRDFN Text . . . . . . . . . . . . . . . IBM 3494 Tape Library Dataserver 24 Backup Recovery and Media Services for OS/400 3.6 Media classes Media classes define the types of physical media that are used for backup, archive, or recovery operations. Typical physical media are cartridge, reel, or any removable storage medium available on the system. Within each type of physical media, there may be a further distinction by format or capacity. At the time of installation, BRMS/400 creates media classes to match the tape devices that you have installed on your system. The Shared media parameter is set to *YES for these default media classes. You may need to create extra media classes if you have tapes that are physically different but can be read by the same tape drive. For example, a 120 MB ¼-inch cartridge is classified differently than a 525 MB ¼-inch cartridge so you create classes with meaningful names, such as QIC120 and QIC525, for each of these cartridge categories. BRMS/400 creates classes for all media types supported by the drive. The Work with Media Classes (WRKCLSBRM) command can be used to add, change, or remove media classes as shown in Figure 10. Figure 10. Add Media Class When adding a media class, you must make the text field as descriptive as possible because this field is shown on the WRKCLSBRM display. You should also consider updating the additional options that are accessed through the F10 key. Using these options simplifies the maintenance of your tape library in the future. An additional media class called SAVSYS is automatically created by BRMS/400 for the alternate IPL tape device. The Shared media prompt (highlighted in bold in Add Media Class Type choices, press Enter. Media class . . . . . . . . . . . . QIC120 Name Density . . . . . . . . . . . . . . *QIC120_ *FMT3480, F4 for list Media capacity . . . . . . . . . . *DENSITY *DENSITY, Number nnnnn.nn Unit of measure . . . . . . . . . 1=KB, 2=MB, 3=GB Mark for label print . . . . . . . 
*NONE *NONE, *MOVE, *WRITE Label size . . . . . . . . . . . . 1 1=6 LPI, 2=8 LPI, 3=9 LPI Label output queue . . . . . . . . *SYSPCY Name, *SYSPCY, *PRTF Library . . . . . . . . . . . . . Name, *LIBL Shared media . . . . . . . . . . . *YES *YES, *NO Text . . . . . . . . . . . . . . . QIC120 shared media class Media life . . . . . . . . . . . . *NOMAX Number of days, *NOMAX Usage threshold . . . . . . . . . . *NOMAX Times used, *NOMAX Read error threshold . . . . . . . 12500 Number (KB), *NOMAX Write error threshold . . . . . . . 1250 Number (KB), *NOMAX Uses before cleaning . . . . . . . *NOMAX Number, *NOMAX Media manufacturer . . . . . . . . Lexmark Manufacturer part number . . . . . DC6150 Compatible part number . . . . . . Media supplier . . . . . . . . . . IBM Direct Supplier representative . . . . . . Rolf Hahn Supplier telephone number . . . . . 800-426-2468 Reorder point . . . . . . . . . . . *NONE Number, *NONE Chapter 3. Implementing BRMS/400 25 Figure 10) for this media class is set to *NO because you do not want to share your SAVSYS media with other AS/400 systems. If you choose to create your own media class for a SAVSYS operation, we highly recommend that you leave the Shared media prompt set to *NO. This is because the AS/400 system is in a restricted state during a system save. The communication links are not active. Therefore, no check can be made that a shared volume is not also being selected on another system. Using a non-shared volume for SAVSYS avoids this problem. Beginning with V3R1, BRMS/400 networking provides additional protection for shared media in a shared media library. A DDM job is initiated to verify the status of the tapes any time one system goes to use a tape owned by another system. If DDM communications cannot be established (for example, when you are performing a SAVSYS operation or the communications link is not active), BRMS/400 does not use that tape and chooses another. 3.7 Container classes If media is to be stored in containers, you can specify container names and descriptions in the container management displays. Using containers is optional, and no default entries are created. Quarter-inch cartridges can be moved in a container defined by a class, called QICCASE, with a capacity of 20 cartridges. To update your container classes (Figure 11), you can use the command: WRKCLSBRM TYPE(*CNR) Figure 11. Container class for ¼-inch cartridges The Automatic unpack value (*YES) in Figure 11 breaks the link between the tape volumes and the container. The media can be used and assigned to another container. Likewise, other volumes can be assigned to the container. Automatic unpack in the container class essentially moves the volume to container *NONE when the volumes have expired. Note that if you move the container to be You should consider creating a user media class, such as USER3490, as the default media class so unscheduled saves do not interfere with the regular saves. Hint Add Container Class Type choices, press Enter. Container class . . . . . . . . . . QICCASE Name Container capacity . . . . . . . . 20 Number Media classes . . . . . . . . . . . QIC120 Class, *ANY, F4 for list QIC525 QIC2GB Different expiration dates . . . . *NO *YES, *NO Automatic unpack . . . . . . . . . *YES *YES, *NO Text . . . . . . . . . . . . . . . Quarter Inch Cartridge Tape Container 26 Backup Recovery and Media Services for OS/400 *NONE, the container is shown as expired immediately at V3R1. 
Beginning in V3R6, the container does not expire until after you run the Start Maintenance BRM (STRMNTBRM) command with EXPMED(*YES). You can also use the Start Expiration using BRM (STREXPBRM) command. 3.8 Containers If you created a container class, you can enroll the containers that you have. When adding the containers to the BRMS/400 database, you need to specify the container ID. This is a unique name for the container similar to the way you specify a volume ID for a tape. You specify the class to which this container belongs and also the current location of the container (Figure 12). Figure 12. Adding a container Once you have added your containers, you can use the change option to change various other parameters for your containers such as the move policy. The other values used in the container definition are changed automatically when containers are used and moved. You might want to manually change either the container status or the move policy if a different container is used than is recommended by BRMS/400. The Change Container option allows you to do this so BRMS/400 knows about any changes you make (Figure 13). Figure 13. Change Container showing a move policy 3.9 Move policy When multiple locations are used to store media for one or more AS/400 systems, BRMS/400 tracks the location of the media. You can identify when the media is moved, and reports can be produced providing a complete inventory of media held at a particular location. This is especially useful when recovering from a system failure. The BRMS/400 move policy defines the movement of media Add Container Type choices, press Enter. Container ID . . . . . . . . . . . QICCASE001 Name Container class . . . . . . . . . . QICCASE Name, F4 for list Container location . . . . . . . . *HOME Name, F4 for list Change Container Container ID . . . . . . . . . . : QICCASE001 Container location . . . . . . . : *HOME Type changes, press Enter. Container class . . . . . . . . . . QICCASE Name, F4 for list Container status . . . . . . . . . *OPEN *OPEN, *CLOSED Volume count . . . . . . . . . . . 0 Number Last moved date . . . . . . . . . . 8/17/00 Date, *NONE Expiration date of media. . . . . . *NONE Date, *NONE, *PERM Move policy . . . . . . . . . . . . MOVVAULT Name, F4 for list Slot number . . . . . . . . . . . . 2 Number Chapter 3. Implementing BRMS/400 27 between storage locations and the length of time that the media stays in each location. A default move policy of OFFSITE is created when BRMS/400 is installed. You may want to modify this move policy or create a new one. For example, if you want to create a new home location of COMPROOM to represent your computer room tape rack, a secure location of FIRESAFE to hold the media for five days, and an off-site location of VAULT, you can create a move policy as shown in Figure 14. COMPROOM, FIRESAFE, and VAULT are all storage locations that are already defined in BRMS/400 using the WRKLOCBRM command. In this case, the home location is COMPROOM. Once you save data on the tape, it is moved to the FIRESAFE. Five days later, the tape is moved to the VAULT. The tape stays in the VAULT until it expires. Once the tape expires, it is returned to COMPROOM for re-use. Figure 14. User-created move policy The reverse bold numbers that follow correspond to the reverse bold numbers shown in Figure 14: 1 It is good practice to create your own “home” location for media. 
When BRMS/400 detects an error in media movement, or when there is an anomaly (for example, if the move policy for active media is accidentally deleted), BRMS/400 moves the tape to default *HOME location as defined by the system policy. Media found in the *HOME location can be easily distinguished from normal moves to the storage location specified in the move policy. 2 You can confirm media moves automatically or manually for each move policy. If you choose to confirm media moves automatically, BRMS/400 performs this task for you when you set the Verify moves parameter to *NO. By setting the parameter to *NO, the media is moved immediately as far as BRMS/400 is concerned, although it may not have physically moved to the new location. If you choose to confirm the media moves manually, you are supplied with a Verify Media Movement display to confirm that media movement, scheduled by BRMS/400 according to this move policy, is complete. You leave the Verify moves parameter to *YES, which is the default. The decision to confirm moves comes from two points: Create Move Policy SYSTEM09 Move policy . . . . . . . . . . MOVECOM Home location . . . . . . . . . COMPROOM 1 Name, *SYSPCY, F4 for list Use container . . . . . . . . . *NO *YES, *NO Verify moves . . . . . . . . .2 *YES *YES, *NO Calendar for working days . . . *ALLDAYS Name, *ALLDAYS, F4 for list Calendar for move days . . . . . *ALLDAYS Name, *ALLDAYS, F4 for list Text . . . . . . . . . . . . . . SYS9 - Offsite storage at the Vault Type choices, press Enter. Seq Location Duration 10 FIRESAFE 5 20 VAULT *EXP 28 Backup Recovery and Media Services for OS/400 • The experience of the operators. If operators are not experienced, move confirmation ensures that operations personnel move the required volumes to meet the requirements of your backup and recovery plan. Note: A tape volume only appears on the Verify Media Moves display after the Move Media using BRM (MOVMEDBRM) command is run. See 4.1.5, “Moving media” on page 60, for additional information on this command. • The number of volumes being moved daily. If many volumes are to be moved daily, performing movement confirmation can be tedious for every volume. We recommend that you leave the Verify move parameter set to *YES until you are completely confident that media is also physically moved to the new location, as indicated by the move policy. There is no step defined in the move policy to return media to the home location. When the move pattern is complete, the media moves to the home location defined in the move policy. The ability to return to home location is important, for instance, in the case of a media library device (MLB), where tapes are only written to the MLB itself. For additional information on using the Calendar options within a move policy, see Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for your appropriate release. 3.10 Media policy The key to a successful implementation of BRMS/400 is the media policy. As shown in Figure 15, the media policy ties together much of the required information to implement BRMS/400. The media policy combines the media management characteristics and defines the retention of the data that is being saved. When saving through BRMS/400, you have to specify a media policy. The media policy directly defines the type and length of retention for data saved on media. It also references the media class and move policy to be used for the save. 
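If you prefer the command line to the BRMS/400 menus, the move policies described in 3.9 and the media policies described in this section can also be reached directly with the Work with Policies using BRM (WRKPCYBRM) command. As a minimal sketch (verify the TYPE values with F4 prompting on your release):
WRKPCYBRM TYPE(*MOV)
WRKPCYBRM TYPE(*MED)
The first command lists the move policies, such as the MOVECOM policy created earlier, and the second lists the media policies discussed in this section.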
• The value you specify in the Duration field is important when you create a move policy. Besides being able to enter the number of days or a specific date that you want to keep the media in that particular location, you can also specify *EXP (expire media) or *PERM (permanent retention in that location). Move policy entries after a *PERM entry are ignored for move processing since move policies move only active volumes that are not assigned a permanent storage location. If you want to retain the volumes permanently for audit records, you should specify *PERM in the duration field. • If you are planning to use APPEND(*YES) as part of your backup policy, you must make sure that the move policy keeps the tape on-site for enough days. See 3.13.1, “Appending to media rules” on page 44, for details on how BRMS/400 selects volumes for append processing. Hint Chapter 3. Implementing BRMS/400 29 Figure 15. Media management summary We recommend that you create a media policy for every combination of retention, media location, media class, or move policy that you plan to use. With the installation of BRMS/400, there are three default media policies: • FULL (35 days retention) with a move policy of OFFSITE • INCR (incremental, 14 days retention) with a move policy of *NONE • ARCHIVAL (1725 days retention) with a move policy of *NONE Figure 16 on page 30 shows a change to the default media policy FULL to include the MOVECOM move policy that we created earlier. Save File Tape Device Entry - Type - Attributes Media Class - Type - Density Media Policy - Media Class - Retention - Move Policy - Save File Move Policy - Containers - Loc'n A - Loc'n B - Durations Backup or Archive Control Group Attributes - Media Policy - Devie Container Class - Media classes - Capacity Container Container Container Location A Location B *HOME 30 Backup Recovery and Media Services for OS/400 Figure 16. Change Media Policy example The Storage location parameter is particularly important when using a save command that specifies the device as *MEDCLS in the system policy. By specifying a value other than *ANY in the Storage location parameter, BRMS/400 assures that a save or a restore operation is directed to a proper devices. For example, if you have a 3490 device in the MLB and a 3490 device as a stand-alone unit, the *MEDCLS parameter in the system policy directs the save operation to the MLB or non-MLB device based on the media policy and its associated storage location value. If *ANY is specified, your save goes to any available tape device. In order for your saves to go directly to the MLB, you have to specify the location name of the MLB that you have created, such as MLB01. Change Media Policy Media policy . . . . . . . . . . : FULL Type choices, press Enter. Retention type . . . . . . . . . . 3 1=Date, 2=Days, 3=Versions, 4=Permanent Retain media . . . . . . . . . . 2 Date, Number Move policy . . . . . . . . . . . MOVECOM Name, *NONE, F4 for list Media class . . . . . . . . . . . QIC120 Name, *SYSPCY, F4 for list Storage location . . . . . . . . . *ANY Name, *ANY, F4 for list Save to save file . . . . . . . . *NO *YES, *NO ASP for save files . . . . . . . 01 1-16 Save file retention type . . . . 4 1=Date, 2=Days, 3=Permanent, 4=None Retain save files . . . . . . *NONE Date, Number, *NONE ASP storage limit . . . . . . . 90 1-99 Required volumes . . . . . . . . . *NO *YES, *NO Secure volume . . . . . . . . . . *NO *YES, *NO Text . . . . . . . . . . . . . . . SYS9 - Media Policy FULL for QIC120 Secure volume . . . . . . . 
. . . *NONE *NONE, 1-9999 Mark volumes for duplication . . . *NO *NO, *YES Use care if you choose versions for retention of media. For example, assume that you are saving *ALLUSR with a retention of three versions. After the second save, you delete TESTLIB from your system. The next save does not include TESTLIB and, therefore, this library never reaches the third version. Media containing this library, therefore, normally does not expire. To expire the media, you must use the Work with Media using BRM (WRKMEDBRM) command and select option 7 for the volume to expire the media. Alternatively, you can use the Start Expiration for BRM (STREXPBRM) command as shown in Figure 17. Important Chapter 3. Implementing BRMS/400 31 Figure 17. Expiring media using the STREXPBRM command 3.11 BRMS/400 policies Policies define the controls and default values for BRMS/400 and the various operational tasks required for media management and movement, backup, archive, and recovery. The seven types of policies are: • System policy • Media policy • Move policy • Backup policy • Archive policy • Retrieve policy • Recovery policy References to the default values can be easily identified by the parameter keywords as follows: • *SYSPCY: System policy • *BKUPCY: Backup policy • *ARCPCY: Archive policy Be sure to review these policies and update the values to suit your installation. They can be accessed by selecting option 11 from the BRMS main menu. For additional information, see the “Policy Administration” section in Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). Archive and retrieve policies are also discussed in 12.1.1, “How archiving is done by BRMS/400” on page 217, and in 12.8, “Retrieval methods” on page 231. During the initial implementation of BRMS/400, you should review the system policy and the backup policy to ensure that the default values match your backup and recovery strategy. 3.11.1 System and backup policies The system policy is the same as a set of system values. Unless other controls are in effect, the system policy determines the default for all users. The system policy provides defaults for the following items: • Default media policy, tape device, location of media • Whether to sign off interactive users before a backup or archive function is started, or specify a list of users and devices that continue to remain active. Start Expiration for BRM (STREXPBRM) Type choices, press Enter. Active file count . . . . . . . > 0 0-999 Active file action . . . . . . . > *EXPMED *REPORT, *EXPMED File retention type . . . . . . > *VERSION *ANY, *VERSION Select creation dates: Beginning creation date . . . *BEGIN Date, *CURRENT, *BEGIN, nnnnn Ending creation date . . . . . *END Date, *CURRENT, *END, nnnnn 32 Backup Recovery and Media Services for OS/400 • List of subsystems to check before performing an IPL. If any of the subsystems in the list are active when an IPL is scheduled, BRMS/400 does not perform an IPL. • Presentation controls such as characters used for full backup, incremental backups, and defining the first day of the week. • License information and default values for displaying BRMS/400 log. For additional information and explanations for each of these items, see Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). An example of changing the system policy is shown in Figure 18. Figure 18. 
Changing defaults for the BRMS/400 system policy As the need for system availability increases, the window of opportunity for backup decreases. Therefore, it may be necessary to schedule backups before midnight that continue into the following morning. This presents a challenge to operations to manage the daily backup, since a portion of it will have the next day's date which has an effect on media movement and on expiration. There is also the possibility that the after-midnight media can be confused with the following evening's media. The Day start time parameter in the System Policy allows you to change the start of day from 0:00:00 to another time (for example, 06:00:00). Any media created before the time set in this parameter is treated as having been created the previous day. Therefore, this makes it much easier to run saves over midnight and keep all of the media together when performing the movements. You may want to create a special output queue for BRMS/400, such as BRMOUTQ. You can then specify the new output queue in the system policy. This way, all of your BRMS/400 related spooled files are directed to the BRMOUTQ output queue. V3R2M0 Change System Policy SYSTEM09 Type choices, press Enter. Media policy . . . . . . . . . . . . . . FULL Name, F4 for list Devices . . . . . . . . . . . . . . . . *MEDCLS Name, F4 for list Home location for media . . . . . . . . *HOME Name, F4 for list Media class . . . . . . . . . . . . . . QIC120 Name, F4 for list Sign off interactive users . . . . . . . *NO *YES, *NO Sign off limit . . . . . . . . . . . . . 30 0-999 minutes Output queue . . . . . . . . . . . . . . *PRTF Name, *PRTF Library . . . . . . . . . . . . . . . Name, *LIBL Day start time . . . . . . . . . . . . . 0:00:00 Time Media monitor . . . . . . . . . . . . . *YES *YES, *NO Shared inventory delay . . . . . . . . . 60 30-9999 seconds Auto enroll media . . . . . . . . . . . *YES *NO, *YES Chapter 3. Implementing BRMS/400 33 You may also want to change the First day of week parameter value in the Change Presentation Controls display shown in Figure 19. Figure 19. Change Presentation Controls display Most users prefer Monday as the first day of the week. Therefore, the value should be changed from SUN to MON (Monday). As with the system policy, you can also change the backup policy to tailor some of the parameters based on your backup strategy. For example, you may want to save the information that forms part of your backup history at the object level instead of the library level. You can do so by setting the Automatically backup media information parameter to *OBJ as shown in Figure 20 on page 34. The default is *LIB. Change Presentation Controls SYSTEM09 Type choices, press Enter. Character representing full backup . . . . . . . . . . . . F Character Character representing incremental backup . . . . . . . . . I Character Character representing general activity . . . . . . . . . . * Character First day of week . . . . . . . . . . . MON SUN, MON, TUE... 34 Backup Recovery and Media Services for OS/400 Figure 20. Change Backup Policy display Figure 20 shows a combination of two displays related to changing the backup policy. The numbers in reverse bold that follow correspond to those numbers in reverse bold in Figure 20: 1 The Default weekly activity parameter specifies how you are going to perform your backups during the week. The weekly activity is seven separate fields where you can enter which type of backup activity you want to occur each day. 
For example, if you want a full backup (similar to SAVLIB), specify “F” for that week. If you want an incremental backup (similar to SAVCHGOBJ), specify an “I” for that day. A blank indicates that you do not want to perform any backups for that particular day. 2 The Incremental type parameter specifies the type of incremental backup that you want to use. If you want to save all of the changes to the objects since the last time you performed a full backup, you have to specify the *CUML value for this parameter. This is similar to performing a SAVCHGOBJ command with default values. We recommend that you keep the default value of *CUML. If you want to save the changes to the objects since the last time you performed an incremental backup, you have to specify *INCR for this parameter. This is similar to performing the SAVCHGOBJ command with the reference date (REFDATE) and reference time (REFTIME) values. Change Backup Policy SYSTEM09 Type choices, press Enter. Media policy for full backups . . . . . *SYSPCY Name, F4 for list Media policy for incremental backups . . . . . . . . . *SYSPCY Name, F4 for list Backup devices . . . . . . . . . . . . . *SYSPCY Name, F4 for list Default weekly activity . . . . . . . 1 FFFFFFF SMTWTFS(F/I) Incremental type . . . . . . . . . . . 2 *CUML *CUML, *INCR Sign off interactive users . . . . . . . *SYSPCY *YES, *NO, *SYSPCY Sign off limit . . . . . . . . . . . . . *SYSPCY 0-999 minutes, *SYSPCY Save journal files when saving changed objects . . . . . . . . . . .3 *NO *YES, *NO Automatically backup media information . . . . . . . . . . *LIB *LIB, *OBJ, *NONE Save access paths . . . . . . . . . . 4 *YES *YES, *NO Save contents of save files . . . . . *YES *YES, *NO Data compression . . . . . . . . . . . *DEV *DEV, *YES, *NO Data compaction . . . . . . . . . . . *DEV *DEV, *NO Target release . . . . . . . . . . . . *CURRENT *CURRENT, *PRV Clear . . . . . . . . . . . . . . . . *NONE *NONE, *ALL, *AFTER Object pre-check . . . . . . . . . . . *NO *YES, *NO Append to media . . . . . . . . . . 5 *NO *YES, *NO End of tape option . . . . . . . . . . *REWIND *UNLOAD, *REWIND, *LEAV IPL after backup . . . . . . . . . . . *SYSPCY *YES, *NO, *SYSPCY How to end . . . . . . . . . . . . *SYSPCY *CNTRLD, *IMMED, *SYSPC Delay time, if *CNTRLD . . . . . . *SYSPCY Seconds, *NOLIMIT Restart after power down . . . . . *SYSPCY *YES, *NO, *SYSPCY IPL source . . . . . . . . . . . . *SYSPCY *PANEL, A, B, *SYSPCY Chapter 3. Implementing BRMS/400 35 3 The Save journal files when saving changed objects parameter specifies whether you want to save files that are being journaled (using the Start Journal Physical File (STRJRNPF) command) during your incremental saves. The default for this value is *NO, which means that you rely on your journal receivers to retrieve the changes during the recovery. We recommend that you change this default to *YES for ease of use and to reduce the number of steps that you have to complete during recovery. 4 The Save access paths parameter specifies whether you want to save access paths associated with your physical and logical files. We recommend that you save the access paths during your save operations. There are instances where you may find that the overall save operation will take considerably longer if you have access paths over large physical files. There is a tendency not to save these access paths, which can result in a tremendous loss of system availability if you were to recover the file or the system after a disaster. 
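For reference, the equivalent control on the native save commands is the access path (ACCPTH) parameter; a hypothetical example, where MYLIB and TAP01 are placeholder names:
SAVLIB LIB(MYLIB) DEV(TAP01) ACCPTH(*YES)
Saving access paths lengthens the save operation, but it spares you from rebuilding them at restore time.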
When you design your backup strategy, it is extremely important to understand how your saves affect your recovery. For example, when you perform full and incremental saves, you are prompted to restore your full saves first followed by incremental saves during disaster recovery. In this case, if you do not do anything, your access paths are rebuilt twice assuming that you did not save them in the first place (once during the restore of your library from full backup set and again during the restore of incremental saves). The recommendation here is to use the Edit Rebuild Access Path (EDTRBDAP) command and hold the rebuild of the access paths immediately after the restore of the full save has completed. You can then restore the incremental saves and use the EDTRBDAP command to change the sequence number. See Backup and Recovery - Basic, SC41-4304, when designing your save and restore strategy. 5 The Append to media parameter specifies whether to add data files on existing media with active files or to begin a new volume. If *YES is specified, files are written to the volume immediately following the last active file. This allows the user to maximize media usage. However, if you want to separate data on separate tapes, you should specify APPEND(*NO). See 3.13.1, “Appending to media rules” on page 44, for more information. 3.11.2 Libraries to omit from backups Whenever you specify *IBM, *ALLUSR, or *ASPnn in any backup control group, you can also list specific libraries that are omitted from the save operation. This is the simplest way to exclude any library that you do not want to save. Select option 2 from the BRMBKUPCY menu, and add or remove the libraries that you want to omit as shown in Figure 21 on page 36. Use this facility with care. As when working with a control group, it is easy to overlook the fact that you have specified omissions in the policy. Important 36 Backup Recovery and Media Services for OS/400 Figure 21. Adding and removing libraries In the example in Figure 21, all libraries beginning with TEMP are omitted from the *ALLUSR backups. Also, if you are using BRMS/400 to save data to save files, these files are placed in a library called Q1ABRMSFxx, where xx is the ASP number in which the library is placed. When a control group containing the *IBM special value is backed up to tape, this save file library is not included in the save. Typically, you use the Save Save File using BRM (SAVSAVFBRM) command to save the save files. They may also be quite large and can take much time and media to back up. Therefore, you may want to omit this library from the *IBM group using the method previously described. See 4.2.1, “Considerations for libraries that affect BRMS/400” on page 65, for information on why the QGPL, QUSRSYS, QUSRBRM, QMLD, and QUSRMLD libraries are not omitted from the backup policy. 3.12 Backup control groups The backup function is the cornerstone of the BRMS/400 product. It is the option that controls the save process, which ultimately determines how effectively a system can be restored. Careful planning is required in determining a backup strategy before using BRMS/400 (Figure 22). Work with Libraries to Omit from Backups Type options, press Enter. 1=Add 4=Remove Opt Type Library _ ________ __________ _ *ALLUSR TEMP* _ *IBM Q1ABRMSF* _ *ALLUSR QGPL _ *ALLUSR QUSRBRM _ *ALLUSR QUSRSYS _ *IBM QMLD _ *IBM QUSRMLD Chapter 3. Implementing BRMS/400 37 Figure 22. 
Backup control group A backup control group can be considered to be an interpretive CL program for performing backup. The advantage over a CL program is that it is easy to create, easy to change, easy to execute, and provides full error checking while maintaining the flexibility and function that a CL program offers, all without requiring CL programming skills. A save strategy for a system consists of multiple backup control groups. These backup control groups define what is backed up and when. A backup control group can include one or many of the items listed in Figure 22. For example, it can be used to back up a single library, a group of related libraries, a set of objects or folders defined by a Backup List, and certain predefined components of the system such as configuration or security data. It can also include special operations to tell the operator to load a new tape or execute an exit program. This program can send a message to operations or users, start a subsystem, or do anything you choose. As part of the backup control group, you also must define a backup activity. The backup activity identifies which days of the week the backup list performs a backup and whether the backup is a full (save entire object) or incremental (save changed object) save. You can use the Work with Control Groups (WRKCTLGBRM) command to access the backup control groups on your system (Figure 23 on page 38). Backup Control Groups (Multiple) Named Items: Library names Generic Library names Backup List Names Special Values: *SAVSYS *SAVCFG *ALLUSR *SAVSECDTA *IBM *ALLDLO *ASPnn *DLOnn *QHST *ALLPROD *SAVCAL *ALLTEST *LINK (beginning with V3R7) Special Operations: *EXIT *LOAD Backup List contains: - Objects - Folders - Spooled files - IFS directories 38 Backup Recovery and Media Services for OS/400 Figure 23. Work with Backup Control Groups 3.12.1 Default backup control groups BRMS/400 automatically creates *BKUGRP and *SYSGRP default control groups for you. The *SYSGRP control group controls backing up IBM data, where the *BKUGRP control group controls backing up user data. By running both of these backup control groups, you can save your entire system. Figure 24 and Figure 25 show the default backup items that are saved. Figure 24. Backup control group *SYSGRP for backing up IBM data Work with Backup Control Groups SYSTEM09 Position to . . . . . . Starting characters Type options, press Enter 1=Create 2=Edit entries 3=Copy 4=Delete 5=Display 6=Add to schedule 8=Change attributes 9=Subsystems to process ... Full Incr Weekly Control Media Media Activity Opt Group Policy Policy SMTWTFS Text *BKUGRP *BKUPCY *BKUPCY *BKUPCY Entry created by BRM configura *SYSGRP SAVSYS SAVSYS *BKUPCY Entry created by BRM configura In the examples shown here, both of the displays are from a V4R2 system. In V3R1, you may notice that the backup item of LINKLIST does not exist to save IFS directories. For a workaround, see 6.6, “Saving and restoring V3R1 IFS data with BRMS/400” on page 146. The LINKLIST backup item was added with V3R2 and V3R6. In V3R7, the LINKLIST item was changed to *LINK. See 6.4, “Saving IFS using BRMS/400” on page 137, for additional information on saving IFS directories with BRMS/400. Note Display Backup Control Group Entries SYSTEM09 Group . . . . . . . . . . : *SYSGRP Default activity . . . . : *BKUPCY Text . . . . . . . . . . 
: Entry created by BRM configuration Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 *SAVSYS *DFTACT 20 *IBM *DFTACT *NO *NO Chapter 3. Implementing BRMS/400 39 Figure 25. Default backup control group *BKUGRP for saving all user data For your first backup, you should use the default backup control groups to perform a full save. With the default control groups, you are not able to hold a job and release certain job queues or subsystems, or save your spooled files. You have to either change the default control groups or create your own to tailor how you want to manage your system during a BRMS/400 save. It is important to understand that BRMS/400 does not put the system in a restricted state when it performs an *ALLUSR save. It is equally important to understand which of the “Q” libraries are considered to be user libraries when you perform an *ALLUSR or *ALLPROD save operation. Table 2 on page 40 contains a list of libraries that are considered as part of an *ALLUSR or *ALLPROD save under BRMS/400. To avoid conflicts with library locks, we recommend that you end all of the subsystems prior to starting the *BKUGRP saves. If you have an Integrated PC Server (FSIOP), you should also vary this off before you start the save. See 4.2, “Setting up your own control groups” on page 64, for additional information on creating your own control groups. Display Backup Control Group Entries SYSTEM09 Group . . . . . . . . . . : *BKUGRP Default activity . . . . : *BKUPCY Text . . . . . . . . . . : Entry created by BRM configuration Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 *SAVSECDTA *DFTACT *NO 20 *SAVCFG *DFTACT *NO 30 *ALLUSR *DFTACT *NO *NO 40 *ALLDLO *DFTACT *NO *NO 50 LINKLIST *LNK *DFTACT *NO *NO 60 *EXIT *DFTACT A change was made to BRMS/400 implementation to allow for a native RSTLIB LIB(*ALLUSR) operation to work when the QGPL, QUSRBRM, and QUSRSYS libraries span across multiple volumes. The following BRMS/400 PTF is required for your appropriate BRMS/400 release: • V3R1 - SF37714 • V3R2 - SF37715 • V3R6 - SF37716 • V3R7 - SF37718 When you apply the PTF, BRMS/400 will save the QGPL and QUSRSYS libraries during the *ALLUSR or *ALLPROD save. It will no longer separate these libraries and save them ahead of other libraries. The QUSRBRM library will be saved at the end of your control group, unless it is being omitted. See the PTF cover letter for additional information. Hint 40 Backup Recovery and Media Services for OS/400 3.12.1.1 Libraries saved by *ALLUSR or *ALLPROD in BRMS/400 When you plan your overall backup strategy, it is important to know which of the “Q” libraries are saved when you use the *ALLUSR or *ALLPROD value in your backup control group. Table 2 summarizes the libraries that are saved with the *ALLUSR value, by OS/400 release. Table 2. List of Q libraries saved by *ALLUSR or *ALLPROD in BRMS/400 3.12.2 Job queue processing from control group After adding the libraries you want to omit, you can specify the job queues that you may want to hold during the control group processing. For example, you can use BRMSJOBQ to submit jobs from the control group using exits (*EXIT). BRMS/400 releases this job queue once all of the backup items specified in your control group have finished processing. From the Work with Backup Control Groups display, select F9 to go directly to the Job Queues to Process display (Figure 26). 
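If the BRMSJOBQ job queue does not yet exist on your system, you can create it and attach it to your default batch subsystem before the control group runs. A minimal sketch, assuming the queue is kept in QGPL, that QBATCH is your batch subsystem, and that sequence number 50 is free in that subsystem description:
CRTJOBQ JOBQ(QGPL/BRMSJOBQ) TEXT('Job queue for BRMS/400 exit processing')
ADDJOBQE SBSD(QBATCH) JOBQ(QGPL/BRMSJOBQ) MAXACT(1) SEQNBR(50)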
Enter the values for BRMS JOBQ as shown in Figure 26. You must ensure that the BRMS JOBQ is created on your system and that you have added a job queue entry to your default batch subsystem description. In most cases, this is the QBATCH subsystem. Library V3R1 V3R2 V3R6 V3R7 QDSNX Yes Yes Yes Yes QGPL Yes Yes Yes Yes QGPL38 Yes Yes Yes Yes QPFRDATA Yes Yes Yes Yes QRCL Yes Yes Yes Yes QS36F Yes Yes Yes Yes QUSER38 Yes Yes Yes Yes QUSRADSM No Yes No Yes QUSRBRM Yes Yes Yes Yes QUSRIJS No Yes No Yes QUSRINFSKR No Yes Yes Yes QUSRRDARS No Yes No Yes QUSRSYS Yes Yes Yes Yes QUSRV2R3M0 Yes Yes Yes n/a QUSRV3R0M5 n/a Yes Yes Yes QUSRV3R1M0 n/a Yes Yes Yes QUSRV3R2M0 n/a n/a n/a Yes Chapter 3. Implementing BRMS/400 41 Figure 26. Job Queues to Process 3.12.3 Subsystem processing from control groups You can also include a list of subsystems that you may want to shut down and restart (if required) after the backup control group has completed. In BRMS/400 V3R1, this requires some thought and sometimes using *EXIT coding because the subsystems that are stopped prior to backing up the contents of the control group are restarted again afterwards. Another area for special attention is when a control group specifies a weekly activity that, for example, excludes Mondays, and that control group is run on a Monday. Note: The subsystems are still brought down even though there is no subsequent save. Beginning with V3R6 and V3R2, BRMS/400 provides enhanced subsystem and job queue processing that addresses these challenges. It is now possible to end a subsystem in one control group, but not to restart it until a subsequent control group has been processed. This also applies to job queues to be held and released. From the Work with Backup Control Groups display, go to your backup control group and select option 9 to create a list of subsystems that you want the control group to process. Figure 27 on page 42 shows how you can end the subsystems at the start of one control group (EDELM09) and restart them when you have completed processing another control group (SAVIFS). Job Queues to Process SYSTEM09 Use . . . . . . . . . : *BKU Control group . . . . : WKLIBM09 Type choices, press Enter. Seq Job queue Library Hold Release 10 BRMSJOBQ QGPL *YES *YES 42 Backup Recovery and Media Services for OS/400 Figure 27. Ending subsystems in the EDELM09 control group Subsystems QINTER and QCMN are ended by backup control group EDELM09 1. They will remain ended after the control group has finished processing (Figure 28). Figure 28. Restarting ended subsystems in the SAVIFS control group The backup control group SAVIFS will restart subsystems QINTER and QCMN after it has finished processing 2. The backup control group SAVIFS also has an additional subsystem to end (QSERVER) and restart after it has finished processing 3. You now need to ensure that your backup control group attributes are set correctly, as per your backup and media policies. From the Work with Backup Control Groups display, select option 8 for your backup control group. This brings up the Change Backup Control Group Attributes display shown in Figure 29. Subsystems to Process SYSTEM09 Use . . . . . . . . . : *BKU Control group . . . . : EDELM09 Type choices, press Enter. End Seq Subsystem Library Option Delay Restart 10 QINTER *LIBL *IMMED *NO 20 QCMN *LIBL *CNTRLD 300 *NO 1 Subsystems to Process SYSTEM09 Use . . . . . . . . . : *BKU Control group . . . . : SAVFIFS Type choices, press Enter. 
End Seq Subsystem Library Option Delay Restart 10 QINTER *LIBL *NONE *YES 2 20 QCMN *LIBL *NONE *YES 30 QSERVER *LIBL *CNTRLD 3 300 *YES Chapter 3. Implementing BRMS/400 43 Figure 29. Change Backup Control Group Attributes You should change the Media policy for full backups, Media policy for incremental backups, and the Backup devices parameters to the appropriate values. In our example, we used WEEKLY09 for the media policy and *MEDCLS for the backup device values. Additional options for the backup control group attributes are discussed in 4.2.6, “Backup control group attributes” on page 69. Review these options and set them appropriately to reflect your installation requirements. 3.13 Enrolling and initializing media You can enroll media to the media inventory or initialize it for processing by using one of the following approaches: • Work with Media (WRKMEDBRM) command and select option 1 (Add) • Add Media to BRM (ADDMEDBRM) command • Add Media Library Media to BRM (ADDMLMBRM) command to add volumes to a media library (MLB) such as the 3494 Automated Tape Library Data Server Media can be enrolled into the BRMS/400 media inventory at any time. The only requirement is that the media must be known to BRMS/400 prior to any save or restore operation. To add media to BRMS/400, use the ADDMEDBRM command as shown in Figure 30 on page 44. Change Backup Control Group Attributes Group . . . . . . . . . . . . . . . . : WEEKLY09 Type information, press Enter. Media policy for full backups . . . . . WEEKLY09 Name, F4 for list Media policy for incremental backups . . . . . . . . WEEKLY09 Name, F4 for list Backup devices . . . . . . . . . . . . . *MEDCLS Name, F4 for list Sign off interactive users . . . . . . . *BKUPCY *YES, *NO, *BKUPCY Sign off limit . . . . . . . . . . . . . *BKUPCY 0-999 minutes, *BKUPCY Default weekly activity . . . . . . . . *BKUPCY SMTWTFS(F/I), *BKUPCY Incremental type . . . . . . . . . . . . *BKUPCY *CUML, *INCR, *BKUPCY Automatically backup media information . *BKUPCY *LIB, *OBJ, *NONE, *BKU 44 Backup Recovery and Media Services for OS/400 Figure 30. Adding media using the ADDMEDBRM command You can also use the ADDMLMBRM command as described in Figure 136 on page 188. You need to decide whether you want to initialize the media during enrollment. This is done using the Initialize tape parameter on both commands. 3.13.1 Appending to media rules If you are planning to use APPEND(*YES) as part of your backup control groups, or as part of your backup policy, you must ensure that the volumes are still available on-site. The rules that BRMS/400 uses when selecting a media for append are as follows: • Selection is done for all devices (media libraries and stand alone devices). For media libraries, selection is done automatically. For stand-alone drives, the BRM1472 message is issued nominating a “suitable” candidate volume or volumes. • BRMS/400 selects an active volume that matches the requesting media policies, and the volume must pass the following checks: – Same expiration date – Owned by the requesting system – Same move policy – Same secure attributes • If BRMS/400 is unable to identify a suitable volume in the previous point, it tries to find a volume with an earlier expiration date, starting with the earliest. All other tests must match. • If BRMS/400 is unable to identify a suitable volume in the previous point, it selects an expired volume from the same system. 
• If no expired volumes are available in the previous point, BRMS/400 selects an expired volume from another system that can be contacted through DDM if you have a media library. Add Media to BRM (ADDMEDBRM) Type choices, press Enter. Volume identifier . . . . . . . > A10001 Character value Media class . . . . . . . . . . > QIC120 NETCHK, QIC1000, QIC120... Number to add . . . . . . . . . 6 1-999 Initialize tape . . . . . . . . *NO *NO, *YES Text . . . . . . . . . . . . . . > 'Setup media for QIC120' Expiration date . . . . . . . . *NONE Date, *PERM, *NONE System . . . . . . . . . . . . . *LCL Creation date . . . . . . . . . *CURRENT Date, *CURRENT Additional Parameters Location . . . . . . . . . . . . SCRATCH *HOME, COMPROOM, LOCATION3... Slot number . . . . . . . . . . *none 1-999999, *NEXT, *NONE Last moved date . . . . . . . *NONE Date, *NONE Container ID . . . . . . . . . *NONE *NONE, TEST01 Chapter 3. Implementing BRMS/400 45 3.13.2 Media security BRMS/400 enrolled media cannot be initialized by using the native OS/400 Initialize Tape (INZTAP) command with option *NO. If you use this command, the exit program detects that you have BRMS/400 installed. It then checks to see whether the user has *SECOFR, *SAVSYS, *SERVICE, or *ALLOBJ special authority and allows the media to be initialized. If the user does not have proper authority, BRMS/400 issues the BRM1726 message indicating that the user does not have appropriate authority to initialize the media. The user is asked to use the INZMEDBRM command instead. INZMEDBRM is a BRMS/400 command, and when it is used with CHECK(*NO), it checks the BRMS/400 database to see if the media that you are trying to initialize is expired. If the media contains active files, the command fails with an error. Therefore, BRMS/400 prevents accidental initialization of active media. 3.13.3 Extracting media information from non-BRMS saves You can enroll tapes that were not created through BRMS/400 by using one of two ways. You can use the Add Media Information to BRM (ADDMEDIBRM) command, or you can use the Extract Media Information (EXTMEDIBRM) command. Both commands support file-level information only. You cannot transfer object detail information from a non-BRMS/400 created volumes using any BRMS/400 commands. This restriction is due to OS/400 not being able to support the DSPTAP command with DATA(*SAVRST) to an output file. If you require BRMS/400 to hold object detail information, you have to first restore the library and then save the library again using BRMS/400 with object details. 3.13.3.1 The ADDMEDIBRM command The ADDMEDIBRM command allows you to add library-level information to the BRMS/400 media inventory. The information gathered by this command is stored in the QA1AHS history file. This command also allows you to enter the original save date and save time, along with the number of objects that were saved in a particular library. This command requires that you to have a printout from the DSPTAP command using DATA(*SAVRST) for input. With the ADDMEDIBRM command, you have to manually enter data for each sequence number that appears on the tape or on the printout as shown in Figure 31 on page 46. Beginning with V3R7, you can perform a DSPTAP operation to an output file as long as you use *LABEL information only. The output file option is not valid for the *SAVRST option. Note 46 Backup Recovery and Media Services for OS/400 Figure 31. 
Add Media Information to BRM display You need to perform the following steps to record media content information using the ADDMEDIBRM command: 1. Use the DSPTAP command with DATA(*SAVRST) to produce a printout of your tape volume for reference. 2. Add your media to BRMS/400 using the ADDMEDBRM command. 3. Run the ADDMEDIBRM command. Specify the name of the tape drive where the volume is, the saved library name, the file origin, date and time of the save, and the number of objects saved. This is where you have to check your DSPTAP report listing to see how your libraries were saved, the sequence number, the number of objects that were saved, and the date and time they were saved. You have to use this command for every library or sequence number that is on the saved tape. 4. Check the media contents information after the ADDMEDIBRM command has completed using WRKMEDBRM command or WRKMEDIBRM command. 5. Move the media to the appropriate storage location. You cannot use the ADDMEDIBRM command to add media contents information for a volume that contains active files or that is not expired. The BRMS/400 recovery reports will include information so that you can use the media for recovery purposes. Add Media Information to BRM (ADDMEDIBRM) Type choices, press Enter. Volume . . . . . . . . . . . . . > A00001 Character value + for more values ______ Volume sequence . . . . . . . . > 1 1-9999 Sequence number . . . . . . . . > 1 1-9999 File label . . . . . . . . . . . *TYPE Type . . . . . . . . . . . . . . *LIB *LIB, *ALLDLO, *SAVCAL.. Library . . . . . . . . . . . . > APILIB Name File origin . . . . . . . . . . > *SAVLIB *FILE, *SAVLIB, *SAVOBJ Entry date.. . . . . . . . . . . > '02/01/01' Date, *CURRENT Entry time . . . . . . . . . . . > '10:35:00' Time, *CURRENT Expiration date . . . . . . . . *PERM Date, *PERM, *VERnnn Device . . . . . . . . . . . . . > TAP02 Name, *NONE + for more values Additional Parameters Objects saved . . . . . . . . . 1 1-999999 Objects not saved . . . . . . . 0 0-999999 Auxiliary storage pool ID . . . 1 1-16 Chapter 3. Implementing BRMS/400 47 Besides using the ADDMEDIBRM command to register non-BRMS tapes, you can also use this command to register library-level information if you have an *ALLUSR, *ALLPROD, or *ALLTEST save that aborted during the save. When BRMS/400 performs a save operation, it creates a temporary file in the QTEMP library called QA1ASLIB, which contains important post-processing information about your save, such as the save type that should be created in the media content information file. For example, a full save will create a save type of *FULL, or an incremental save will create a save type of *CUML or *INCR. This file also holds the number of objects that are saved or not saved. If your BRMS/400 save operation aborts due to a tape failure, a user error, or a system error, the QA1ASLIB file in library QTEMP will be deleted when your job ends abnormally. Therefore, the crucial post-processing of the QA1ASLIB file that updates QA1AHS file (media history records) cannot happen. BRMS/400 has no knowledge of what was saved on the tapes up to the point of failure. Without this information, and a value greater than zero in the number of objects saved field when you display media information (using the WRKMEDIBRM command and option 5), BRMS/400 cannot perform a recovery of the saved contents, and the media volumes will not appear on your recovery reports. 
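Whichever way you decide to proceed, it is useful to produce a listing of each affected volume first, for example:
DSPTAP DEV(TAP01) DATA(*SAVRST) OUTPUT(*PRINT)
Here TAP01 stands for whichever tape device holds the volume. The listing shows exactly which libraries and file sequence numbers reached the tape before the failure.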
The following options are available to circumvent this situation: • Restart your control group processing again. This may not be suitable if your save terminated after several hours and you need to make the system available to your users. • Rebuild the media information from the tape using the DSPTAP command and the ADDMEDIBRM command. This can be very time consuming. Depending on when your save job terminated, you may find that the safest and the recommended approach is to restart backup control group. If you do not have the time to restart the backup control group, and you have to release the system to the users, you can perform the following steps to create media information, after you have completed saving the remaining data from the point of failure. These steps may vary depending on how your backup This command adds records to the BRMS/400 media content information file based on the information you supply, such as the file sequence, volume, and so on. It is critical that you enter the correct information and understand the command completely before you use it. You may want to add one sequence number first and use the WRKMEDIBRM command or the WRKMEDBRM command to check the media information before you proceed with the remaining sequence numbers. Attention Although the media information is not recorded within BRMS/400 when your job terminates, the data on your saved media can still be accessed for recovery purposes using OS/400 native restore commands. You must understand the sequence in which BRMS/400 had saved your libraries to recover from the tapes. You must plan this thoroughly. Note 48 Backup Recovery and Media Services for OS/400 control groups are set up and when the save job terminated abnormally. You must thoroughly understand the entire process of verifying your media using the DSPTAP command with the WRKMEDIBRM command before you begin. a. Display the contents of all your save tapes with DATA(*SAVRST) OUTPUT(*PRINT) options. Use this report to compare the information displayed with the command: WRKMEDIBRM CTLGRP(control group name) Depending on how BRMS/400 “built” the list of libraries to be saved, it is possible that not all libraries on the tapes need to be processed by the ADDMEDIBRM command. b. Remove the history records from the WRKMEDIBRM command that show the status of *FILE, with a value of zero for the number of objects saved. c. From the WRKMEDBRM display, you need to expire the media volumes. The ADDMEDIBRM command needs expired volumes. Your data on the media volumes will not be deleted and can still be accessed using the native OS/400 restore commands. d. Use the ADDMEDIBRM command to add each sequence number from the DSPTAP report, providing information for the volume name, volume sequence number, save sequence number, file label, the type of save command that was used to perform the save, the date and time of the save, and the number of objects saved. Note: This process is time consuming, so please be patient! e. Verify the media information using the WRKMEDIBRM command. f. You should check if a move policy is attached for the media you have enrolled. If not, use the following command for your media volumes: CHGMEDBRM MOVPCY(move policy name) The MOVMEDBRM command will then initiate your move processing. g. Verify your media moves. 3.13.3.2 The EXTMEDIBRM command The EXTMEDIBRM command should allow you to extract media information from a non-BRMS/400 created tape. It gathers information at the library level. 
The EXTMEDIBRM command scans through a tape and builds content information for the BRMS/400 history file, without having to key in each sequence number as with the ADDMEDIBRM command. At the time this redbook was written, the EXTMEDIBRM command registered the media content information as *FILE, instead of using *FULL, *INCR, *CUML, and so on. You cannot recover data that has a save type of *FILE, with no saved objects in it. BRMS/400 recovery will be enhanced so that it will allow you to recover *FILE save types at a future date. Until then, you must not use the EXTMEDIBRM command. Important Chapter 3. Implementing BRMS/400 49 3.14 Backing up using BRMS/400 control groups You can perform a full system backup with BRMS/400 using the supplied default backup control groups *SYSGRP and *BKUGRP or by using similar user-defined control groups. The *SYSGRP control group contains the *SAVSYS and *IBM special values that save OS/400 and IBM Licensed Program Products (mostly, the Q-libraries). It also includes *SAVSECDTA and *SAVCFG data. The contents of *SAVSYS and *IBM change infrequently, usually only when: • Applying PTFs • Adding a new program product • Performing a release upgrade The *SAVSECDTA command and *SAVCFG values can be run separately and do not require restricted state processing. They should be scheduled frequently. Restricted state saves, such as the *SAVSYS save, must be run from the system console. Beginning with V3R2 and V3R6, the console monitor function allows saves to be run in a secure unattended mode. See 4.5, “BRMS/400 console monitor” on page 87, for information on how you can use console monitoring to schedule unattended saves. Prior to this function, you were unable to schedule unattended saves without security exposures. For example, if the console is left unattended, there is nothing to stop someone from issuing the ENDRQS command (ALT and SYSREQ keys) and obtaining access to a command line. Control group *SYSGRP should use a media class with the Shared media parameter set to *NO. The reason for this is because the network media inventory cannot be updated when a system is in a restricted state (communication links that are at a varied on status to manage media integrity). Selecting SHARE(*NO) prevents accidentally overwriting of active tape volumes. Control group *BKUGRP contains the special values *SAVSECDTA, *SAVCFG, *ALLUSR, *ALLDLO, and link list (*LNK, *LINK, or LINKLIST depending on the BRMS/400 release). This control group saves the non-system portion of your AS/400 system, such as user libraries, documents, and folders, and IFS directories. This control group can use media belonging to a media class with SHARE(*YES) and typically uses your fastest drive. It can be scheduled to run unattended providing there are enough expired media volumes of the correct class. You can run the STRBKUBRM CTLGRP(*BKUGRP) command interactively, or in batch, or use a job scheduler. You can also use the Console Monitor function to perform unattended saves. 
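One way to schedule the *BKUGRP save is through the OS/400 job scheduler; other schedulers work equally well. A minimal sketch, in which the job name BRMDAILY is hypothetical and the SBMJOB keyword (the Submit to batch prompt shown later in Figure 32) should be verified with F4 prompting on your release:
ADDJOBSCDE JOB(BRMDAILY) CMD(STRBKUBRM CTLGRP(*BKUGRP) SBMJOB(*NO)) FRQ(*WEEKLY) SCDDAY(*MON *TUE *WED *THU *FRI) SCDTIME(230000)
Because the scheduled entry already runs in batch, SBMJOB(*NO) keeps the save in that job rather than submitting a second one. Remember that the *SAVSYS processing in the *SYSGRP group still requires the console, or the console monitor, and a restricted state.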
To invoke a backup using BRMS/400, you can issue any of the save commands such as:
• Save DLO using BRM (SAVDLOBRM)
• Save Folder List using BRM (SAVFLRLBRM)
• Save Library using BRM (SAVLIBBRM)
• Save Object using BRM (SAVOBJBRM)
• Save Object List using BRM (SAVOBJLBRM)
• Save Save Files using BRM (SAVSAVFBRM)
• Save System using BRM (SAVSYSBRM)
• Start Backup using BRM (STRBKUBRM)
Your media inventory is now managed through BRMS/400, set by the Media monitor parameter in the system policy. Although you can still use the native save commands, such as SAVDLO, SAVLIB, SAVOBJ, and so on, we recommend that you perform all of your save operations using BRMS/400 commands unless there are exceptions. For example, you can use the native save commands to save objects for distribution using SNADS. You can also use the ObjectConnect commands to perform concurrent save and restore operations on your target system. The ObjectConnect method can be faster and requires less setup time. See Upgrading to Advanced Series PowerPC AS, SG24-4600, or Backup and Recovery - Basic, SC41-4304, for V3R7, for more information on ObjectConnect. Another important factor in saving your system using BRMS/400 is the availability of media in the right class. You must ensure that you have enough save media (also sometimes known as scratch volumes) before you begin the save operation. Beginning with V3R2 and V3R6, you can use the Check Expired Media for BRM (CHKEXPBRM) command to check that you have sufficient media for your backups based on the media class or media location. You can run the STRBKUBRM command for a particular backup control group. In our example, we used the backup control group of SETUPTEST that contains some user libraries for test purposes (Figure 32). We recommend that you perform a total system save using BRMS/400.
Start Backup using BRM (STRBKUBRM) Type choices, press Enter. Control group . . . . . . . . . > SETUPTEST *BKUGRP, *SYSGRP, DEREKTEST Schedule time . . . . . . . . . *IMMED hhmm, *IMMED Submit to batch . . . . . . . . *YES *CONSOLE, *YES, *NO Starting sequence: Number . . . . . . . . . . . . *FIRST 1-9999, *FIRST Library . . . . . . . . . . . *FIRST Name, *FIRST Append to media . . . . . . . . *CTLGRPATR *CTLGRPATR, *BKUPCY, *YES... Job description . . . . . . . . *USRPRF Name, *USRPRF Library . . . . . . . . . . . Name, *LIBL, *CURLIB Job queue . . . . . . . . . . . *JOBD Name, *JOBD Library . . . . . . . . . . . Name, *LIBL, *CURLIB
Figure 32. Backing up the SETUPTEST control group
3.15 Reviewing BRMS/400 log and media status
With the Display Log using BRM (DSPLOGBRM) command, you can see BRMS/400 activity and the details of your save. You can find additional information about saved objects with option 9 on the Work with Media Information (WRKMEDIBRM) display, or by selecting option 13 on the Work with Media (WRKMEDBRM) display. A sample output is shown in Figure 33 for your reference.
Figure 33. BRMS/400 log information
3.16 BRMS/400 reports and maintenance
Normally, you can display which objects are saved and where they are saved through the BRMS/400 displays. You can also use the BRMS/400 displays to assist in the restore. On a single system, if the QUSRBRM library is lost as in a complete system failure, you cannot do this. For this reason, you should always have a printed Recovery Analysis report available. If you have systems in a network with OS/400 10 6/08/00 15:52:18 Position to . . .
. 6/08/00 ------------------------------------------------------------------------------ Volume D09002 expired. Begin processing for control group SETUPTEST type *BKU. Selecting devices with density *QIC120 for control group SETUPTEST type *BKU. Devices TAP01 will be used for control group SETUPTEST type *BKU. Interactive users are allowed to stay active. Starting SAVSECDTA to device TAP01. All security objects saved. Save security data (SAVSECDTA) complete. Starting save of library A960103D to devices TAP01. 8 objects saved from library A960103D. Control group SETUPTEST bypassed automatic save of media information. SETUPTEST *BKU 0030 *EXIT SNDMSG MSG('Backup SETUPTEST ENDED') TOUSR(*SYSOPR) Control group SETUPTEST type *BKU processing is complete. Last run date for BRM maintenance was 06/05/00. A PTF is provided to include all CPF37xx messages in the BRMS/400 log. These messages provide information on objects that are not saved. Without these PTFs, you either have to retain object-level information on the backup control group, or review the system job log. The PTFs, which were correct at the time this redbook was published, include: V3R1 SF33794 V3R2 SF33795 V3R6 SF33797 V3R7 SF33798 Note: Providing this information may affect system performance. If you want to disable this function, you may do so by typing a '1' in position 213 of the Q1APRM data area in the QUSRBRM library. You can re-enable this function by changing position 213 of the data area back to ' ' (blank) as shown in the following example: Backup Seq Items Exit command 10 *EXIT CHGDTAARA DTAARA(QUSRBRM/Q1APRM (213 1)) VALUE('1') 20 QUSRSYS 30 *EXIT CHGDTAARA DTAARA(QUSRBRM/Q1APRM (213 1)) VALUE(' ') Hint 52 Backup Recovery and Media Services for OS/400 V3R6 or later, you can use the Receive Media Information function to maintain media content information at a central site. You can print the recovery report from this central site. The Recovery Analysis report is printed by default with the Start Maintenance for BRM (STRMNTBRM) command. The recovery analysis report can also be generated by the Start Recovery using BRM (STRRCYBRM) command. It is good practice to run these reports at the end of the daily save and to include the most up-to-date recovery analysis report with the media when you move your system backup off-site. See 10.1.1, “Synchronizing maintenance, movement, and recovery reports” on page 193, for additional information. Maintenance should be run regularly for BRMS/400 using the STRMNTBRM command. One of the ways you can ensure that the maintenance task is run is to add an exit routine in the control group. Apart from its housekeeping tasks, the maintenance job also produces reports for recovery analysis, backup activity, and expired media. These reports can also be separately produced, if required. See 4.1.4, “Performing daily checks” on page 58, for additional information. It is also possible to run media movement using the Run media movement parameter during the maintenance. However, for several reasons, particularly in a networking situation, you should avoid setting this parameter to *YES. The media movement is done separately using the Move Media (MOVMEDBRM) command. In a complex BRMS/400 environment with many daily changes, performing the STRMNTBRM command with MOVMED(*YES) can also take some time to complete (Figure 34). See 4.1.5, “Moving media” on page 60, for more information on media movement. Figure 34. 
Start Maintenance for BRM example Unless circumstances dictate otherwise, you should use the RMVHST(*REUSE) option to preserve the media content information until the media is reused. You may want to use Change Command Default (CHGCMDDFT) command to permanently make this change. If you decide to manually verify media movement by setting the Verify moves parameter to *YES in your move policies, you should use the Verify Media Moves (VFYMOVBRM) command (Figure 35). Start Maintenance for BRM (STRMNTBRM) Type choices, press Enter. Expire media . . . . . . . . . . *YES *YES, *NO Remove media information: Media contents . . . . . . . . *EXP *EXP, *REUSE, *NONE Object level detail . . . . . *MEDCON 1-9999, *MEDCON Run media movement . . . . . . . *yes *NO, *YES Remove log entries: Type . . . . . . . . . . . . . *ALL *ALL, *NONE, *ARC, *BKU... From date . . . . . . . . . . *BEGIN Date, *CURRENT, *BEGIN, nnnnn To date . . . . . . . . . . . 90 Date, *CURRENT, *END, nnnnn Change BRM journal receivers . . *YES *YES, *NO Print expired media report . . . *YES *YES, *NO Print backup activity report . . *YES *YES, *NO Print recovery reports . . . . . *ALL *ALL, *NONE, *RCYANL... Chapter 3. Implementing BRMS/400 53 Note: A tape volume only appears on the Verify Media Moves display after the MOVMEDBRM command is run. Figure 35. Verify Media Moves An important BRMS/400 report called “Recovering your Entire System” can be found in the spooled file, QP1ARCY, if you chose to print recovery reports. You should always produce two copies. The first copy should be kept on-site, for assistance with your recovery from media that is stored on-site. The second copy should be sent off-site, along with your media to protect against disasters. See 10.2, “Recovering an entire system (starting with lIcensed Internal Code)” on page 195, for more information on recovery. 3.17 Current status of media and save activity Once you save the various libraries, you can use the Work with Media Information (WRKMEDIBRM) command to review your save activity as shown in Figure 36 on page 54. This display can also be used as a starting point for restoring objects or working with media on which the objects are saved. Verify Media Moves SYSTEM09 Type options, press Enter. Press F16 to verify all. 1=Verify 4=Cancel move 9=Verify and work with media Volume Creation Expiration Move Opt Serial Date Date Location Date Container D09R01 6/07/00 6/07/00 COMPROOM 6/08/00 *NONE D09R45 5/29/00 *VER 002 VAULT 6/08/00 *NONE 1 D09002 6/08/00 *VER 002 COMPROOM *VER 002 *NONE D09003 5/29/00 *VER 002 VAULT 6/08/00 *NONE D09004 5/29/00 *VER 002 VAULT 6/08/00 *NONE 54 Backup Recovery and Media Services for OS/400 Figure 36. Work with Media Information example It is worth noting that the WRKMEDIBRM display shows the most recent entries by save date and time on the display. That is, it positions itself at the bottom of the list. You must page back to see earlier backup activity. You can also produce a report by specifying OUTPUT(*PRINT) for the WRKMEDIBRM command. You can also use the WRKMEDBRM command to display or print the current status of your media inventory as shown in Figure 37. You can selectively display or print volumes that are active, expired, or both. You can use this display to change the media class of your tapes or display the contents of your tapes. This display can also be used to list the tapes that have expired and are available for re-use. Work with Media Information SYSTEM09 Position to Date . . . . . Type options, press Enter. 
2=Change 4=Remove 5=Display 6=Work with media 7=Restore 9=Work with saved objects Saved Save Volume File Expiration Opt Item Date Time Type Serial Seq Date FLR 5/29/00 18:48:08 *FULL D09003 5 *VER 002 LINKLIST 5/29/00 18:49:03 *FULL D09003 + 6 *VER 002 QDOC 6/03/00 9:16:38 *FULL *SAVF 6/04/00 QDOC 6/03/00 9:16:50 *FULL *SAVF 6/04/00 MCBRYDC 6/06/00 15:12:17 *FULL *SAVF 7/11/00 QUSRBRM 6/06/00 15:12:36 *QBRM *SAVF 7/11/00 MCBRYDC 6/07/00 16:38:12 *FULL D09R01 1 6/07/00 QUSRBRM 6/07/00 16:38:48 *QBRM D09R01 2 6/07/00 *SAVSECDTA 6/08/00 15:49:31 *FULL D09002 1 *VER 002 A960103D 6/08/00 15:51:24 *FULL D09002 2 *VER 002
If your backup control group processing ends abnormally, you may find that some of the entries for the Type value in the Work with Media Information display are set to *FILE. When you display these entries, they have a value of zero for the number of objects saved. At present, BRMS/400 does not allow *FILE entries to be recovered, and your media volumes will not appear on the Recovering Your Entire System report. We recommend that you restart the control group save. See 3.13.3.1, “The ADDMEDIBRM command” on page 45, for more information on recovering from control groups that have terminated abnormally. Note
Work with Media SYSTEM09 Position to . . . . . . Starting characters Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 6=Work with media set 7=Expire 8=Move 10=Reinitialize ... Volume Creation Expiration Move Media Dup Opt Serial Expired Date Date Location Date Class Sts D09003 + 5/29/00 *VER 002 VAULT 6/09/00 QIC120 * D09004 + 5/29/00 *VER 002 VAULT 6/09/00 QIC120 * D09098 *YES 5/31/00 5/31/00 COMPROOM 6/07/00 QIC120 * D09099 5/29/00 *VER 002 VAULT 6/05/00 QIC120 * D99999 *YES 5/31/00 *NONE *HOME *NONE QIC120 SYSM05 *YES 6/04/00 *NONE *HOME *NONE NETCHK SYSM09 *YES 6/04/00 *NONE *HOME *NONE NETCHK
Figure 37. Work with Media example
You should also use the Display Backup Plan (DSPBKUBRM) command to display a summary of all of the backup control groups that you set up and the backup items that you specified for each of your backup control groups as shown in Figure 38.
Review Backup Plan SYSTEM09 Weekly Incremental Full Retain Control Save List Activity Media Media Object Group Item Type SMTWTFS Policy Policy Detail *BKUGRP *SAVSECDTA FFFFFFF FULL FULL *NO *SAVCFG FFFFFFF FULL FULL *NO *ALLUSR FFFFFFF FULL FULL *NO *ALLDLO FFFFFFF FULL FULL *NO *EXIT FFFFFFF FULL FULL *SYSGRP *SAVSYS FFFFFFF SAVSYS SAVSYS *IBM FFFFFFF SAVSYS SAVSYS *NO RHAHN RHAHN *OBJ FFFFFFF DAILY DAILY *NO RHAHN2 *OBJ FFFFFFF DAILY DAILY *NO BRMTEST *OBJ FFFFFFF DAILY DAILY *NO A960103A A960103B FFFFFFF REEL REEL *YES A960103C FFFFFFF REEL REEL *YES A960103D FFFFFFF REEL REEL *YES More...
Figure 38. Display Backup Plan example
3.18 Restoring data using BRMS/400
Finally, you should test whether you can restore information that you have saved using BRMS/400. We recommend that you test a full restore. See 10.2, “Recovering an entire system (starting with Licensed Internal Code)” on page 195, for additional information.
Chapter 4. Managing BRMS/400
This chapter contains information to help you carry out the daily activities of BRMS/400. It begins with the BRMS/400 setup functions that most influence your day-to-day operations.
Then, it outlines some of the basic tasks that operations carries out on a daily basis and finishes by looking at some aspects of job scheduling, using the save-while-active function with BRMS/400, and saving spooled files with BRMS/400. We recommend that you review the functional enhancements between various releases of BRMS/400. These are covered in Appendix A, “Summary of changes” on page 289. 4.1 BRMS/400 operational tasks The following tasks are some of the recommended daily tasks that you should perform when using BRMS/400 to ensure consistent operation. 4.1.1 Checking for media availability Use the Media Report (showing only expired volumes) as a selection list to choose tapes to be used for the saves. This report can be created as part of the maintenance procedure, or it can be created on any machine in the BRMS/400 network. For example, the following command produces a list of expired 3490 cartridges in creation date sequence: WRKMEDBRM TYPE(*EXP) MEDCLS(CART3490E) SORT(*CRT) OUTPUT(*PRINT) This report should be produced after the daily movement procedures are completed. With libraries, such as the 3494 Automated Tape Library Data Server, you depend on scratch media being inside the library when BRMS/400 requests it. If moves are being performed correctly, this should always be the case. However, sometimes cartridges are manually ejected, and the BRMS/400 records are not updated to reflect this move. When this happens, the Library Manager and BRMS/400 become out of synchronization. It is worth checking before each backup to see that the two databases agree. You can do this by running the WRKMEDBRM and WRKMLMBRM commands and comparing the outputs. However, this can be quite a task if you have a large library. See Appendix E, “Media missing from the 3494” on page 309, for a sample program and query that you can use to more easily highlight the differences. 4.1.2 Performing BRMS/400 backups You should perform BRMS/400 backups on each of your AS/400 systems. If you have a BRMS/400 network, you must perform backups on all of the systems in the network. For attended saves, you can use the Start Backup using BRM (STRBKUBRM) command and select the backup control groups that you want saved. For saves that are submitted to a job scheduler, check to ensure that the save job has been submitted and that it is in an active state. 58 Backup Recovery and Media Services for OS/400 If the Save-while-active parameter is being used, a message is sent when the library or libraries have reached a synchronization checkpoint. The Monitor Save While Active (MONSWABRM) command is used to take action when the synchronization point is reached. You should run this command through an *EXIT in the backup or archive control group. If you have a group of libraries with the *SYNCLIB parameter, you should code the first library as the LIB parameter on the MONSWABRM command. See 4.3, “Save-while-active and BRMS/400” on page 72, for more information. 4.1.3 Saving save files If you have processed any backups to save files, you must run the Save Save Files using BRM (SAVSAVFBRM) command with the appropriate control group. Be aware that when you use the SAVSAVFBRM command, BRMS/400 recovery data is not saved automatically, as with control groups. This is similar to performing saves using the SAVLIBBRM command and the SAVOBJBRM command. QA1AMM is updated during a SAVSAVFBRM to reflect the new media. 
Therefore, if you create a new recovery report at this point, it reflects the true location of the information because it is taken from the online QA1AMM in QUSRBRM. However, since SAVSAVFBRM did not save the recovery information, if you perform a recovery using BRMS/400, BRMS/400 prompts you for save files or for different tapes than those in your recovery report. That is because BRMS/400 is using the older version to recover. Also, if you did not create a new recovery report after you ran the SAVSAVFBRM command, your recovery report indicates that certain objects were in save files (now gone), and you have to find the objects on the tape. You must always run the SAVMEDIBRM command after you perform the SAVSAVFBRM command and produce a new recovery report. See 10.1, “Overview of BRMS/400 recovery” on page 191, for additional information.
4.1.4 Performing daily checks
The following tasks should be included in the daily operations procedures:
• Log: The BRMS/400 log shows all BRMS/400 activity and is the central logging point for all BRMS/400 related messages. Use the DSPLOGBRM command to display a copy of the BRMS/400 job log. Check daily on each system that:
– All save activity completed successfully on each scheduled control group.
– There are no unusual errors or messages.
– Maintenance has completed successfully.
Note: It is vital that any unusual entries observed, especially unsaved BRMS/400 recovery objects, are investigated.
• Maintenance: BRMS/400 maintenance performs all BRMS/400 housekeeping activities. It should be run on each system in the BRMS/400 network after the individual save processes complete. There should be a manual check every morning for the message BRMS/400 maintenance procedure completed in the BRMS/400 job log. This message indicates that the BRMS/400 maintenance run (STRMNTBRM) completed successfully. The BRMS/400 maintenance job performs various cleanup tasks and produces important reports based on your media information. The tasks that are performed by this single command are:
– Journal receivers are cleaned up. BRMS/400 journals are changed, and new ones are attached. The old journal receivers are deleted based on the information in the Q1APRM data area. The default is to keep the information for five days. It is important to know that BRMS/400 implements journaling and commitment control to ensure data integrity and that the files are always at a transaction boundary.
– The EXPMEDBRM command is processed to expire any media.
– History records are removed for expired media. BRMS/400 re-uses deleted records in the physical files so you do not have to schedule the Reorganize Physical File (RGZPFM) command.
– A media synchronization audit is performed to ensure that the media files on all BRMS/400 systems in the network are at the same level.
– Media movement is performed (if requested).
– Volume error statistics are collected, and the volume error logs are updated.
– A report on expired media is produced using the WRKMEDBRM command.
– A backup activity report is produced using the WRKMEDIBRM command.
– Library analysis is run to determine which libraries were not saved.
– A recovery analysis report is produced for all locations.
– A report on recovery activity (contact information) is produced.
– Various work files are cleaned up, such as DLOs that might have been left over by the spooled file backup.
– Media inventory registration is reconciled.
– Disk storage space is freed for any archived objects that were retrieved. Beginning with V3R6, you can specify the number of days you want to keep the object on the system after you have retrieved it. If the object has not been updated for this period, the maintenance job performs a save to a temporary file using the STG(*FREE) option. If the object has been changed, you have to archive it again. If the BRMS/400 maintenance task is not started or executed through using an exit program in the control group, you must start it manually by issuing the BRMS/400 STRMNTBRM command. • Reports: Check for: – Centralized Media Audit Report (on each system): This is automatically produced as part of the STRMNTBRM command for systems that are in a network. It is not produced when you are in a single system environment. You should understand why any errors are found and what updates BRMS/400 has made to correct them. – Backup Activity Report: This is automatically produced by the BRMS/400 maintenance task. You should look for errors in save operations. You should look under the Not Saved report column to identify the objects or libraries that were not saved and then take the appropriate actions. 60 Backup Recovery and Media Services for OS/400 – Save Strategy Exceptions Report: This report is automatically produced when the BRMS/400 maintenance task has completed. You should review the libraries that are not saved with their owners to ensure that an appropriate save strategy is in place for those libraries. You can add the libraries to the appropriate backup control group. If you have some libraries that are shown in the report as not saved but are already in the backup control group, investigate why the control group has not saved these libraries. You can also gather information about libraries that are not being saved by running the WRKMEDIBRM SAVTYPE(*NONE) command. If you do this online, remember to page up and look at all entries in the list. – Tape Volume Report, Volume Threshold Report, and the Volume Statistics Report: These reports are automatically produced as part of a BRMS/400 maintenance run. They can also be produced using the Print Media Exceptions for BRM (PRTMEDBRM) command. The reports show volumes that equal or exceed the usage or read/write threshold limits set for the media class. You should check these error thresholds and take the appropriate action to replace volumes with errors. You can do this with the Duplicate Media using BRM (DUPMEDBRM) command by using the following technique: 1. Attempt to recover data using the DUPMEDBRM command. 2. Perform a manual move of the volume in error (for example, to a location called DISPOSED). You should also check the number of free media volumes in each of the media classes. Use the WRKMEDBRM command as in producing the preceding media picking list. Or, with V3R2 and V3R6 and V3R7, you can use the CHKEXPBRM command: 1. Enroll or order new tapes if necessary. 2. Expire old tapes if necessary. 4.1.5 Moving media Moving media correctly is important. Apart from knowing exactly where your information is, it is vital to ensure that recovery data is moved to a secure location and that there is sufficient media in your scratch pool for backup or archive. The rules for moving media are defined in the move policy. The instructions for moving media are produced when the Move Media using BRM (MOVMEDBRM) command is performed, either as part of maintenance or on its own. 
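For example, a movement run that is limited to the local system's media might look like the following. This is a minimal sketch: SYSNAME(*LCL) is described later in this section, and all other parameters are left at their defaults, which you should review before using the command in production.
MOVMEDBRM SYSNAME(*LCL)
The resulting movement instructions can then be printed with the PRTMOVBRM command, as shown under the operations tasks later in this section.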
Each time media movement is run, BRMS/400 calculates in which location the media should be (according to the move policy), checks the location where it actually is, and if the two are different, issues a move request to move the media to the correct location. Media can be moved using option 8 on the Work with Media using BRM (WRKMEDBRM) command. However, if the media is under control of a move policy, the next time the MOVMEDBRM command is run, an instruction may be issued to move it back. This sort of situation can occur if media is retrieved from another location to restore objects from it. Chapter 4. Managing BRMS/400 61 If you want to retain the media and not return it, you must break the link with the move policy. You can do this from the Change Media using BRM (CHGMEDBRM) command and entering *NONE in the Move Policy field. If you are confident that media moves scheduled by the MOVMEDBRM command are always physically carried out, you can choose to have the media location updated when the MOVMEDBRM command is run. However, we recommend that you choose the option to Verify Media Movement before you update the BRMS/400 records. This can be done by running the Verify Moves using BRM (VFYMOVBRM) command and confirming that the media has actually been moved. If you have a media library, running the MOVMEDBRM command causes the RMVTAPCTG command to be issued to the library to eject the cartridge. Depending on the library type, and whether the system is a CISC or a RISC system, this may physically eject the cartridge or merely change its category to *EJECT. If you prefer the RMVTAPCTG action to be issued during the VFYMOVBRM command, rather than during the MOVMEDBRM command, change byte 210 in the Q1APRM data area to '1' using the following command: CHGDTAARA DTAARA(QUSRBRM/Q1APRM (210 1)) VALUE('1') BRMS/400 is shipped with this value set to blank. To find out which value you are currently using, use the command: DSPDTAARA DTAARA(QUSRBRM/Q1APRM) This data area has no effect when volumes are inserted. Using the data area and Verify Moves *YES/*NO provides four setups: • Q1APRM blank, verify moves *NO: Volumes that are scheduled to leave the MLB are ejected when the MOVMEDBRM command is run and they are “moved” in the BRMS/400 database. Volumes that are scheduled to return need to be physically placed into the library prior to the move being run. Once inserted, they have a category of *INSERT and when MOVMEDBRM is run, they are changed to *NOSHARE or *SHARE400 depending on the value in the Shared media parameter on the media class. • Q1APRM blank, verify moves *YES: Volumes that are scheduled to leave the MLB are ejected when the MOVMEDBRM command is run. However, they are not moved in the BRMS/400 database until the Verify Moves using BRM (VFYMOVBRM) command is run. For volumes that are scheduled to return, the MOVMEDBRM command is run first. The volumes need to be physically placed into the library. Once inserted, they have a category of *INSERT. When the VFYMOVBRM command is run, they are changed to *NOSHARE or *SHARE400 depending on the value in the Shared media parameter on the media class. • Q1APRM '1', verify moves *NO: This setup operates in exactly the same way as the first setup in this list. • Q1APRM '1', verify moves *YES: The MOVMEDBRM command is run first, which sets the volumes that are scheduled to leave the MLB up for verification. When the VFYMOVBRM command is run, the volumes are ejected and moved in the BRMS/400 database. 
For volumes that are scheduled to return, the MOVMEDBRM command is run first. The volumes need to be physically placed into the library. Once they are inserted, they have a category of *INSERT. When the VFYMOVBRM command is run, they are changed to *NOSHARE or *SHARE400 depending on the value in the Shared media parameter on the media class.
The MOVMEDBRM command can be run on any system in a network, and the resulting database updates are propagated around the network. It is clearly not desirable to have all systems moving media for all systems, so either movement is run on each system for that system's media only, or movement is run on one system in the network for all systems. We recommend that you run the Move Media using BRM (MOVMEDBRM) command separately on each system. This is accomplished by specifying *LCL in the SYSNAME parameter of the MOVMEDBRM command. You can run the MOVMEDBRM command on a “central” BRMS/400 system for all systems. This is a practical solution for many enterprises. However, if you have more than one tape library, and they are attached to different systems, you must run the command separately. The reason for this is that although the MOVMEDBRM command updates the BRMS/400 files for all systems, the associated RMVTAPCTG command only ejects cartridges on the library attached to the “central” system.
The operations tasks associated with moving media include printing reports, physically moving the media, and if required, verifying that the media has been moved. These tasks may be summarized as follows:
• Reports: Prior to performing any physical tape movement, direct the required reports (some of which may have been produced earlier) to an appropriate output queue and print them. In a networked environment where the individual processes can be scheduled and controlled as a single procedure, the controlling job should distribute the Recovery Volume Summary Report and the Disaster Recovery Report directly to the central system for printing. This is achieved using the Send Network Spooled File (SNDNETSPLF) command over SNA distribution services (SNADS). This way, each system also keeps copies of the two recovery reports for possible reference during an emergency.
REPORT NAME                      CONTEXT  SPLF NAME  PRODUCED BY
Media Location report            CENTRAL  QP1AMM     WRKMEDBRM
Media Movement Report            CENTRAL  QP1APVMS   PRTMOVBRM
Recovery Volume Summary Report   UNIQUE   QP1A2RCY   STRMNTBRM
Disaster Recovery Report         UNIQUE   QP1ARCY    STRMNTBRM
On the “central” system, print the Media Movement Reports using the following command:
PRTMOVBRM PERIOD(*CURRENT) TYPE(*VFY)
This command is used to print the report for the current day's media movements. If you cycle media off-site, you probably want the expired media to be returned at the same time that the current media is being collected. Use the following command to print the Media Movement Report:
PRTMOVBRM PERIOD(*BEGIN dddddd) TYPE(*NEXT) LOC(OFFSITE)
Here, dddddd is the next day's date.
• Move: Once the preceding reports are printed, the media can be physically moved. The Media Movement Report indicates which tapes should be moved. Note: It is essential that the Recovery Analysis Report and the Volume Summary Report for each BRMS/400 system are sent to the remote site (for example, “VAULT”) with the tapes.
• Verify: Verify Media Movement to confirm to BRMS/400 that all pending movements have been performed by the operator.
The VFYMOVBRM command should be run on the “central” system. From the list of media pending movement, enter option 1 next to those media volumes that are physically being moved. 4.1.6 Media management Ensuring that there are enough expired media of the required type in the required location to complete a save is one of the prime tasks of operations. For media libraries, such as the 3494 Automated Tape Library Data Server or where the home location is convenient to the tape drive, this is a question of having sufficient quantities of usable media. Where the media is stored elsewhere (for example, in a fireproof safe or off-site), it is also a question of selecting and moving the media. In V3R6, V3R7, and V3R2, two new parameters on the Media Policy influence media management. The Required volumes parameter ensures that the save does not start if there are fewer media available than indicated. The Mark volumes for duplication parameter causes media to be duplicated when the DUPMEDBRM command is run with the VOL(SEARCH) option. To be certain that you have sufficient media, the value can also be checked by user jobs using the Check Expired Media for BRM (CHKEXPBRM) command. For example, the CHKEXPBRM command can be incorporated into a job scheduler to determine, at various times, if there are enough expired media volumes available for a save operation. Figure 39 shows how the CHKEXPBRM command checks for a specific number of volumes. Figure 39. Checking for expired media If sufficient volumes are available, the display shown in Figure 40 on page 64 appears. Check Expired Media for BRM (CHKEXPBRM) Type choices, press Enter. Required volumes . . . . . . . . > 5 1-9999, *MEDPCY Media class . . . . . . . . . . > QIC120 NETCHK, QIC1000, QIC120... Location . . . . . . . . . . . . *ANY *ANY, *HOME, COMPROOM... 64 Backup Recovery and Media Services for OS/400 Figure 40. The message indicating that the request was successful Although you should always monitor for the availability of media volumes, there may be times when additional volumes need to be introduced to complete the save. Automatic enrollment of media allows you to automatically add new media used in output operations to the media inventory if the request has been done using a BRMS media class and is on this device. To enable this function, set the Auto enroll media parameter to *SYSPCY or *YES in the BRMS/400 device description. If you are enabling this globally, you should set the Auto enroll media parameter in the system policy to *YES. You should also ensure that you have enough licenses to allow for any additional media. Note: If you are using a media library, such as the 3494 Automated Tape Library Data Server, automatic enrollment of media during a save operation does not occur because BRMS/400 has to specify a volume to be mounted. 4.1.7 Daily housekeeping You should perform the following tasks on a daily basis: • All of the reports that were printed should be filed. • Check all of the BRMS/400 spooled files and delete any that are older than the specified retention period. • If you implemented BRMS/400 archiving, use the Start Archive using BRM (STRARCBRM) command to produce the Archive Candidate Report. This should be repeated for each archive control group. • Use the Add Media to BRM (ADDMEDBRM) command or the Add Media Library Media to BRM (ADDMLMBRM) command to enroll and initialize new media that you may have. 
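Following on from the last item in this list, enrolling a new volume in the media inventory is a single command. This is a minimal sketch: the volume serial D09005 is an arbitrary example, and QIC120 is the media class used elsewhere in this chapter.
ADDMEDBRM VOL(D09005) MEDCLS(QIC120)
If the cartridge is in a media library such as the 3494 Automated Tape Library Data Server, use the ADDMLMBRM command instead so that the volume is enrolled through the library.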
4.2 Setting up your own control groups
Although BRMS/400 provides default control groups to back up and restore your entire system, it is probable that you will define your own control groups. One reason is to provide the flexibility to start and stop various subsystems, hold job queues, save spooled files, or even use the save-while-active function to perform some of your backups. Another reason is to satisfy requirements to perform different tasks at different times (daily, weekly, at period end, and so on).
Additional Message Information Message ID . . . . . . : BRM1933 Severity . . . . . . . : 00 Message type . . . . . : Completion Date sent . . . . . . : 06/14/00 Time sent . . . . . . : 20:14:26 Message . . . . : Request for 5 expired volumes successful. Cause . . . . . : The check expired media command requested 5 volumes. 24 expired volumes are available for media class QIC120 at location *ANY.
Depending on your recovery plans, you may need to recover a critical application and resume processing before you recover the remainder of your system. You can easily separate the application from your other backups using control groups. We recommend that you do not change the default BRMS/400 control groups. You should first copy them and change the new control groups, rather than making changes to the original control groups.
4.2.1 Considerations for libraries that affect BRMS/400
When setting up a backup control group, you should carefully plan how you will save your BRMS and other critical libraries. Besides the BRMS/400 libraries QBRM and QUSRBRM, if you have a 3494 Automated Tape Library Data Server installed on CISC-based AS/400 systems, you have QMLD and QUSRMLD. QMLD contains commands and programs; QUSRMLD contains user system configuration. The QUSRSYS library also affects your BRMS/400 save operation when you are using a 3494 Automated Tape Library Data Server. This library contains three important files that are used during a save operation:
• QATADEV contains a list of automated tape libraries.
• QATAMID contains a list of volume identifiers used during a save operation.
• QATACFG contains a list of media categories.
There are also logical files and outfiles used for communications to the 3494 Automated Tape Library Data Server. When planning to save libraries QUSRSYS and QUSRBRM, it is extremely important to understand the implications of seize locks when saving in a non-restricted state. For example, assume that you are saving library QUSRSYS to a volume that is already mounted. The system is unable to save all the data on the mounted tape and requires another volume to be mounted. Because QUSRSYS is locked, the save operation is unable to read and update the required files. The save is in a deadlock condition and fails with a message identifier of CPA37A0. To minimize the chances of spanning QUSRSYS and QUSRBRM across multiple volumes and to avoid lock conflicts, we strongly recommend that you create a separate control group to save BRMS/400 data before you save *ALLUSR data. You must ensure that these libraries are omitted from the backup policy; otherwise, you save them twice. These recommendations assume that you can fit the QUSRSYS and QUSRBRM libraries on the mounted volume and that you are performing the save operation in a non-restricted state. See Appendix D, “Performing restricted saves to a 3494 on CISC” on page 305, for an example of saving them in a restricted state.
4.2.2 Control group to save QGPL, QUSRSYS, and QUSRBRM Figure 41 on page 66 contains a sample backup control group based on a V3R2 AS/400 system to perform a weekly save of all user data to a media library. The example is for saving all user data from the system, including security information, configuration information, document library objects, and directory information from the integrated file system. Prior to starting the backup, ensure that you have varied off the Integrated PC Server (FSIOP), if you have one installed. 66 Backup Recovery and Media Services for OS/400 For setting up a control group to save IBM data, see 4.2.5, “Control group to save QMLD and QUSRMLD” on page 68. Use the WRKCTLGBRM command to create a backup control group called WKLIBM09 as shown in Figure 41. Figure 41. Sample backup control group When performing saves using *ALLUSR, or *ALLPROD, ensure that you understand which “Q” libraries are saved. See Table 2 on page 40 for more information. It is also important to ensure that you omit libraries that are saved as part of *ALLUSR, when you are planning to save them outside the *ALLUSR control group entry. In the example in Figure 41, notice that sequence number 100 is added to save spooled files. In our example, we used a backup list called SAVEOUTQ that contains a list of output queues that are specified as sequence numbers, which is the same as the exit programs. You can have multiple output queues within one backup list item. For additional details on how to use BRMS/400 for spooled file saves, see 4.4, “Saving spooled files using BRMS/400” on page 84. Display Backup Control Group Entries SYSTEM09 Group . . . . . . . . . . : WKLIBM09 Default activity . . . . : *BKUPCY Text . . . . . . . . . . : M09: Weekly Save Control Group Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 QGPL *DFTACT *NO *SYNCLIB *LIB 20 QUSRSYS *DFTACT *NO *SYNCLIB *LIB 30 QUSRBRM *DFTACT *NO *SYNCLIB *LIB 40 *SAVSECDTA *DFTACT *NO 50 *SAVCFG *DFTACT *NO 60 *IBM *DFTACT *NO *NO 70 *ALLUSR *DFTACT *NO *NO 80 *ALLDLO *DFTACT *NO *NO 90 LINKLIST *LNK *DFTACT *NO *NO 100 SAVEOUTQ *SPL *DFTACT 110 *EXIT *DFTACT 120 *EXIT *DFTACT 130 *EXIT *DFTACT 140 *EXIT *DFTACT 150 *EXIT *DFTACT Chapter 4. Managing BRMS/400 67 4.2.3 User exits and control groups You can create a backup control group entry of *EXIT to perform user command processing. Select F10 (Change item) for each *EXIT to go into the User Exit Maintenance display, and enter the command you want to process. This can be done on the Create Backup Control Group Entries display, or you can go back afterwards and change the item on the Edit Backup Control Group Entries display. Figure 42 is provided for user exit in sequence 110. Figure 42. User Exit Maintenance Tab down to each user exit sequence number that you have specified, and use F10 to assign a command that you want to process. The commands that are executed by the remaining sequence numbers in our WKLIBM09 backup control group are: Seq No. Command ------- ------------------------------------------------------ 120 SBMJOB CMD(STRMNTBRM) JOB(STRMNTBRM) JOBQ(BRMSJOBQ) OUTQ(BRMSOUTQ) 130 SBMJOB CMD(DSPLOGBRM OUTPUT(*PRINT)) JOB(DSPLOGBRM) JOBQ(BRMSJOBQ) OUTQ(BRMSOUTQ) 140 SBMJOB CMD(PRTMEDBRM VOL(*EXCP)) JOB(PRTMEDBRM) JOBQ(PRTMEDBRM) OUTQ(BRMSOUTQ) The above example includes various exits that make up a series of commands that are executed after the saves have completed. 
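After you create the WKLIBM09 control group, you can run it and then confirm what BRMS/400 recorded for it. Both commands are used elsewhere in this redbook; only the control group name is specific to this example.
STRBKUBRM CTLGRP(WKLIBM09)
WRKMEDIBRM CTLGRP(WKLIBM09)
The WRKMEDIBRM command positions you at the media information created by the control group, which is also the starting point for a later restore of any of its items.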
Rather than specifying a series of user exits, you may want to create a simple CL program that includes all of the commands that you want to process and include this CL program as a single user exit. It is extremely important that you understand the recovery implications when setting up your own backup control groups. For example, say that you are planning to perform an *ALLUSR save in your control group. Before you perform this *ALLUSR save, you want to save libraries QGPL, QUSRSYS, and QUSRBRM ahead of other libraries. You have set up the backup control group as shown in Figure 41, and you have defined the libraries to omit in your backup policy. If you plan to perform the restore using BRMS/400, the restore is seamless. However, if you plan to perform the restore operation outside BRMS/400, such as using the OS/400 native RSTLIB LIB(*ALLUSR) command, you do not recover libraries QGPL, QUSRSYS, and QUSRBRM. These libraries were saved separately so you must restore them separately. Important User Exit Maintenance SYSTEM09 Type command, press Enter. Sequence number . . . . . . . : 110 Where used . . . . . . . . . : *EXIT Weekly activity . . . . . . . : *DFTACT SMTWTFS Command . . . . . . . . . . . . SNDMSG MSG('Weekly Backups are Complete') TOUSER(*SYSOPR) 68 Backup Recovery and Media Services for OS/400 Note: There is no command for *EXIT 150. This exit is used to perform some post-processing tasks before the control group has finished processing. This means that the subsystems are restarted and the job queues are released. The subsystems that require ending and the job queues that require holding are set up as part of the control group set up. In our example, we use *EXIT 120, *EXIT 130, and *EXIT 140 to submit jobs to the BRMSJOBQ job queue. Since the last executable entry is *EXIT 140, and we want this to be processed within the control group, we have to add an extra *EXIT to allow for post-processing. The same as a post-processing exit, you can also add a preprocessing exit. See Figure 45 on page 76 for an example. 4.2.4 Omitting libraries from a control group If you need to back up all libraries with the exception of one or two, it is more convenient to specify *ALLUSR or *IBM and omit the libraries, rather than to specify all of the required libraries individually. For example, if you have a 3494 Automated Tape Library Data Server installed on a CISC system, you probably have made a special provision for backing up critical QMLD and QUSRMLD libraries together with the QGPL, QUSRSYS, and QUSRBRM libraries. See 4.2.5, “Control group to save QMLD and QUSRMLD” on page 68, and Appendix D, “Performing restricted saves to a 3494 on CISC” on page 305, for more information on this. It is not necessary to back them up a second time as part of *ALLUSR, so they should be omitted. Use F10 from the Work with Backup Control Groups display to go directly into Work with Libraries to Omit from Backups display shown in Figure 43. Figure 43. Omitting libraries from backups 4.2.5 Control group to save QMLD and QUSRMLD On CISC systems, the 3494 Automated Tape Library Data Server still requires MLDD software. Appendix A, “Summary of changes” on page 289, suggests a way to back up the QMLD and QUSRMLD libraries when the system is in a restricted state. You may want to back up these libraries at other times. We strongly recommend that you do not save them with the *IBM backup list. 
IBM Informational APAR (II08968) contains information on the possibility of the save job failing due to the loss of a communications link between the AS/400 system and the 3494. You can access the Informational APAR through the home page at: http://as400service.rochester.ibm.com/
Work with Libraries to Omit from Backups SYSTEM09 Type options, press Enter. 1=Add 4=Remove Opt Type Library *ALLUSR QGPL *ALLUSR QUSRBRM *ALLUSR QUSRSYS *IBM QMLD *IBM QUSRMLD
The recommended steps are as follows:
1. Omit the QMLD and QUSRMLD libraries from the backup policy as shown in Figure 43.
2. Add the libraries to the backup control group that does the *SAVSYS as follows:
Weekly Retain Save
Backup List Activity Object While
Seq Items Type SMTWTFS Detail Active
____ __________ ____ ________ ________ ______
10 *SAVSYS *DFTACT
20 QMLD *DFTACT *NO *NO
30 QUSRMLD *DFTACT *NO *NO
40 *EXIT *DFTACT
The *EXIT does not have anything in it. It is there in case you want to specify other libraries after QUSRMLD. In that case, the *EXIT causes BRMS/400 to start a new SAVLIB, and locks on key files are minimized. When *SAVSYS has completed, BRMS/400 starts QMLDSBS. This may cause some object locks in QMLD and QUSRMLD, but the significant objects are saved. See Appendix D, “Performing restricted saves to a 3494 on CISC” on page 305, for information on how to save using the 3494 Automated Tape Library Data Server while the system is in a restricted state. If the BRM commands are used in a CL program, use the following commands (use the DEV and MEDPCY parameters as appropriate):
SAVSYSBRM DEV(xxxxx) MEDPCY(xxxxxx) STRCTLSBS(*NO)
SAVLIBBRM LIB(QMLD QUSRMLD) DEV(XXXXX) MEDPCY(XXXXX) SEQNBR(*END)
4.2.6 Backup control group attributes
After creating a backup control group, you should always set the attributes for the control group by selecting option 8 for the control group name. The backup control group attributes allow you to add backup information (for example, media policies and devices to use) and to override the default backup policy settings based on your overall backup and recovery strategy. Some of the attributes require careful planning before they are changed either in the backup control group attributes or in the backup policy. The key attributes are:
• Media Policy: Enter here the media policies, full and incremental, that you want to use for this control group.
• Backup devices: Backup devices specify the name of the backup device you want to use for this control group. You can specify up to four backup devices. If more than one device is specified, they must have the same characteristics. This feature is less widely used now that tapes are written in both directions (no rewind time) and when successive tapes can be automatically loaded by a tape drive in sequential or random mode, or by an automated tape library. The *MEDCLS special value specifies that any available device that supports the media class specified in the media policy may be selected. BRMS/400 searches for a device alphabetically according to the BRMS/400 Device Table.
• Sign-off interactive users: This is useful to advise users that a backup is about to take place and to sign them off. You can specify exceptions to this, either devices or users, in the system policy. Messages can be issued at five-minute intervals to warn the users. However, there is no check if users sign back on again.
If this is likely to be a problem, you should consider stopping subsystems. • Automatically backup media information: BRMS/400 records media information when objects are saved to volumes or save files. You can control how much media information BRMS/400 records when objects are saved, as well as how much media information is saved. Your decision here has an impact on the performance of your save operation and the amount of media used to process the backup. Additionally, the amount of recovery data recorded during backup affects at what level of detail (library or object) you can ask BRMS/400 to prompt for recovery. The default is to save library-level (*LIB) information after every backup operation using that policy or control group. The other alternatives are *OBJ, which retain object-level detail, or *NONE, which does not save any information for recovery purposes. The value of *NONE should be used with caution. If you are performing multiple saves, you may not want to save the recovery information after every save, but save it once at the end. Alternatively, if you keep a large database of recovery information, you may not want to save this after every single file or object save. Caution is advised because your recovery may be compromised until you save the recovery information. With object-level information, you can also retain member (*MBR) information for members associated with *FILE type objects. We recommend that you be selective with retaining object-level information because it increases your disk storage considerably and affects your save and restore times. Unless you are constantly restoring individual objects from a library, there is no need to keep object-level information. Remember that you can always restore an individual object even without keeping object-level information as long as you know the library in which the object was stored. You can search your save history for the library using the Work with Media Information (WRKMEDIBRM) command. You select the library you want to restore. Then, on the Select Recovery Items display, you select the option to restore specific objects (option 7) rather than selecting the entire library (option 1). If you select the *OBJ parameter for the Automatically backup media information field, you should ensure that you are saving objects at the object level in your control groups. To verify whether you are saving at the object level, go to the Edit Backup Control Group display for the control group, and review the Retain object detail field for each backup item. Those backup items that show *YES, *OBJ, or *MBR in the Retain object detail field keep object detail. Additionally, those items that do not display a Retain object detail field indicate that object-level detail is automatically kept. Note Chapter 4. Managing BRMS/400 71 BRMS/400 recovery information consists of multiple files that are appended at the end of your last tape volume or to a save file associated with the save files containing your saved data. 
The files required for library-level information are:
QA1A1DV  Device record: By type
QA1A1MD  MLB Device record: By location
QA1ACN   Container status
QA1ADV   Device record: By name
QA1AHS   Save history (library level)
QA1ALR   Save history: Save statistics by library
QA1AMD   MLB Device record: By name
QA1AMM   Media status
QA1ASP   System policy
QA1AMT   Media class attributes
QA1AOQ   Backup spooled file entries
The following files are also saved if you save object-level information:
QA1ADI   IFS directory information
QA1ALI   IFS object link information
QA1AOD   Object detail
See the Start Maintenance for BRM (STRMNTBRM) command parameters in Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for a discussion on using BRMS/400 maintenance to remove object-level detail while retaining library-level detail.
• Append to media: You may want to use APPEND(*YES), particularly with 3590 cartridges. BRMS/400 normally chooses an expired volume for output operations, unless you specify APPEND(*YES) on the backup policy or control group attributes. When selecting an active volume for APPEND(*YES), BRMS/400 tries to choose a volume with the same media class, same system name, same move policy, and same expiration date. If no volume is available that matches these criteria, BRMS looks for a volume with the earliest expiration date. This selection method ensures that the oldest volumes are chosen for appending. See 3.13.1, “Appending to media rules” on page 44, for information on append selection rules. In some cases, this selection method is appropriate, but if the customer wants to continue filling the last used volume, this does not work. For example, if you are doing a non-cumulative (incremental) save, you may want all of the incremental saves to be on the same volume to minimize reloading volumes when restoring. Other customers may want to alternate the volumes, so that, for example, a full save is done on Sunday. Monday, Wednesday, and Friday saves are incremental and should be to a different tape. Tuesday and Thursday saves are also incremental, but should be to another different volume, ensuring that if one of the incremental volumes is lost or damaged, a maximum of one day's backups are lost. If your library spans multiple save volumes, you must mount the first volume even though you may know that the actual object is, for example, in the third volume. This is an OS/400 limitation, and BRMS/400 uses the same underlying code for save and restore operations as the native OS/400 commands. Note
There can be instances where you want to preserve the text assigned to the volume name. In this situation, blank out the text field for the control group. This indicates to BRMS/400 that you want to preserve the text currently associated with the volume in the media inventory and not use the control group text. There can also be cases where you do not want text from the backup control group or the current text assigned to the volume. In this case, specify *NONE as the text for this control group. Media that is created as a result of saving this control group has *NONE as the descriptive text. 4.3 Save-while-active and BRMS/400 The save-while-active function allows you to modify objects while they are being saved. It is possible to save while active without stopping the users. However, this type of usage requires you to implement commitment control to ensure that save and restore operations are always at a transaction boundary. If your application does not use journaling or commitment control, and for ease of recovery, we recommend that you shut down your application until a save-while-active synchronization (also known as checkpoint) is reached. Once synchronization is reached, the system releases the exclusive locks on the library you are saving, and users can resume normal activity. The system continues to save the data to a tape device as a background task. This is where you benefit most from the save-while-active function. The data in your library can be used by your users without having to wait until the entire library is saved on a tape device. The gain is the time it takes to write your data to the tape device from the point of reaching synchronization. In general, if you have large libraries with single member physical files, the time to establish the checkpoint can be small compared to the time to write to the tape. For example, assume that the entire save takes one hour at present, and the library contains single member physical files. Without the save-while-active function, the entire library is locked for one hour and users are not allowed to use any file in that library until the save is complete. With the save-while-active function, you may find that the checkpoint is established within 20 minutes, for example. You can monitor for the checkpoint message and allow users to continue using the files in the library. This increases your application availability by 40 minutes. Chapter 4. Managing BRMS/400 73 Backup and Recovery - Advanced, SC41-4305, contains a detailed explanation on the save-while-active function. It also includes information on performance considerations, object locks, and the limitations of the save-while-active function. We strongly recommend that you review this book before you implement the save-while-active functions in BRMS/400. 4.3.1 Save-while-active implementation in BRMS/400 Within BRMS/400, the save-while-active function is implemented through the backup control groups. BRMS/400 also provides the Monitor Save While Active for BRM (MONSWABRM) command that can be used through an exit in the control group. This command monitors for checkpoint messages and allows you to process another command, once the checkpoint message has been monitored. For example, you can restart a subsystem or an application or send a message to your users indicating that activity related to the application can be restarted. See 4.3.3, “Using the MONSWABRM command” on page 75, for more information. 
Figure 44 shows an example of creating an *EXIT on the Edit Backup Control Group Entries display.

Figure 44. Edit Backup Control Group Entries display: Creating an *EXIT

                      Edit Backup Control Group Entries
 Group . . . . . . . . . . :  SWA
 Default activity  . . . . .  *BKUPCY
 Text  . . . . . . . . . . .  Save While Active Control Group
 Type information, press Enter.
                                  Weekly    Retain   Save       SWA  1
      Backup      List            Activity  Object   While      Message
  Seq Items       Type   SMTWTFS  Detail    Active   Queue
  10  *EXIT              *DFTACT                                     2
  20  *EXIT              *DFTACT                                     3
  30  LIBA               FFFFFFF  *NO       *SYNCLIB  *LIB
  40  LIBB               FFFFFFF  *NO       *SYNCLIB  *LIB
  50  *EXIT

The numbers shown in reverse image (bold) in Figure 44 are explained here:

1 Beginning with V3R6 and V3R2, the control group contains a new SWA Message Queue field as shown in Figure 44. This function is not available in V3R1. With the SWA Message Queue value in the control group, you can specify the name of the message queue where you want the checkpoint messages to go. By default, the value is set to *LIB, which means the messages are sent to the message queue of the library name specified in the sequence number of the control group. In later examples, we discuss the implications of using *LIB or a message queue name for this value.

2 Sequence number 10 in the control group is used for any preprocessing that needs to be done before starting the control group activities. For example, you may want to ensure that the subsystems defined in the control group have ended, job queues have been held, or users have signed off prior to starting the next exit. In our example, the next exit is the MONSWABRM command. The advantage here is that the MONSWABRM command does not lose any time from its default 60 minutes while the preprocessing tasks complete.

3 This exit is used for MONSWABRM command processing as part of your backup control group. See Figure 45 on page 76 for additional information.

Refer to 4.3.5, “Examples of using save while active with BRMS/400” on page 78, for more information. That section addresses several examples of using the save-while-active function with BRMS/400, including the use of the MONSWABRM command.

4.3.2 Save-while-active parameters

For each backup item in a control group, you may elect to save it while active. See Figure 44 on page 73 for an example of this option. There are a number of alternatives that are described in the help text. The possible values are:

• *NO: Objects that are in use are not saved. Objects cannot be updated while they are being saved. Data integrity is preserved with maximum save performance.

• *YES: Document library objects can be changed during the save request. Objects that are in use but are not using application recovery are not saved. See Backup and Recovery - Advanced, SC41-4305, for more information on DLOs, saving while an object is in use, and application recovery. If you use *YES with a non-document library object, *YES functions the same as the *LIB value.

• *LIB: Objects in a library can be saved while they are in use by another job. All of the objects in a library reach a checkpoint together and are saved in a consistent state in relationship to each other. If multiple libraries are specified on the backup control group, the checkpoint processing is performed individually for the objects within each specified library. For example, if you are planning to save LIBA and LIBB, the system performs two separate SAVLIB commands and establishes two checkpoints.
• *SYNCLIB: Objects in a library can be saved while they are in use by another job. All of the objects and all of the libraries specified within a backup control group reach a checkpoint together and are saved in a consistent state in relationship to each other. If you use *SYNCLIB for saves within a BRMS/400 control group, and the media policy specifies that the saves are to be done to save files, you need to understand the following points: – When saving to save files, OS/400 restricts you to save a single library to save files. BRMS/400 adopts the same restrictions. – The control group uses *LIB level synchronization instead of *SYNCLIB. Only physical files with members have the same save active date (and time) time stamp. Libraries with thousands of objects may be too large for this option. Note Chapter 4. Managing BRMS/400 75 – If you are using the MONSWABRM command to monitor for save-while-active messages, you receive one message from the first library that is saved. After this, the MONSWABRM command is ended. – If you specify a message queue in the SWA Message Queue field in the Edit Control Group Entries display, BRMS/400 sends the synchronization message for every library. Until a PTF for APAR SA61101 is available, the message queue must exist in the QUSRBRM library. – BRMS/400 completes the save processing without any warning or error messages. It does not warn you that the save process has adopted an *LIB level of synchronization. • *SYSDFN: Objects in a library can be saved while they are in use by another job. Objects in a library may reach checkpoints at different times and may not be in a consistent state in relationship to each other. If you are going to use the Monitor Save While Active for BRM (MONSWABRM) command to perform operations when a checkpoint has been reached, the *SYSDFN option may not be convenient to use. You cannot be sure which database network within a library has reached a checkpoint. This makes it difficult to release the library to users for normal work. Note: Specifying this value eliminates some size restrictions and can allow a library to be saved that cannot be saved with SAVACT(*LIB). However, there is a concern with the ability to recover to a known state. See Backup and Recovery - Advanced, SC41-4305, for additional information. 4.3.3 Using the MONSWABRM command The MONSWABRM command can be used through an *EXIT in your backup or archive control group. The MONSWABRM command monitors for system messages CPI3710 and CPI3712. These messages indicate that the libraries specified in your backup control group are synchronized. Figure 45 on page 76 shows you an example of how you can use the MONSWABRM command through an exit from the control group. Different items (libraries or backup lists) to be saved-while-active in your control group, interspersed special operations, such as *EXIT or *LOAD, or different activities have an effect on your save-while-active processing. See 4.3.4, “Synchronizing blocks of libraries” on page 76, for more information. Notes 76 Backup Recovery and Media Services for OS/400 Figure 45. User Exit Maintenance display: Completed MONSWABRM command You can use the LIB parameter to specify the message queue that you are monitoring for synchronization messages to arrive. You can also specify a value of *MSGQ, followed by specifying the name of the message queue in the MSGQ parameter. The *MSGQ value and the MSGQ parameter are not available in V3R1. 
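If you prefer not to tie the exit to a library name, the same exit command can monitor a named queue instead by using the *MSGQ value together with the MSGQ parameter described above. A minimal sketch (SWAMSGQ and the message text are examples):

   MONSWABRM LIB(*MSGQ) MSGQ(SWAMSGQ) +
     CMD(SNDMSG MSG('Checkpoint reached, application can be restarted') TOUSR(*SYSOPR))

The queue named on MSGQ should match the SWA Message Queue value entered for the corresponding library entries in the control group. The completed exit in Figure 45, shown next, uses the library-name form, LIB(LIBA).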
                          User Exit Maintenance
 Type command, press Enter.
 Sequence number . . . . . . . :   20
 Where used  . . . . . . . . . :   *EXIT
 Weekly activity . . . . . . . :   *DFTACT   SMTWTFS
 Command . . . . . . . . . . . .   MONSWABRM LIB(LIBA) CMD(STRSBSBRM GROUP(SWA))
                                                                      F4

You can use the CMD parameter to execute a command once the synchronization message has arrived. In the example shown in Figure 45, we chose to run the Start Subsystem using BRM (STRSBSBRM) command after synchronization occurred for the libraries we are saving. This makes it possible to quiesce an application only until synchronization has occurred and to make it available to end users again while the save process continues writing data to tape.

Note: Instead of the STRSBSBRM command, you can use the native STRSBS command, in which case you specify the name of the subsystem to be started. The advantage of the STRSBSBRM command over STRSBS is that you do not need to remember which subsystems need to be restarted. BRMS/400 automatically restarts those subsystems that it had ended prior to starting the control group processing. These subsystems are specified as part of the control group setup.

4.3.4 Synchronizing blocks of libraries

To synchronize a set of libraries together at the set level, rather than for every item in the control group, you must ensure that the libraries are listed in sequence without any special operations such as *EXIT or *LOAD. You must also ensure that the values for the Retain object detail, Weekly activity, and Save while active fields are the same for the list of libraries that you specified in your control group. BRMS/400 uses a single save command to process these libraries for identical fields in the control group. If you split the libraries by using special operations, such as an *EXIT or a *LOAD, BRMS/400 processes the sets separately as shown in Figure 46.

Figure 46. Synchronizing multiple libraries with save while active

                                  Weekly    Retain   Save       SWA
      Backup      List            Activity  Object   While      Message
  Seq Items       Type   SMTWTFS  Detail    Active   Queue
  10  *EXIT              *DFTACT
  20  LIBA               FFFFFFF  *NO       *SYNCLIB  *LIB
  30  LIBB               FFFFFFF  *NO       *SYNCLIB  *LIB
  40  *EXIT
  50  LIBC               *DFTACT  *YES      *SYNCLIB  *LIB
  60  LIBD               *DFTACT  *YES      *SYNCLIB  *LIB

In the example in Figure 46, libraries LIBA and LIBB are synchronized together. Libraries LIBC and LIBD are synchronized later. The *EXITs each perform a MONSWABRM command, which monitors for the synchronization point. LIBA is used for the first set, and LIBC is used for the second set for save-while-active synchronization point messages. One of the advantages of splitting the libraries into two sets is that it allows you to specify different weekly activity or retain object detail information for LIBA and LIBB compared to LIBC and LIBD.

If you use generic names for the libraries, such as A*, B*, and C*, and you specify *SYNCLIB, BRMS/400 groups all of the libraries together and performs a single save operation. You receive a single synchronization message. A single save command supports up to 300 libraries to be entered as a list. This is an OS/400 restriction. If you have more than 300 libraries, BRMS/400 issues another save command to process the remaining libraries.

In this example, the SWA Message Queue value in the control group is left as *LIB. Because of this, it is important that you use the name of the first library in the LIB value for the MONSWABRM command.
If you use a name other than the first library name, the MONSWABRM command cannot monitor for the save-while-active synchronization message. In the meantime, your control group has already finished processing, and you do not benefit by using the save-while-active message queue function. Important By default, the MONSWABRM command waits for 3600 seconds (one hour) for the synchronization message issued by the system. You must ensure that you increase the save-while-active wait time in the MONSWABRM command if your libraries require over one hour to reach synchronization. Remember that, in the release covered in this redbook, OS/400 has a restriction of up to 300 libraries that can be specified in the list of libraries to be saved. If your list of libraries is *ALLPROD or *ALLTEST, or if the number of generic libraries exceeds 300, BRMS/400 issues another save command to save the remaining libraries. Note 78 Backup Recovery and Media Services for OS/400 4.3.5 Examples of using save while active with BRMS/400 This section contains various examples of using the save-while-active function with BRMS/400. It also contains examples of using the MONSWABRM command. We assume that you are already familiar with how to set up control group entries and use exits within the control groups. 4.3.5.1 Example 1 This example is for V3R1 of BRMS/400. It does not contain the SWA Message Queue field on the control group, and the MONSWABRM command does not have the MSGQ parameter or *MSGQ value for the LIB parameter. Figure 47 shows you how to save all of the libraries specified within your backup control group with a single save command. The synchronization point is monitored by the MONSWABRM command. Figure 47. Save-while-active example 1 When you submit the save of the preceding control group, BRMS/400 first acquires a volume and begins control group processing. It submits the MONSWABRM job in QBATCH subsystem. The MONSWABRM command creates a message queue of LIBA 2 in library QUSRBRM and waits for the system to send a message when the synchronization point is established by the libraries in the control group. It waits for a default of one hour for a message to arrive. The job goes into a MSGW status. BRMS/400 checks the control group entries and sees that they are all identical for 1 *SYNCLIB processing. It builds a list of libraries to be submitted internally to the save process. In this example, a single synchronization point is established. The system sends the synchronization message to LIBA message queue. The MONSWABRM command receives this message queue and processes the command specified in the CMD value. The MONSWABRM command deletes the message queue it created in QUSRBRM and ends the job. When you perform a full save, BRMS/400 always uses the first library name as the message queue that receives the synchronization message. Therefore, it is important that you use the first library name in the 2 MONSWABRM command. Weekly Retain Save Backup List Activity Object While Seq Items Type SMTWTFS Detail Active ___ __________ ____ _______ ____ ____ 10 *EXIT ____ *DFTACT 20 *EXIT ____ *DFTACT 30 LIBA ____ *DFTACT *NO *SYNCLIB 1 30 LIBB ____ *DFTACT *NO *SYNCLIB 30 LIBC ____ *DFTACT *NO *SYNCLIB 30 LIBD ____ *DFTACT *NO *SYNCLIB MONSWABRM LIB(LIBA) CMD (STRSBSBRM CTLGRP(MONDAY01)) 2 Chapter 4. Managing BRMS/400 79 4.3.5.2 Example 2 This example shows you how to obtain synchronization messages for every library that you save in your control group. This example assumes that you are performing a full save. 
The MONSWABRM command is used for monitoring synchronization messages (Figure 48). Figure 48. Save-while-active example 2 The exits have the following settings for the MONSWABRM command: 20 - MONSWABRM LIB(ACCOUNTS) CMD(SNDMSG MSG('Account Libraries have been synchronized.') TOUSR(*SYSOPR)) 40 - MONSWABRM LIB(SALES) CMD(SNDMSG MSG('Sales Libraries have been synchronized.') TOUSR(*SYSOPR)) 60 - MONSWABRM LIB(PAYROLL) CMD(SNDMSG MSG('Payroll Libraries have been synchronized.') TOUSR(*SYSOPR)) 80 - MONSWABRM LIB(MFG) CMD(SNDMSG MSG('Manufacturing libraries have been synchronized.') TOUSR(*SYSOPR)) In this example, BRMS/400 issues four saves. Each save establishes a synchronization point at the *LIB level, and a message is sent to the message queue specified in the SWA Message Queue field in the control group. The MONSWABRM command creates a message queue in the QUSRBRM library, using the same name as the value you specified in the LIB parameter. This message queue waits to receive the synchronization message from OS/400. As soon as the message is received, the MONSWABRM command processes the command specified in the CMD parameter. It deletes the message queue that it created in library QUSRBRM and ends the job. The MONSWABRM waits for a default of one hour to receive the synchronization message from OS/400. If no messages are received, the command processing is ended. The MONSWABRM command also deletes any user created message queue in the QUSRBRM library that matches the message queue name specified in the LIB parameter. Important Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 *EXIT *DFTACT 20 *EXIT *DFTACT 30 LIBA *DFTACT *NO *LIB ACCOUNTS 40 *EXIT *DFTACT 50 LIBB *DFTACT *NO *LIB SALES 60 *EXIT *DFTACT 70 LIBC *DFTACT *NO *LIB PAYROLL 80 *EXIT *DFTACT 90 LIBD *DFTACT *NO *LIB MFG 80 Backup Recovery and Media Services for OS/400 In this example, we specified the name of the message queue in the SWA Message Queue value rather than using the default of *LIB. The message queue name specified in the control group entry must match the message queue name in the LIB parameter of the MONSWABRM command. The MONSWABRM automatically creates and deletes the message queue for you. For example, when the LIBB is synchronized, OS/400 sends the synchronization message to message queue SALES. The SALES message queue is monitored by the MONSWABRM command and is created in the QUSRBRM library when the control group processing is started. This message queue is automatically deleted when the SNDMSG command defined in the MONSWABRM command is processed. The SNDMSG command sends a message to QSYSOPR informing you that the application can be used. Instead of invoking the SNDMSG command, you can start another process such as release a job queue, start a subsystem, or call a program. You may not want to use the STRSBSBRM command until the last exit, because this starts all of the subsystems that were ended by the control group. This assumes that you defined the subsystems to end in your control group. The preceding example allows you to release applications to the users as and when they are available. The disadvantage here is that BRMS/400 has to perform four separate save commands to save the four libraries. 4.3.5.3 Example 3 In this example, the MONSWABRM command is not used at all. This is only possible if you are at V3R6 or later or at V3R2. 
If all you want from the save-while-active function is the message when the libraries reach synchronization point, you can use SWA Message Queue as shown in Figure 49. Figure 49. Save-while-active example 3 The OPER01 message queue is used by the system to log the following messages: 0 of 4 libraries processed. Started LIBA at 02/03/01 10:20:06. 1 of 4 libraries processed. Started LIBA at 02/03/01 10:20:07. By default, the backup control group job and all of the MONSWABRM jobs are submitted to QBATCH subsystem. You must ensure that you have enough activity levels to perform your control group save and process all of the MONSWABRM commands. If you prefer, you can use another subsystem by specifying the job queue name or the job description name in the STRBKUBRM or the MONSWABRM commands. Note Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 LIBA *DFTACT *NO *SYNCLIB OPER01 20 LIBB *DFTACT *NO *SYNCLIB *LIB 40 LIBC *DFTACT *NO *SYNCLIB *LIB 50 LIBD *DFTACT *NO *SYNCLIB *LIB Chapter 4. Managing BRMS/400 81 2 of 4 libraries processed. Started LIBC at 02/03/01 10:20:07. 3 of 4 libraries processed. Started LIBD at 02/03/01 10:20:08. Now completing save-while-active checkpoint processing. Save-while-active checkpoint processing complete. BRMS/400 uses the first message queue to monitor for the synchronization. Even if you were to specify OPER02, OPER03, and OPER04 as the message queues for LIBB, LIBC, and LIBD, the save-while-active synchronization goes to message queue OPER01 as previously shown. If you require synchronization messages to go to different message queues, you must separate your control group entries for libraries by using operations such as *EXIT or *LOAD. BRMS/400 also separates the library groups if it detects a change of value in the Retain Object Detail, Weekly Activity, or the Save While Active field. 4.3.5.4 Example 4 The MONSWABRM command is not used in this example. The objective here is to use multiple message queues to monitor for save-while-active synchronization. See Figure 50 on page 82. At the time this redbook was written, BRMS/400 required the message queue to exist in QUSRBRM when processing the save. If you are using the MONSWABRM command, you do not have to create any message queues. If you are not using the MONSWABRM command, you must ensure that you create a message queue in the QUSRBRM library with the same name as that value you specified in the SWA Message Queue field. This implementation is being enhanced so that BRMS/400 now looks at the QUSRBRM library first for the message queue. If it cannot find the message queue in the QUSRBRM library, it searches your library list. With these enhancements, you can specify QSYSOPR as the message queue for receiving synchronization messages. The availability for this functional enhancement can be tracked by reviewing APAR SA61101. This APAR is updated with the PTF information when the PTFs are released. Until the PTF is available and applied, you must create a message queue in the QUSRBRM library to match the message queue value you used in the control group. Use the Create Message Queue (CRTMSGQ) command to create a message queue such as SWAMSGQ. Important 82 Backup Recovery and Media Services for OS/400 Figure 50. Save-while-active example 4 In the example in Figure 50, BRMS/400 performs three save operations. The first save operation saves LIBA and LIBB and sends the synchronization message to OPER01 message queue. 
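Because no MONSWABRM command is used in this example, the OPER01, OPER02, and OPER03 queues named in the SWA Message Queue column must already exist in library QUSRBRM when the control group runs (until the PTF for APAR SA61101 described above is available). A minimal sketch of creating them:

   CRTMSGQ MSGQ(QUSRBRM/OPER01) TEXT('Save-while-active checkpoint messages')
   CRTMSGQ MSGQ(QUSRBRM/OPER02) TEXT('Save-while-active checkpoint messages')
   CRTMSGQ MSGQ(QUSRBRM/OPER03) TEXT('Save-while-active checkpoint messages')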
BRMS/400 processes LIBC and sends the synchronization message to OPER02 message queue. LIBD and LIBE libraries are processed last and a single synchronization message is sent to the OPER03 message queue. In all, BRMS/400 performs three separate save operations for this control group. You see that in the preceding example, we did not use an *EXIT or a *LOAD operation to separate the saves in the control group. BRMS/400 automatically issues another save operation when it detects a change in the way you want to perform your save-while-active operation. In this example, it detected a change in the Save-while-active field. 4.3.5.5 Example 5 In the example shown in Figure 51, we use special values, such as *ALLPROD or *ALLTEST, in the MONSWABRM command with the save-while-active function. Figure 51. Save-while-active example 5 The exits have the following settings for the MONSWABRM command: 20 - MONSWABRM LIB(PRODMSGQ) CMD(SNDMSG MSG('All production libraries are synchronized.') TOUSR(*SYSOPR)) 40 - MONSWABRM LIB(TESTMSGQ) CMD(SNDMSG MSG('All test libraries are been synchronized.') TOUSR(*SYSOPR)) When the *ALLPROD set of libraries reaches a synchronization point, the system sends the message to PRODMSGQ. PRODMSGQ is monitored by the MONSWABRM command. As soon as it receives the message from the system, it processes the command specified in the CMD value. The control group goes on to process the *ALLTEST libraries and performs similar tasks as for *ALLPROD processing. Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 LIBA *DFTACT *NO *SYNCLIB OPER01 20 LIBB *DFTACT *NO *SYNCLIB *LIB 30 LIBC *DFTACT *NO *LIB OPER02 40 LIBD *DFTACT *NO *SYNCLIB OPER03 50 LIBE *DFTACT *NO *SYNCLIB *LIB Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 *EXIT *DFTACT 20 *EXIT *DFTACT 30 *ALLPROD *DFTACT *NO *SYNCLIB PRODMSGQ 40 *EXIT *DFTACT 50 *ALLTEST *DFTACT *NO *SYNCLIB TESTMSGQ Chapter 4. Managing BRMS/400 83 The advantage of using this approach, where the SWA Message Queue value matches with the LIB value on the MONSWABRM command, is that you do not have to remember the name of the first library that appears in your list. You also do not need to know how BRMS/400 builds the list of libraries for save processing. This list may not always appear in alphabetical sequence when BRMS/400 is performing an incremental save. For example, you have libraries APROD, BPROD, and CPROD as your production libraries. You know that BRMS/400 always uses the first library in sequence to check for save-while-active messages. Your MONSWABRM command contains APROD for the LIB value, and the control group defaults to *LIB for SWA Message Queue. You already performed a full save on Sunday. Between the full save and the next incremental save, you created a new library called AAPROD and have not updated the exit for the MONSWABRM command. When you process the control group on Monday for incremental saves, BRMS/400 looks at the last save date and time for all of the libraries and builds a list of libraries for the save operation. This list has AAPROD ahead of APROD library. Thus, your MONSWABRM command does not receive any save-while-active messages to libraries reaching synchronization. Therefore, we recommend that you specify a name of a message queue in the SWA Message Queue field and use the same name for the LIB value in the MONSWABRM command. 
This always ensures that you get synchronization messages, regardless of how BRMS/400 builds a list of libraries for save processing. Remember that the message queue is in the QUSRBRM library. This message queue is locked by the save-while-active operation and, therefore, cannot be saved. When you use the save-while-active function to save *ALLPROD or *ALLUSR (not recommended), you see the CPF3761 message in the BRMS log indicating that the save operation cannot use the message queue you are monitoring in library QUSRBRM. You also see the CPI3711 message in the message queue in library QUSRBRM (such as PRODMSGQ in our example) as follows: Message . . . . : Save-while-active request ended abnormally on library QUSRBRM. Cause . . . . . : The save-while-active request ended abnormally on library QUSRBRM. Libraries following this library were not saved. Press F10 or use the Display Job Log (DSPJOBLOG) command to see any previously listed messages in the job log. Correct the errors and try the request again. This is normal. BRMS/400 saves library QUSRBRM at the end of the save operation, so there are no libraries that require to be saved after the QUSRBRM library. Hint 84 Backup Recovery and Media Services for OS/400 4.4 Saving spooled files using BRMS/400 Within BRMS/400, you create a backup list to specify the output queues that you want to save using the backup control groups. Figure 52 shows how you can create a spooled file backup list. Figure 52. Including and excluding spooled file entries in backup list Within a single spooled file list, you can add multiple output queues that you want to save by selecting multiple sequence numbers. When you add the output queues, you can select the type of spooled file names, job names, or user names that you want to save. For example, if you only want to save spooled files that belong to USERA, you can specify the name of this user in the User field. You can also select generic names or job names. This example saves output queue SAVEOUTQ from library QGPL. You can leave the OUTQ default to *ALL. In this case, BRMS/400 saves all spooled files from all output queues from the QGPL library. If you want to omit an output queue, you can use the *EXC value to exclude it. We strongly recommend that you avoid using the *ALLUSR value for save-while-active processing because of the additional performance impact. OS/400 does not allow SAVLIB LIB(*ALLUSR) or SAVLIB(*IBM) when using the *SYNCLIB function. The *ALLUSR value is only supported when you use the SAVCHGOBJ command. See Backup and Recovery - Advanced, SC41-4305, for additional information. These OS/400 restrictions also apply to BRMS/400. Note Change Spooled File List SYSTEM09 Use . . . . . . . . . : *BKU List name . . . . . . : SAVESPLF Text . . . . . . . . . Output Queue to be Saved by BRMS Type choices, press Enter. *INC/ Seq Library Outq File Job User User data *EXC ___ __________ __________ __________ __________ __________ __________ ____ 10 QGPL SAVEOUTQ *ALL *ALL *ALL *ALL *INC Chapter 4. Managing BRMS/400 85 Once you have set up a backup list, you can add this list to your daily, weekly, or monthly backup control group as a backup item and a list type of *SPL. BRMS/400 automatically saves the spooled files whenever the control group is processed for backups. Figure 53 shows a backup control group especially created to save spooled files using the backup list that was created earlier. Figure 53. 
Backup list SAVESPLF Once you have successfully saved the spooled files, you can use the Work with Spooled Files for BRM (WRKSPLFBRM) command to display the status of your saves. You see that your spooled files are organized in the date and time order in which they were created on the system (Figure 54). Figure 54. Work with Saved Spooled Files (WRKSPLFBRM) Incremental saves of spooled files are not supported. If you specify an incremental save for an *SPL list type, all spooled files in the list are saved. When the spooled files are successfully saved to a save file or to a tape media, BRMS/400 does not automatically clear the output queue. You have to manage how you want to clear data from your output queues. We recommend that you obtain a hardcopy of your output queue immediately after the BRMS/400 save is completed for audit purposes. Use the Work with Output Queue (WRKOUTQ) command with the OUTPUT(*PRINT) option. Note Edit Backup Control Group Entries SYSTEM09 Group . . . . . . . . . . : BKUSPLF Default activity . . . . . FFFFFFF Text . . . . . . . . . . . Control Group to Backup Spooled Files Type information, press Enter. Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue 10 SAVESPLF *SPL *DFTACT Work with Saved Spooled Files SYSTEM09 Position to date . . . . Type options, press Enter. 4=Remove 5=Display 6=Work with media 7=Restore spooled file Opt Library Outq File Job User Date Time QGPL SAVEOUTQ QP1AVER DSP08 USER103D 5/30/00 13:15:55 QGPL SAVEOUTQ QP1AEP DSP08 USER103D 5/30/00 13:16:01 QGPL SAVEOUTQ QP1AVMS DSP08 USER103D 5/30/00 13:16:19 QGPL SAVEOUTQ QP1AMM DSP08 USER103D 5/30/00 13:16:40 QGPL SAVEOUTQ QP1AHS DSP08 USER103D 5/30/00 13:16:46 QGPL SAVEOUTQ QP1ALE DSP08 USER103D 5/30/00 13:16:49 QGPL SAVEOUTQ QP1ARCY DSP08 USER103D 5/30/00 13:19:23 86 Backup Recovery and Media Services for OS/400 You need to use the WRKSPLFBRM command or the WRKMEDIBRM command to restore the spooled files. From the Work with Saved Spooled Files display, select option 7 to restore the spooled files that you want to recover. This takes you to the Select Recovery Items display (Figure 55). Figure 55. Select Recovery Items By default, BRMS/400 restores your spooled data into the output queue from which it was saved. You may override the defaults by selecting function key F9 from the Select Recovery Items display to change the recovery defaults. During the save and restore, BRMS/400 retains the spooled file attributes, file name, user name, user data field, and, in most cases, the job name. OS/400 assigns new job numbers, a system date, and a time of the restore operation. The original date and time cannot be restored. Once you restore the output queue, you can use the WRKOUTQ command with OPTION(*PRINT) to spool the contents of the output queue. You can use this report to compare with the original report that you produced after saving the output queue. BRMS/400 does not automatically create the spooled files that you saved when you restore your user data on your system. You have to recover your spooled files using the WRKSPLFBRM command and perform the appropriate actions to restore the spooled files. Important Select Recovery Items SYSTEM09 Type options, press Enter. Press F16 to select all. 1=Select 4=Remove 5=Display .............................................................................. : : : Restore Command Defaults : : : : Type information, press Enter. : : Device . . . . . . . . . . . . . . 
*MEDCLS Name, *MEDCLS : : End of tape option . . . . . . . . *REWIND *REWIND, *LEAVE, *UNLOAD : : Restore to output queue . . . . . . *SAVOUTQ Name, *SAVOUTQ : : Library . . . . . . . . . . . . . ________ Name, *LIBL : : : : F12=Cancel : : : : : :............................................................................: 1 QGPL SAVEOUTQ QP1ALE DSP08 USER103D *SAVF 1 QGPL SAVEOUTQ QP1ARCY DSP08 USER103D *SAVF More.... F3=Exit F5=Refresh F9=Recovery defaults F12=Cancel F14=Submit to batch F16=Select all Chapter 4. Managing BRMS/400 87 4.5 BRMS/400 console monitor In BRMS/400 V3R2 and later, a SAVSYS can be done in unattended mode from the system console using the console monitor. Console monitor puts the system console in a monitored state, but you can suspend the console to enter OS/400 commands and put the console back to a monitor state. 4.5.1 Console monitor function The goal of console monitoring is to allow the users to submit the SAVSYS job to batch instead of doing it interactively. Previously, SAVSYS, SAVSYSBRM, or STRBKUBRM with *SAVSYS required interactive processing. Now, there is a new option in the STRBKUBRM command. The Submit to Batch option allows you to enter *CONSOLE as a parameter. It also allows you to perform your saves in batch mode. You no longer need to be in the machine room or have an attended environment to perform a system save. However, you must start the console monitoring function on the system console prior to leaving the machine to operate in unattended mode. You can do this by selecting option 2 (Backup) from the BRMS main menu and selecting option 4 (Start Console monitor) from the BRMBKU menu. See Figure 56 on page 88 for details on the STRBKUBRM command and the optional parameters. For example, if you schedule the STRBKUBRM SUBMIT(*CONSOLE) command to run on Sunday at 2:00 a.m., you have to start the console monitor on the system console before you leave your office. You must perform this on the system console because it requires the job to run in the QCTL subsystem. If you attempt to start the console monitor from your workstation, you receive the BRMS/400 BRM1947 error message: Not in a correct environment to start the console monitor. Internally, BRMS/400 saves the spooled files as a single folder, with multiple documents (spooled members) within that folder. During restore, it reads the tape label for the folder and restores all of the documents. If your spooled file save happens to span multiple tape volumes, you will be prompted to load the first tape to read the label information, before you restore the documents in the subsequent tapes. Therefore, we recommend that you plan to save your spooled files on a separate tape using the *LOAD exit in the control group, or split your spooled file saves so that you are only using one tape at a time. This approach will help you during your spooled file recovery. Note 88 Backup Recovery and Media Services for OS/400 Figure 56. Start console monitor option on the Backup menu If you are on the system console, you can start the console mode with option 4 from the BRMBKU menu. Once you start the console monitor, the console waits for a BRMS/400 command to be processed (Figure 57). Figure 57. Console Monitor active • Once you start console monitoring, the console waits for a BRMS/400 command to process. You can suspend the console to process commands. However, during this period, if BRMS/400 tries to start a backup using *CONSOLE, it is delayed until you finish your command and return to the monitoring status. 
If you forget to exit from the command line, BRMS/400 cannot process any backup group using the SUBMIT(*CONSOLE) parameter. If this situation occurs and you realize it the next day, do not end the command line immediately. If you do, your nightly BRMS/400 backup using *CONSOLE is processed. Since this is probably a SAVSYS (since console monitoring is mainly designed for SAVSYS backups), it ends all of your subsystems which is not what you may want the system to do. Therefore, before you end the command line entry on the console monitoring, you should invoke system request on DSP01 (in console monitoring mode) and select option 2 to cancel the previous request. This stops console monitoring. Once you restart the console monitoring, all of the previous requests are cleared so your previous SAVSYS does not restart. • In V3R6, there is nothing (other than physical access security) to stop a person from going to the console display and selecting PF3 to end console monitoring. Once ended, the console is still signed on with your user authority. You can prevent this from happening by securing the console monitor. See 4.5.2, “Securing the console monitor” on page 90. Hint BRMBKU Backup System: SYSTEM05 Select one of the following: 1. Backup planning 2. Perform backup 3. Display backup activity 4. Start console monitor Console Monitor Press F12 to cancel the monitor operation. Press F9 to access command line. Control must return to this display for BRMS activity to be monitored. Chapter 4. Managing BRMS/400 89 The use of the console monitor is provided by the special value *CONSOLE on the Submit Job (SBMJOB) parameter of the STRBKUBRM command (Figure 58). Figure 58. Submitting a system save to batch using the console monitor If you want to interrupt the console monitor, press F9 and enter your password. If you entered the correct password, a pop-up window is shown where you can enter OS/400 commands (Figure 59). Figure 59. Command line access from the console monitor When the console monitor is interrupted, any requests submitted through the console monitor are queued and not processed until you complete your command and return to the console monitoring status. If you forget to return from the command line, BRMS/400 does not process any queued backups that were submitted. Start Backup using BRM (STRBKUBRM) Type choices, press Enter. Control group . . . . . . . . . > SAVSYS *BKUGRP, *SYSGRP, SAVSYS... Schedule time . . . . . . . . . *IMMED hhmm, *IMMED Submit to batch . . . . . . . . *CONSOLE *CONSOLE, *YES, *NO Starting sequence: Number . . . . . . . . . . . . *FIRST 1-9999, *FIRST Library . . . . . . . . . . . *FIRST Name, *FIRST Append to media . . . . . . . . *CTLGRPATR *CTLGRPATR, *BKUPCY, *YES... Job description . . . . . . . . *USRPRF Name, *USRPRF Library . . . . . . . . . . . Name, *LIBL, *CURLIB Job queue . . . . . . . . . . . *JOBD Name, *JOBD Library . . . . . . . . . . . Name, *LIBL, *CURLIB Console Monitor Security Press F12 to cancel the access command line function. Type choice, press Enter. Current user ID . . . . . . . . . . . . . CONSOLE Enter password to verify . . . . . . . . . Current password .............................................................................. 
: Command : : : : ===> : : F4=Prompt F9=Retrieve F12=Cancel : : : :............................................................................: 90 Backup Recovery and Media Services for OS/400 4.5.2 Securing the console monitor Once you start the console monitor, your password is required before the monitor suspends itself to provide a command line. In V3R6, there is no password to end the console monitor. Once it is ended, the console is again fully available just as it was before you selected the console monitor option from the BRMS/400 backup menu. To avoid this security exposure, you should create a new user profile (for example, CONSOLE) that has QBRM as the current library, calls the console monitor program (Q1ACCON) as its initial program, and uses the *SIGNOFF menu as its initial menu (Figure 60). Figure 60. Initial program to secure the console monitor Signing on at the system console with this user profile starts the console monitor. You can use F9 to enter commands on this display only if you enter the CONSOLE profile password. Any attempt to end the console monitor results in a sign off. 4.5.3 Monitoring the console monitor BRMS/400 logs the following messages that help monitor the console monitor: BRM1948 'BRMS Console monitoring is now started' when you start the console monitoring BRM1950 'BRMS Console monitoring is inactive' when you use the command line entry (PF9) BRM1954 'BRMS Console monitoring is now ending' when you quit the console monitoring (PF3) 4.5.4 Canceling the console monitor If you want to end the console monitor, use the F3 or F12 key. In V3R6, there is no password required to end this function. You return to where you were before you selected the console monitor. In V3R2, if you exit the console monitor with F3 or F12, the Console Monitor Exit display is shown to enter a password (Figure 61). Create User Profile (CRTUSRPRF) Type choices, press Enter. User profile . . . . . . . . . . > CONSOLE Name User password . . . . . . . . . Name, *USRPRF, *NONE Set password to expired . . . . *NO *NO, *YES Status . . . . . . . . . . . . . *ENABLED *ENABLED, *DISABLED User class . . . . . . . . . . . *SECOFR *USER, *SYSOPR, *PGMR... Assistance level . . . . . . . . *SYSVAL *SYSVAL, *BASIC, *INTERMED... Current library . . . . . . . . QBRM Name, *CRTDFT Initial program to call . . . . Q1ACCON Name, *NONE Library . . . . . . . . . . . QBRM Name, *LIBL, *CURLIB Initial menu . . . . . . . . . . *SIGNOFF Name, *SIGNOFF Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Limit capabilities . . . . . . . *NO *NO, *PARTIAL, *YES Text 'description' . . . . . . . . BRMS/400 Console Monitor Profile Chapter 4. Managing BRMS/400 91 Figure 61. Console Monitor Exit 4.6 Job scheduling and BRMS/400 Many of the functions performed by BRMS/400 are well suited to run under the control of a job scheduler (for example, scheduling a backup when nightly processing has completed, or scheduling the MOVMEDBRM and STRMNTBRM commands across a network). With the Console Monitor function, you can now also schedule an unattended system save. 4.6.1 Using the OS/400 job scheduler BRMS/400 provides a direct interface to the OS/400 job scheduler to process both backup and archive control groups (Figure 62). Figure 62. Work with Backup Control Groups display You can add a control group to the schedule by entering 6 in the Opt column for the relevant control group 1. 
You may also enter 6 in the option column of the first line of the display and the name of the control group in the 2 Control Group field. This takes you to the OS/400 Add Job Schedule Entry display as shown in Figure 63 on page 92, where BRMS/400 automatically completes the job name and command to run fields. You should enter scheduling details in the lower half of the display and any additional parameters (F10=Additional parameters) not shown on the initial display. Console Monitor Exit Press F12 to cancel the exit console monitor function. Type choice, press Enter. Current user ID . . . . . . . . . . . . CONSOLE Enter password to verify . . . . . . . . Current password Work with Backup Control Groups SYSTEM01 Position to . . . . . . Starting characters Type options, press Enter 1=Create 2=Edit entries 3=Copy 4=Delete 5=Display 6=Add to schedule 8=Change attributes 9=Subsystems to process ... Full Incr Weekly Control Media Media Activity Opt Group Policy Policy SMTWTFS Text 2 6 SAVFIFS *BKUGRP *BKUPCY *BKUPCY *BKUPCY Entry created by BRM configura *SYSGRP SAVSYS SAVSYS *BKUPCY Entry created by BRM configura DUPTAP01 FULLR FULLR FFFFFFF DUPTAP01 SAVE to REEL 6250 EDELM09 FULL INCR FFFFFFF Edelgard's SAVE DEREK FULL FULL FFFFFFF RDARS1 RDARS RDARS FFFFFFF DUPTAP01 SAVE to REEL 6250 1 6 SAVFIFS SAVFP SAVFP FFFFFFF Backup FSIOP with SAVF / SWA=* SAVFIFS2 SAVFP SAVFP FFFFFFF Backup FSIOP with SAVF / SWA=* 92 Backup Recovery and Media Services for OS/400 Figure 63. Add Job Schedule Entry display 4.6.2 Submitting jobs to the OS/400 job scheduler You can choose to add your own BRMS/400 jobs to the OS/400 scheduler using the ADDJOBSCDE command. BRMS/400 searches the Command to run character string for “BRM” for jobs to include in the Work with BRM Job Schedule Entries display. Although most BRMS/400 commands have “BRM” as a suffix, some do not, and these do not appear unless you use the QBRM library qualification. Only those jobs that do not generate an interactive display can be submitted to a job scheduler. This precludes scheduling recovery with the STRRCYBRM command, but allows you to schedule the recovery report. 4.6.3 Working with scheduled jobs To work with the BRMS/400 jobs that have already been added to the scheduler, press F7 on the Work with Backup Control Groups display (see Figure 62 on page 91). This takes you to the Work with BRM Job Schedule Entries display shown in Figure 64. Figure 64. Work with BRM Job Schedule Entries The Work with BRM Job Schedule Entries display allows you to change, hold, remove, work with, or release scheduled jobs. It is similar to the OS/400 Work Add Job Schedule Entry (ADDJOBSCDE) Type choices, press Enter. Job name . . . . . . . . . . . . > QBRMBKUP Name, *JOBD Command to run . . . . . . . . . > STRBKUBRM CTLGRP(SAVFIFS) SBMJOB(*NO) Frequency . . . . . . . . . . . > *WEEKLY *ONCE, *WEEKLY, *MONTHLY Schedule date, or . . . . . . . > *NONE Date, *CURRENT, *MONTHSTR... Schedule day . . . . . . . . . . > *ALL *NONE, *ALL, *MON, *TUE... + for more values Schedule time . . . . . . . . . > '00:01' Time, *CURRENT Work with BRM Job Schedule Entries SYSTEM01 Type options, press Enter. 2=Change 3=Hold 4=Remove 5=Work with 6=Release Next -----Schedule------ Recovery Submit Opt Job Status Date Time Frequency Action Date BRMSAVE SCD *NONE 23:30:00 *WEEKLY *SBMRLS 05/06/00 QBRMBKU SCD *ALL 22:00:00 *ONCE *NOSBM 06/06/00 Chapter 4. Managing BRMS/400 93 with Job Schedule Entries display, but does not allow all options. 
You may, however, add a new job to the schedule by using F6 (Add). If you choose option 4 (Remove) in the Work with BRM Job Schedule Entries display (Figure 64), a confirmation display is not shown. Your selected entries are removed immediately. You can also access scheduled jobs from the BRMS/400 Scheduling menu. Option 1 on the BRMS Scheduling menu shows all BRMS/400 scheduled jobs (including those added manually to the scheduler) as shown in Figure 64. Option 2 shows all scheduled jobs. 4.6.4 Using BRMS/400 commands in job scheduler for OS/400 An alternative job scheduler can easily be used with BRMS/400 commands. You are responsible for adding the BRMS/400 commands to your chosen job scheduler. Appendix A in Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) has a full list of BRMS/400 commands. Remember, only those commands that do not generate an interactive display can be submitted to a job scheduler. Job scheduler for OS/400 already works with BRMS/400, and BRMS/400 allows you to tailor its functions so that job scheduler for OS/400 commands are automatically invoked when certain BRMS/400 options are selected. Use option 3 (Change Job Scheduler) from the BRMS scheduling menu or the Change Job Scheduler (CHGSCDBRM) command with a prompt. You are prompted with the display shown in Figure 65. Figure 65. Changing job scheduler in BRMS/400 In V3R2 and V3R7, the *IJS option was added for the Scheduler type parameter on the CHGSCDBRM command. If you are using job scheduler for OS/400 and are happy with the BRMS/400 defaults, you should choose this option. No further options are shown. If you are using a release other than V3R2 or V3R7, or if you are using a non-IBM scheduler, you should use the *USRDFN value. There are three parameters where you can define non-IBM scheduler CL commands to be executed. For each parameter, you can also specify whether you want to be prompted for the command at execution time. The following three parameters correspond to the BRMS/400 functions: • Add a job: Option 6, Add to schedule from the Work with Control Groups displays. • List jobs: Option 2, Work with all scheduled jobs from the BRMS/400 Scheduling menu. • Select jobs: Option 1, Work with all BRM scheduled jobs from the BRMS/400 Scheduling menu or F7 from the Work with Control Groups displays. Change Job Scheduler (CHGSCDBRM) Type choices, press Enter. Scheduler type . . . . . . . . . *USRDFN *SYSTEM, *IJS, *USRDFN 94 Backup Recovery and Media Services for OS/400 There are four substitution variables that can be specified in any of the command strings used on the parameters of the CHGSCDBRM command as previously described. BRMS/400 passes information to these four substitution variables depending on what BRMS/400 function is being used. The four variables are: • &JOBNAME: A BRMS/400 identifier assigned to every job: QBRMBKUP. • &REQUEST: The full BRMS/400 command to be submitted to the scheduler: STRBKUBRM or STRARCBRM with parameters (if applicable). • &APPL: Always contains “BRMS”. This can be used to assist a non-IBM scheduler locate jobs by an application code if they support this function. • &GROUP: Control group name (if applicable). Not all variables apply in each case. If the variable name is not relevant, an asterisk (*) is placed in the variable (Figure 66). Figure 66. Change Job Scheduler Before you can use &APPL, which contains “BRMS”, you need to set up the application in job scheduler for OS/400. 
You do this by selecting option 4 (Job Controls) from the main job scheduler for OS/400 menu and option 6 (Work with Applications). Figure 67 and Figure 68 show the prompts for creating the application BRMS. You are asked for contact names and other information, but you can create these by drilling down (the same as with BRMS/400). Change Job Scheduler (CHGSCDBRM) Type choices, press Enter. Scheduler type . . . . . . . . . *USRDFN *SYSTEM, *IJS, *USRDFN Add a job command . . . . . . . 'ADDJOBJS JOB(&JOBNAME) APP(&APPL) SCDCDE(*D AILY) TIME(2400) CMD(&REQUEST)' Command prompt for add . . . . . *YES *NO, *YES List jobs command . . . . . . . 'WRKJOBJS' Command prompt for list . . . . *NO *NO, *YES Select jobs command . . . . . . 'WRKJOBJS APP(&APPL)' Command prompt for select . . . *NO *NO, *YES Chapter 4. Managing BRMS/400 95 Figure 67. Work with Applications Figure 68. Adding a BRMS application to job scheduler for OS/400 4.6.5 Weekly activity and job scheduling You should be careful when specifying control groups if you intend to schedule backup and archiving. In a control group, you can specify an action to happen on specific days of the week. However, if there is a delay that causes your job to run later than expected, the control group may take a different action. Consider the example shown in Figure 69. Figure 69. Sample backup control group entries Work with Applications SYSTEM01 Position to . . . . . . Starting characters Type options, press Enter. 1=Add 2=Change 3=Copy 4=Remove 5=Display 6=Work with jobs 7=Hold application jobs 8=Release application jobs 9=Change application information Opt Application Text 1 BRMS Add Application SYSTEM01 Type information, press Enter. Application . . . . . . . . . . . . BRMS Application contact one . . . . . . Application contact two . . . . . . Application contact three . . ........................................ Application contact four . . : Select Application Contact : Application contact five . . : : Text . . . . . . . . . . . . : Type options, press Enter. : : 1=Select : : Opt Application contact : : 1 Derek McBryde : : Sejal Dave : : Genyphyr Novak : : Debbie Saugen : : : : Bottom : : F9=Work with application contacts : : F12=Cancel : : : F3=Exit F4=Prompt F12=Ca :......................................: Weekly Retain Save Backup List Activity Object While Seq Items Type SMTWTFS Detail Active 10 PGMLIB F *NO *NO 20 FILELIB* FIIIIII *NO *NO 96 Backup Recovery and Media Services for OS/400 Suppose the control group shown in Figure 69 on page 95 is scheduled to run each evening at 23:00. The job scheduler submits the backup job at 23:00 to the same job queue as the month-end batch job. On Saturday, the month-end job overruns and does not complete before midnight. The backup job, therefore, does not run until after midnight, which is on Sunday in our scenario. BRMS/400 looks at the weekly activity and can: • Do a full backup of the FILELIB* libraries. • Not save PGMLIB. To add to this, when the scheduler submits the control group to run again at 23:00 on Sunday evening, another full backup of the FILELIB* libraries is taken. If these saves are to save files, you can experience space problems. If you are saving to tape, you can run out of tapes. © Copyright IBM Corp. 1997, 2001 97 Chapter 5. BRMS/400 networking This chapter looks at how AS/400 systems with BRMS/400 can participate in a BRMS/400 network. It only covers networking AS/400 systems that have V3R2, V3R6, or V3R7 of BRMS/400 installed. 
Where appropriate, it includes information on the BRMS/400 V3R1 release. 5.1 Overview of BRMS/400 network By grouping multiple AS/400 systems in a BRMS/400 network group, you can share BRMS/400 policies, media information, devices, and storage locations across the network group. This allows you to manage the backup and archiving of all your AS/400 systems in a consistent manner, as well as optimizing the use of your media and media devices. Each AS/400 system that is a member of a network group receives updates to the media inventory, regardless of which network member makes the change. Therefore, if you have a network of four AS/400 systems (SYSTEM01, SYSTEM02, SYSTEM03, and SYSTEM04), and you add a media volume (A001) on SYSTEM01, the information about this new volume is propagated on all other systems. Information shared between systems in the shared media inventory environment includes: • Media inventory • Media class • Media policy • Container inventory • Container class • Move policy • Network group • Storage location • Duplication cross reference Before you set up your network, it is extremely important that you have installed on your system the PTF related to the enhancements made to the Copy Media Information using BRM (CPYMEDIBRM) command. The CPYMEDIBRM command copies media inventory information to a work file or copies the contents of the work file to the media inventory. The actual usage of this command in a BRMS/400 network group is discussed later in this chapter. Based on the version and release you are using, you should have the following PTF installed for your version and release: • V3R1 - SF34449 • V3R2 - SF34452 • V3R6 - SF34453 • V3R7 - SF34454 With the PTF applied, the CPYMEDIBRM command saves the following information: • Containers, container classes, move policies, move policy rules, and locations are now included in the *TOFILE function. • If they do not already exist, containers, container classes, move policies, move policy rules, and locations are added to the *FROMFILE functions. This 98 Backup Recovery and Media Services for OS/400 allows volumes to be added in the media file that were rejected in the past due to this information not being available. • Volumes are now stamped with “CPYMEDIBRM” as the job name when they are added to the media file. • All time stamps are updated for all records written so that additions are synchronized in a network environment. • All added volumes are registered to the new system, if available. • History information is no longer deleted from the CPYMEDIBRM file. • History information is added for any volumes added by a CPYMEDIBRM command. • Files created with the CPYMEDIBRM OPTION(*TOFILE) command prior to applying this PTF are supported as before. You should also ensure that you have the latest BRMS/400 PTFs applied on your system. 5.2 How shared media inventory synchronization works Assume that you have SYSTEM01, SYSTEM02, and SYSTEM03 in your network (independent of whether the link is APPC or APPN). When your BRMS/400 network is set up, you see that the Q1ABRMNET subsystem is started on all of the AS/400 systems that are participating in the BRMS/400 network. See Figure 70. Note: The subsystem descriptions, job descriptions, and the job queue that BRMS/400 uses are stored in the QBRM library. Media and history records that are added have the system name changed to the new system name with the *FROMFILE function. The *TOFILE function copies the media and history records owned by the current system. Note Chapter 5. 
BRMS/400 networking 99 Figure 70. BRMS/400 synchronization process BRMS/400 uses the following process to update data across the network. BRMS/400 journals the files containing the shared resources. These files are QA1AMM for the media and QA1A1RMT for the systems in the network group. When SYSTEM01 updates media, a policy, or any shared resources, an entry is logged in a BRMS/400 journal QJ1ACM in the QUSRBRM library. BRMS/400 captures both before images and after images in the journal receiver for any changes that are made related to media inventory on the systems in the network. However, only the after images are used to update the shared media inventory. The Q1ABRMNET subsystem starts an autostart job called QBRMNET, which calls a CL program Q1ACNET. This job uses a job description of Q1ACNETJD in the QBRM library. The Q1ACNET program periodically monitors for journal entries that arrive in the QJ1ACM journal and performs the following tasks: 1. The Q1ACNET CL program calls Q1ARNET when the wait time has expired. The Q1ARNET program reads the QR1ANE data area in the QUSRBRM library for the last journal entry it processed and checks journal QJ1ACM to see if there are any new journal entries. If there are new journal entries, Q1ARNET pulls the journal entry, adds a system name and network identifier to it, and for each update in the journal receiver, it creates a new record for each system in the network group (except for the system where the update was made). This data is written in the QA1ANET file. BRMS/400 obtains information about the systems that are in the network group from the QA1A1RMT file in the QUSRBRM library. The Q1ARNET program updates the data areas after each record is processed. The Q1ARNET program also creates a record in the QA1A2NET file in the QUSRBRM library for each file and system reflected in the journal entries. On SYSTEM01 CHGMEDBRM VOL(A001) QA1AMM file updates Creates a new record QJ1ACM journal Q1ACNETl Q1A1ARMT file contents: SYSTEM01 SYSTEM02 SYSTEM03 Reads Q1A1ARMT file contents: SYSTEM02 A001 SYSTEM03 A001 Submits Writes QBRMSYNC job in Q1ABRMNET sbs Reads then deletes if successful Job in SBS Q1ABRMNET (default) Opens remote QA1AMM file Checks A001 date/time stamp Updates if SYSTEM01 record newer Job in SBS Q1ABRMNET (default) Opens remote QA1AMM file Checks A001 date/time stamp Updates if SYSTEM01 record newer DDM link with SYSTEM03 100 Backup Recovery and Media Services for OS/400 In our example shown in Figure 70 on page 99, there are three systems in the network group. When we make updates to SYSTEM01, the Q1ACNET program creates two entries in the QA1ANET file referring to the updates that need to be sent to the remaining two systems (SYSTEM02 and SYSTEM03) that are participating in the BRMS/400 network. 2. At regular intervals, the Q1ACNET program in subsystem Q1ABRMNET checks to determine if media activity has occurred that should be transferred to other systems in the network group. When there is data in the QA1ANET file, it submits the QBRMSYNC job through the Q1ABRMNET job queue. The QBRMSYNC job uses a job description of QBRMSYNC and calls the Q1ACSYN program. Using QA1A2NET as a key, records are read from the QA1ANET file. A Distributed Data Management (DDM) link is established with the remote system to update the corresponding file on the remote system. The DDM files can be recognized in the QTEMP library because they have the name QA1A--D, where “--” refers to the file name such as QA1AMMD for media inventory. 
The suffix of “D” indicates that it is a DDM file. – Before performing the update, it first checks the date and time stamp of the record to be updated with the date and time stamp of the update itself. – If the update has an older time stamp, the update request is rejected. Once this update is done, Q1ACSYN deletes the record from the QA1ANET file and reads the next record until all of the records have been processed. The QBRMSYNC job ends when the QA1ANET file is empty. If you have any doubt that this process is not working satisfactorily, you can display the QA1ANET file to see if it contains any records. If the number of records is not zero, or is not decreasing, you may have a problem with the network. Check that there are no messages on the QSYSOPR message queue on all of the networked systems. You also need to check that: • Subsystem Q1ABRMNET is started. • Job queue Q1ABRMNET is released. • APPC controllers are varied on. • QBRMS user profile is not in the *DISABLED state. Note: BRMS/400 always attempts to go through the Q1ABRMNET subsystem first for network synchronization tasks. This subsystem has a default communications entry using the QBRM mode. We recommend that you do not create your own subsystem descriptions for synchronizing the BRMS/400 network. See 5.3.1, “Network security considerations” on page 101, for additional information. The interval (or delay) value used to synchronize media information within a BRMS/400 netowk can be set between 30 and 9999 seconds using the Shared Inventory Delay parameter in the System Policy for V3R2, V3R6, or V3R7 systems. For V3R1 systems, this delay is fixed at 60 seconds and cannot be changed. Hint Chapter 5. BRMS/400 networking 101 5.3 Network communications for BRMS/400 As with many communication products, BRMS/400 also uses the default local location name LCLLOCNAME and not the system name SYSNAME. In most cases, the AS/400 systems have the same value specified in the LCLLOCNAME as in the SYSNAME. BRMS/400 also uses the local network identifier LCLNETID. Other network attributes have no effect on BRMS/400. These network values are defined in the network attributes and can be changed using the Change Network Attribute (CHGNETA) command. You can display the values using the Display Network Attribute (DSPNETA) command. If you are using APPN with auto configuration, communications between AS/400 systems should be relatively simple. If display station pass through works fine and you can use SNA distribution services (SNADS) successfully, there is every chance that BRMS/400 networking will also work. Also with APPN, and auto configuration enabled, you do not have to manually re-create the APPC controller and APPC device descriptions if you decide to change your system name or your network identifier. You can simply vary off and delete the old controller and device descriptions and allow APPN to automatically re-create the definitions for you. If you use APPC communications, you have to create your own APPC controllers and devices. You must ensure that you specify correct information regarding the remote system when creating the controller description. For example, the Remote network identifier, Remote Control point, and Remote System Name values relate to the remote system. You also need to ensure that you are using the QBRM mode for the Mode parameter on the APPC device description. The default for this value is *NETATR, which uses the BLANK mode description, and your BRMS/400 network will not work. 
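As an illustration only, an APPC device description that names the QBRM mode explicitly might be created as follows. The device, controller, and location names (SYSTEM01 and SYSTEM02) and the network identifier ITSCNET are examples and must be replaced with your own values:
CRTDEVAPPC DEVD(SYSTEM02) RMTLOCNAME(SYSTEM02) ONLINE(*YES) LCLLOCNAME(SYSTEM01) RMTNETID(ITSCNET) CTL(SYSTEM02) MODE(QBRM) APPN(*NO)
After the device is created, you can confirm the mode with the Display Device Description (DSPDEVD) command.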
With APPC, you also need to ensure that you change your APPC controller device descriptions if you decide to change the name of your network or the local location name at a future date. The reason you have to do this is because you cannot delete and allow the system to automatically re-create your definitions as in APPN. 5.3.1 Network security considerations Beginning with V3R2 and V3R7, the OS/400 security implementation has been significantly enhanced. One of the enhancements that affects BRMS/400 is the change to *PUBLIC authority for IBM-supplied libraries from *CHANGE to *USE. A new user profile called QBRMS is now created at OS/400 installation time for V3R2 and V3R7. BRMS/400 objects are now owned by this user profile. You need to understand the following information when you have a mixture of V3R1, V3R6, V3R2, and V3R7 in a BRMS/400 network: • For APPN networks, check to see whether you are using secured locations or non-secured locations for your network. You can do this by using the Work with Configuration List (WRKCFGL *APPNRMT) command. Check the Secure Loc value. Figure 71 on page 102 shows an example. If the secure location is set to *NO, you are using a non-secured network. If the secured location is set to *YES, you are using secured location network. For 102 Backup Recovery and Media Services for OS/400 additional information on APPN security, see AS/400 APPN Support, SC41-5407. Figure 71. Work with Configuration Lists • If you have a non-secured network, you do not need to do anything on your systems. All you need to ensure is that the QBRMS (for V3R2 and V3R7), QUSER, and QPGMR user profiles are not disabled. • If you are using a secured APPN network, ensure that the new system you are adding to the network is also configured as a secured location. At the same time, if you have a V3R1 or a V3R6 system already in the BRMS/400 network and you have now added a V3R2 or a V3R7 system to the network, you need to carry out some simple tasks to enable the media synchronization between V3R1/V3R6 to V3R2/V3R7. This is because you do not have a QBRMS profile on a V3R1 or a V3R6 system. If you do nothing when you change media information on a V3R1 or a V3R6 system, you will have a problem and will notice that the updates are not sent to the target system. 5.3.1.1 Problem description Let us first look at what happens and define the solution. For example, assume that you have a source system (SYSTEM01) that is on V3R1. You also have a target system (SYSTEM02) that is on V3R2. When any updates are made on SYSTEM01, you notice that they are not synchronized on the SYSTEM02. Observe the status of the QBRMSYNC job under the Q1ABRMNET subsystem using the Work with Active Jobs (WRKACTJOB) command as shown in Figure 72. Figure 72. WRKACTJOB display The QBRMSYNC job is in MSGW (message wait) status, indicating that it is waiting for a message to be answered. Type option 7 next to the job to see the message. You see that Q1ARSYN (synchronizing program) is unable to perform a WRITE I/O operation on the target system through DDM. The message you see is shown in Figure 73. Work with Configuration Lists Configuration list . . . . . . . . : QAPPNRMT Configuration list type . . . . . : *APPNRMT Text . . . . . . . . . . . . . . . 
: -------------------APPN Remote Locations-------------------- Remote Remote Control Remote Network Local Control Point Secure Location ID Location Point Net ID Loc SYSTEM01 APPN SYSTEM02 SYSTEM01 APPN *YES SYSTEM06 APPN SYSTEM02 SYSTEM06 APPN *YES SYSTEM07 APPN SYSTEM02 SYSTEM07 APPN *YES Opt Subsystem/Job User Type CPU % Function Status __ Q1ABRMNET QSYS SBS .0 DEQW __ QBRMNET QPGMR ASJ .0 PGM-Q1ACNET TIMW 7 QBRMSYNC QPGMR BCH .0 PGM-Q1ACSYN MSGW Chapter 5. BRMS/400 networking 103 Figure 73. Additional Message Information On the target system (SYSTEM02), which is at V3R2, you can use the Work with Configuration Status (WRKCFGSTS) command to observe the status of your APPC device and controller. You notice that a mode is attached under the device. The mode that BRMS/400 uses is QBRM. You also notice that the mode uses the QPGMR user profile rather than QBRMS. From the Work with Configuration Status display, type option 5 next to the QBRM mode to work with the job, followed by option 10 to see the job log. You can see the error condition in Figure 74. Figure 74. Additional Message Information As the message suggests, you must have the appropriate authority to access files in the QUSRBRM library on the target system. 5.3.1.2 Problem solution To resolve the problem with authorities, use the following steps on a V3R1 or V3R6 system: 1. End the Q1ABRMNET subsystem. 2. Create a user profile called QBRMS as follows: CRTUSRPRF USRPRF(QBRMS) PASSWORD(*NONE) TEXT('User Profile for BRMS') 3. Change the job description Q1ACNETJD in the QBRM library as follows: CHGJOBD JOBD(QBRM/Q1ACNETJD) USER(QBRMS) Additional Message Information Message ID . . . . . . : RPG1299 Severity . . . . . . . : 99 Message type . . . . . : Inquiry Date sent . . . . . . : 01/23/01 Time sent . . . . . . : 12:26:51 Message . . . . : CPF5134 I/O error was detected in QA1AMMFR (C G S D F). Cause . . . . . : The RPG program Q1ARSYN in library QBRM received the message CPF5134 at statement 35600 while doing WRITE I/O operation on file QA1AMMFR. Actual file is QA1AMMD.QTEMP MEMBER - QA1AMM. See the job log for a complete description of message CPF5134. Additional Message Information Message ID . . . . . . : CPF5134 Severity . . . . . . . : 50 Message type . . . . . : Escape Date sent . . . . . . : 01/23/01 Time sent . . . . . . : 12:25:31 Message . . . . : Not authorized to process request on member QA1AMM. Cause . . . . . : You do not have the correct authority to process your request on member QA1AMM file QA1AMM in library QUSRBRM. To process your request, you need the following authority to either the member or to the physical members under the logical member: -- *READ authority to read the records in member QA1AMM. -- *ADD authority to add records to member QA1AMM. -- *UPD authority to update the records in member QA1AMM. -- *DLT authority to delete the records in member QA1AMM. Recovery . . . : Get the necessary authority from either the security officer or the owner of the file. Then try your request again. 104 Backup Recovery and Media Services for OS/400 4. Start the Q1ABRMNET subsystem. 5.4 Adding systems to a network group BRMS/400 is delivered with a predefined network group named *MEDINV. When it is delivered, *MEDINV contains no entries for systems participating in the network group. Setting up the BRMS/400 network group is simple as long as you follow the steps. 
Although the steps are fairly easy, you should take every precaution to ensure that proper planning has taken place and that you fully understand the implications of adding and removing systems from the BRMS/400 network. Some of the planning considerations that you should be aware of are: • Ensure that you have a full backup of the QUSRBRM library on all of your AS/400 systems that you plan to place in the network group. The BRMS/400 network setup modifies some critical files in the QUSRBRM library. You may have to restore the QUSRBRM libraries to their original state if things do not work out. • Check with your Support Center to ensure that you are up-to-date with your PTFs for BRMS/400 and dependant PTFs for OS/400 and Licensed Internal Code. • Ensure that there is no BRMS/400 activity on the systems that you are planning to network within the network group. All BRMS activity must be stopped prior to starting the network connection. • If you already have BRMS/400 operational on individual systems, ensure that the operation is error free and that there are no outstanding issues with the normal operations. It is also important to sit down and think about volume names, media policies, containers, and classes. Duplicate volume names are not allowed within a shared media inventory. See 2.1.3, “Media naming convention” on page 8, for suggestions on how you should define a naming convention for your BRMS/400 volumes. • If you are adding a new system to a network group, make sure your media license covers the additional media. See 2.2.1, “Updating BRMS/400 license information” on page 13, for additional information. Figure 75 provides a high-level overview of the steps that you need to follow when setting up a BRMS/400 network. The example assumes that one system is at V3R6 (SYSTEM05), and the other system is at V3R2 (SYSTEM09). We want to add SYSTEM05 to the network using SYSTEM09 as the master system. Both systems currently have BRMS/400 fully operational and have their own media inventory. They both also have unique volume names. We also verified that the LCLLOCNAME is the same as the system name and that the LCLNETID on both systems is set to ITSCNET. Chapter 5. BRMS/400 networking 105 Figure 75. Overview of establishing a BRMS/400 network • For systems that are completely new, the process to add them into the existing BRMS/400 network group is extremely simple. This is because the new system does not yet have its own media inventory. This removes the requirement to run the CPYMEDIBRM command to save and later reload the media information. 5.4.1 Receiving media information Every AS/400 system in a BRMS/400 network group receives media inventory updates, regardless of which system makes the change. Beginning with V3R6 and V3R2, you can select to have the media content information updated also. You can use the Receive media information option on the Change Network Group display. You can set this parameter to be *LIB. See the circled text in Figure 77 on page 107. The default for this field is *NONE, which indicates that only media information is to be shared with this system. This functional enhancement is not available on V3R1. This means that when you select the option to display the contents of a particular media on a V3R1 system, and the media is actually owned by another system, the V3R1 system has to use DDM to obtain the information you require. 
This requires a communications link to be active when Overview of steps required for setting up the BRMS network Steps for SYSTEM09 Steps for SYSTEM05 ==================== ==================== 1. Save the QUSRBRM library. 2. Save library QUSRBM. 3. Vary on Line and Controller. 4. Vary on Line and Controller. 5. SYSTEM09 designated as master system. 6. Ensure no BRMS activity is in progress. 7. Ensure that no BRMS activity is in progress. 8. Type GO BRMSYSPCY and select the following options: 4 - Change Network Group 1 - Add SYSTEM05 to network group 9. WRKMEDBRM - if entries exist, issue CPYMEDIBRM OPTION(*TOFILE) 10.INZBRM OPTION(*NETSYS) FROMSYS(SYSTEM09) Reply I for messages that appear. 11.Check whether QDATE is correct. 12.Check whether QDATE is correct. 13.INZBRM OPTION(*NETTIME) 14. CPYMEDBRM OPTION(FROMFILE) 15.WRKMEDBRM to see results. 16.WRKMEDBRM to see the results. Notes: = Execute the step on SYSTEM05. 106 Backup Recovery and Media Services for OS/400 the DDM request is invoked. Beginning with V3R6 and V3R2, with the *LIB option, when you select option 13 (Display Contents) on the WRKMEDBRM display, the system does not use DDM to obtain this data from the owning system. If you have a failure on the owning system or in communications, you can use the media information that has been synchronized to build a recovery report for the system that failed. This local database can be used to recover objects belonging to another system. You can change the Receive media information field at any time, and depending on the number of media information records you have, the synchronization process may take a long time. Note: We, therefore, recommend that you do not change the Receive media information field frequently. Be careful when adding systems to an existing network, especially if the system you are trying to add has been outside the network for a long time and contains media information. You definitely do not want to propagate media files from the system that has been down for weeks to a system that has been in the network the entire time. In other words, you must not run the INZBRM command with *NETSYS on the system that was operational at all times to place the “new” system back in the network. You have to run the INZBRM command with *NETSYS on the system that was down for a long time (for example, an upgrade) pointing to a system that was operational at all times using the FROMSYS parameter. If you have a 3494 media library device attached to multiple AS/400 systems in a BRMS network, we recommend that you have the library names the same across all of the AS/400 systems. Once you set up a BRMS/400 network, it is important that you verify on a regular basis that the network is working for you. See 5.9, “Verifying the BRMS/400 network” on page 120, for additional information. Perform the following tasks to add SYSTEM05 to the BRMS/400 network: 1. Save the QUSRBRM library on SYSTEM09. 2. Save the QUSRBRM library on SYSTEM05. 3. Ensure that the communications link on SYSTEM09 for SYSTEM05 is active. Use the WRKCFGSTS command to determine the status for line, controller, and device description. 4. Ensure that the communications link on SYSTEM05 for SYSTEM09 is active. Use the WRKCFGSTS command to determine the status for line, controller, and device description. 5. Designate SYSTEM09 to be your “master” system. 6. Ensure that there is no BRMS/400 activity on SYSTEM09 when you are setting up a network group. 7. Ensure that there is no BRMS/400 activity on SYSTEM05. 8. 
On SYSTEM09, enter GO BRMSYSPCY to go to the System Policy menu. a. Select option 4 (Change Network Group). Press Enter. Chapter 5. BRMS/400 networking 107 b. Add SYSTEM05 on the Change Network Group display as shown in Figure 76. Figure 76. Adding a new system to the network c. Press Enter. BRMS/400 searches the network for the system name that you specified. Depending on your network configuration and the number of systems you have in the network, this can take a few minutes. When the system is found (in our example, SYSTEM05), it is added to *MEDINV (the BRMS/400 network group name). As shown in Figure 77, the display is refreshed with the entry for SYSTEM05 added to the network group. SYSTEM05 is shown as an inactive member of a network group and is not sharing media files with other active network systems in the group at present. To change the inactive status to active, media files must be copied to the system that is being added to the network group. The process to copy media files and media content information occurs in step 10 on page 108. Figure 77. SYSTEM05 added to the network group 9. On SYSTEM05, use the Work with Media (WRKMEDBRM) command to see if you have any media information. If media information is not present, go to step 10. Change Network Group SYSTEM09 ITSCNET Network group . . . . : *MEDINV Position to . . . . . Text . . . . . . . . . Centralized media network systems Receive media info . . *NONE *NONE, *LIB Type options, press Enter. 1=Add 4=Remove 8=Set time Remote Local Remote Receive Opt Location Name Network ID Media Info Status 1 SYSTEM05 ITSCNET (No entries found) Change Network Group SYSTEM09 ITSCNET Network group . . . . : *MEDINV Position to . . . . . Text . . . . . . . . . Centralized media network systems Receive media info . . *NONE *NONE, *LIB Type options, press Enter. 1=Add 4=Remove 8=Set time Remote Local Remote Receive Opt Location Name Network ID Media Info Status SYSTEM05 ITSCNET *NONE Inactive Bottom F3=Exit F5=Refresh F12=Cancel System SYSTEM05 network group ITSCNET added. 108 Backup Recovery and Media Services for OS/400 In our example on SYSTEM05, media information is already present since BRMS/400 is fully implemented. Use the Copy Media Information BRM command (CPYMEDIBRM) to save your media information as follows: CPYMEDIBRM OPTION(*TOFILE) This copies the contents of the media inventory file to a temporary file (QA1AMED) or a file name that you can designate. This temporary file is created in your Current library. Using the CPYMEDI parameter, you can also choose if you want to copy media information. The default is *NO and should be used unless you are planning on restoring media information to a non-networked system. Note: This step is not required if you have a new system with only BRMS/400 installed with no media information and you are planning to add the system to the BRMS/400 network. 10.You are now ready to synchronize SYSTEM09 with SYSTEM05. On SYSTEM05, enter the following command: INZBRM OPTION(*NETSYS) FROMSYS(SYSTEM09) The media management files on the inactive system (SYSTEM05) are cleared during the copy process and replaced with the network media management files. Before clearing the media management files, you are notified when the SYSTEM05 files are overwritten with files coming from SYSTEM09 as shown in Figure 78. Figure 78. 
Running INZBRM *NETSYS on SYSTEM05 The media management files that are copied to the inactive system are: • QA1AMM: Media inventory • QA1AMT: Media class attributes • QA1ACN: Container status inventory Display Program Messages Job 047122/A960103D/QPADEV0001 started on 05/31/00 at 09:15:55 in subsystem Entries exist for Media. (R I C) I Entries exist for Media policy. (R I C) I Entries exist for Media class. (R I C) I Entries exist for Location. (R I C) I Entries exist for Move policy. (R I C) Type reply, press Enter. Reply . . . i F3=Exit F12=Cancel Chapter 5. BRMS/400 networking 109 • QA1ACT: Container class • QA1ASL: Storage locations • QA1AMP: Move policies • QA1A1MP: Move policy entries • QA1AME: Media policy attributes • QA1ARMT: Network group • QA1A1RMT: Remote system name entries • QA1ADXR: Media duplication cross reference If you specified *LIB in the Receive media information field, media content information is synchronized to the system that you are adding. After the network media management files have been copied to the inactive system (SYSTEM05), the status of the inactive system is changed to active, and its media files are now the network media files. On SYSTEM05, select the option to ignore all of the messages by replying with I. These messages indicate that you are about to overwrite files on SYSTEM05. When the system is added to the network, several things happen. First, the media inventory files from the network are copied to SYSTEM05. Second, as shown in Figure 79 on page 110, an entry for SYSTEM09 is automatically created on SYSTEM05 with the status of Active. If you now check the entry for SYSTEM05 that was created on SYSTEM09, you see that this also has a status of Active. It is important to ensure that the user profile QBRMS is not in a *DISABLED state. Communication entries in the Q1ABRMNET subsystem use this user profile. If it is disabled, you cannot establish a DDM connection. During our tests, we noticed that the profile was disabled. A CPF4734 message was logged on the system operator’s messge queue indicating that an evoke function for the QCNDDMF file in the QSYS library device DDMDEVICE was rejected. The SNA error code was X’080F6051, indicating that the security code specified by the source program or the default values supplied by the system are not correct. Upon checking everything, we found that the QBRMS user profile was disabled. We enabled the profile and restarted the INZBRM process. The error was resolved. Hint 110 Backup Recovery and Media Services for OS/400 Figure 79. Network group entry on SYSTEM05 for SYSTEM09 The process of networking the two systems automatically starts a new subsystem, Q1ABRMNET, whose description is found in the QBRM library (Figure 80). An autostart job entry for this subsystem is also added to QSYSWRK on both systems. Figure 80. BRMS/400 networking subsystem: Q1ABRMNET 11.On SYSTEM05, check the system value QDATE, and make any corrections. 12.On SYSTEM09, check the system value QDATE, and make any corrections. 13.On SYSTEM09, issue the Initialize BRMS/400 (INZBRM) command as follows: INZBRM OPTION(*NETTIME) The time of the system that issues the INZBRM command is used to synchronize the rest of the systems in the network group. Alternatively, use option 8 (Set time) from the Change Network Group display to synchronize the times to selected systems within the network group. The selected systems use the time of the issuing system. 
This option is useful if you just want to synchronize the time of one system, rather than all of the systems. For example, you may want to synchronize the time of a system that was shutdown for maintenance. Usually, you need to reset the time when you perform a manual IPL, or where you are operating in different time zones, and Change Network Group SYSTEM05 ITSCNET Network group . . . . : *MEDINV Position to . . . . . Text . . . . . . . . . Centralized media network systems Receive media info . . *NONE *NONE, *LIB Type options, press Enter. 1=Add 4=Remove 8=Set time Remote Receive Opt System Network ID Media Info Status SYSTEM09 ITSCNET *NONE Active Work with Subsystems System: SYSTEM05 Type options, press Enter. 4=End subsystem 5=Display subsystem description 8=Work with subsystem jobs Total -----------Subsystem Pools------------ Opt Subsystem Storage (K) 1 2 3 4 5 6 7 8 9 10 QBATCH 0 2 QCMN 0 2 QCTL 0 2 QINTER 0 2 4 QSERVER 64000 2 5 QSNADS 0 2 QSPL 0 2 3 QSYSWRK 0 2 Q1ABRMNET 0 2 Chapter 5. BRMS/400 networking 111 someone may have entered the “correct” time. You should always synchronize the time with a system operational in a network, rather than from a system that you are about to add to the network. 14.Go to SYSTEM05. You can now merge the media inventory data that was saved prior to adding the system to the network under step 9. Enter the following command on SYSTEM05: CPYMEDIBRM OPTION(*FROMFILE) Note: This step is only necessary on systems that previously had BRMS/400 media inventory. Make sure you change the default from *TOFILE to *FROMFILE. Any media information that is inconsistent with the new network level media information is ignored. All entries that are not duplicates are added to the network media inventory. If duplicate media contains active files, you must keep track of the information. If no active files are present, you should re-initialize the tape with a new volume ID. 15.Enter the WRKMEDBRM command on SYSTEM05. You see the media inventory of SYSTEM09 and SYSTEM05. 16.Enter the WRKMEDBRM command on SYSTEM09. You see the media inventory for SYSTEM05 and SYSTEM09. We strongly recommend that you check on a daily basis to see if your network is operational and that the media information is moving across it. See 5.9, “Verifying the BRMS/400 network” on page 120, for additional information. 5.5 Removing a system from the network group AS/400 systems can be removed from the network group by using the following steps: It is important that network times remain in sychronization and the INZBRM command should be run periodically. Remember that a common media inventory update depends on the fact that a precise chronological sequence of media information is recorded across all systems in the network group. The INZ OPTION(*NETTIME) command ensures that the times match to within five seconds across the systems. Use care if times are synchronized near midnight because the command does not take date into account. Note When the media inventory has been copied back from the temporary file (QA1AMED or a file name that you designate), you need to review common classes for inconsistencies. For example, it is possible that Media Class SAVSYS on one system uses a media density of *QIC120, while the same media class on the other uses *FMT3490E. All media density now belongs to the network class SAVSYS. Note 112 Backup Recovery and Media Services for OS/400 1. 
On the system being removed from the network group, select option 4 (Remove) for all network entries on the Change Network Group display. This removes all entries from the network group table on the system that is being removed from the network group. When selecting option 4 (Remove), you are transferred to the Confirm Remove of Network Systems display. On this display, you are given the opportunity to remove media entries from this system's media inventory for media belonging to the other systems in the network group. By selecting the value *YES for the Remove media information field, you remove all media entries from this system's media inventory belonging to all systems remaining in the network group. If you select *NO, media entries are not removed from the systems that you are removing. Note: If a system name is displayed as inactive, you should use caution in using the *YES parameter, since it removes all media entries associated with that system name, even if the system name was never an active member of the network group. Another option that you can select is to rename (*RENAME) the media for the systems that you are removing. The media is renamed to the system name of the system that you are currently using. In the following example, shown in Figure 81 and Figure 82, SYSTEM01 and SYSTEM02 are renamed to SYSTEM03, which is the system that you are currently using. Figure 81. Change Network Group Figure 82. Removing systems from a network group 2. On any system remaining in the network group, select option 4 (Remove) for the system name being removed from the network group on the Change Network Group display. This removes the system name from all systems Change Network Group SYSTEM03 ITSCNET Network group . . . . : *MEDINV Position to . . . . . Text . . . . . . . . . Centralized media network systems Receive media info . . *LIB *NONE, *LIB Type options, press Enter. 1=Add 4=Remove 8=Set time Remote Receive Opt System Network ID Media Info Status 4 SYSTEM01 ITSCNET *NONE Active 4 SYSTEM02 ITSCNET *NONE Active SYSTEM04 ITSCNET *NONE Active Confirm Remove of Network Systems SYSTEM03 ITSCNET Press Enter to confirm your choices for 4=Remove. Press F12 to return to change your choices. Remove media . . . . . . . . . *RENAME *YES, *NO, *RENAME Remote Receive Opt System Network ID Media Inf Status 4 SYSTEM01 ITSCNET *NONE Active 4 SYSTEM02 ITSCNET *NONE Active Chapter 5. BRMS/400 networking 113 remaining in the network group. When selecting option 4 (Remove), you are transferred to the Confirm Remove of Network systems display. You should select *YES for the Remove media field. The system is removed completely from the network. 5.6 Changing the system name Renaming a system is not a task that is undertaken lightly. Many definitions may depend on the system name, not the least of which are PC networking definitions and the system directory. You must consult your network support personnel to resolve issues related to configuration objects. Implied in a system name change is a default local location, LCLLOCNAME, name change and, therefore, a change for BRMS/400. When this happens, BRMS/400 needs to perform the following actions: • Update the network to remove the old system name, and add the new system name. • Transfer all of the media previously owned by the old system name to the new system name. When changing the system name using BRMS/400, you need to check two items: • Is your system running under V3R1? • Is your system running under V3R2, V3R6, or later? 
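If you are not sure which case applies, you can confirm the current values before you start. For example:
DSPSFWRSC
DSPNETA
The Display Software Resources (DSPSFWRSC) command lists the installed licensed programs and their release levels (look for 5763BR1 or 5716BR1), and the Display Network Attributes (DSPNETA) command shows the system name and local location name that BRMS/400 uses.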
Information on changing system names for systems that are at V2R3 or V3R0.5 is documented in Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for V3R7. Please note that the V2R3 and V3R0.5 releases are no longer supported by IBM. 5.6.1 Changing the system name on V3R1 If you decide to change the system name that is currently operating at a V3R1 level, you need to perform the tasks that are listed in this section. In our discussion of the following example, we assume that the system name and local location name identical and you are planning to change the system name and local location name from OLDSYSN to NEWSYSN. We discuss other options at the end of these steps. Note: You may need to perform additional steps when you change your system name or network ID. The steps described here are required for BRMS/400 only. 1. Save library QUSRBRM on OLDSYSN. 2. If OLDSYSN is part of the BRMS/400 network, remove all the networked systems from OLDSYSN using the following steps. Otherwise, go to step 3. a. On OLDSYSN, enter: GO BRMSYSPCY b. Select option 4 (Change Network Group). c. Enter option 4 next to all network systems from the Change Network Group display. d. On the Confirm Remove of Network display, select *YES to remove media information. 114 Backup Recovery and Media Services for OS/400 e. Go to any other system that was in the network. f. Enter: GO BRMSYSPCY g. Select option 4 (Change Network Group). h. Select option 4 to remove OLDSYSN from the Change Network Group display. i. On the Confirm Remove of Network display, select *YES to remove media information. 3. Change the system name on OLDSYSN to NEWSYSN. This keeps the new system name in “pending” status until an IPL is performed. CHGNETA SYSNAME(NEWSYSN) LCLCPNAME(NEWSYSN) LCLLOCNAME(NEWSYSN) 4. Enter INZBRM OPTION(*DATA) on OLDSYSN. This changes the system name for each volume in the media management file (QA1AMM) from OLDSYSN to NEWSYSN according to the following logic: If BRMSYSNAME < or > LCLLOCNAME, and BRMSYSNAME = SYSNAME Then BRMSYSNAME = LCLLOCNAME 5. IPL the system. This changes the system name to NEWSYSN. 6. On NEWSYSN, enter CPYMEDIBRM OPTION(*TOFILE) to protect the media information that is unique to NEWSYSN. You need to add this later. 7. Add NEWSYSN back to the BRMS/400 network. See 5.4, “Adding systems to a network group” on page 104, for additional information. You should not treat NEWSYSN as your master system. 8. On NEWSYSN, enter: INZBRM OPTION(*NETSYS) FROMSYS(name of another system) This copies all of the media information from another system to NEWSYSN. You are prompted to answer several messages (BRM1519). Enter I for all of these messages. 9. From one of the other systems in the network, enter INZBRM OPTION(*NETTIME) to synchronize all the clocks within the network. The reason for selecting another system rather than the one for which you just changed the name is to avoid errors that relate to different time zones. For example, if the system that you are changing the name is in New York and the participating network of other AS/400 systems is in Rochester, Minnesota, you need to be careful when you synchronize the clocks. If you synchronize the clocks from the New York system, all Rochester systems are set to one hour early. You need to decide which time zone you want to use. Then, issue the INZBRM command from the system that is in the time zone you want. 
10.On NEWSYSN, enter: CPYMEDIBRM OPTION(*FROMFILE) This appends the media information that was unique to NEWSYSN and synchronizes the information with other systems participating in the BRMS/400 network. 11.Use the WRKMEDBRM command to check media information. If you have a system name different than your local location name (such as SYSTEM01, LOCA), and you want to change both of these to new values (such Chapter 5. BRMS/400 networking 115 as SYSTEM02, LOCB), you must first change the system name alone (SYSTEM01 to SYSTEM02) and IPL the system. After the IPL, you should follow all of the steps starting at step 3 in this example. 5.6.2 Changing the system name on V3R2, V3R6, or V3R7 Beginning with V3R2 and V3R6, renaming a system name or network ID can be done automatically. Use the following steps to change the system name: 1. Change the system name and IPL. 2. Ensure that there is no BRMS/400 activity and that you have a latest save of QUSRBRM library. 3. On the system where you just changed the name, enter: GO BRMSYSPCY 4. Select option 4 (Change Network Group). On the top right corner of the Change Network Group display, you see your new system name as shown in Figure 83. Figure 83. Removing the old system name from the network 5. Select option 4 to remove the old entry (OLDSYSN). After you change the system name and IPL the system, you must ensure that you change the BRMS/400 network immediately. The BRMS/400 media files still have not been updated to reflect the system name change. The BRMS/400 media volumes are still owned by the old system name. In addition, the other systems in BRMS/400 network still try to communicate with the old system name because they are not yet aware of the rename. To avoid any missing information in the shared media inventory data, you must change the BRMS/400 network immediately after the system IPL. Also, make sure that no BRMS/400 activity occurs between the IPL and adding your system to the BRMS/400 network. Important Change Network Group NEWSYSN APPN Network group . . . . : *MEDINV Position to . . . . . Text . . . . . . . . . Centralized media network systems Receive media info . . *LIB *NONE, *LIB Type options, press Enter. 1=Add 4=Remove 8=Set time Remote Receive Opt System Network ID Media Info Status _ SYSTEM01 APPN *NONE Active 4 OLDSYSN APPN *NONE Inactive _ SYSTEM03 *LOC *NONE Active 116 Backup Recovery and Media Services for OS/400 6. On the Confirm Remove of Network Systems display, specify *RENAME on the Remove media field so that ownership of the media inventory is transferred from OLDSYSN to NEWSYSN as shown in Figure 84. Figure 84. Confirm Remove of Network Systems 5.6.3 Other scenarios that involve a system name change Besides changing the system names when you have a system in the BRMS/400 network, there are other scenarios that also require similar steps as those previously described. We look at two example scenarios in the following sections. 5.6.3.1 Example 1 You have a system that is at V3R1 or later. You are going to move to a new system that has a new name. Let us assume that you are going from a V3R1 to a V3R2 system. The steps outlined here relate to BRMS/400 only, and not for a complete migration to V3R2. 1. On your V3R1 system, save the QUSRBRM library. 2. On your V3R2 system, complete these steps: a. Delete the BRMS/400 (5763BR1) licensed program. You do not need to do this if you are not changing an OS/400 release. b. Restore QUSRBRM. c. 
Use the RSTLICPGM command to restore the BRMS/400 licensed program if it is not already installed. The RSTLICPGM process performs any file conversions that may be required in the QUSRBRM library. File conversions generally involve adding new fields to database files or adding new files or data areas. File conversions can only happen when you are installing a licensed program. A restore operation does not perform any file conversions. d. You see that the old system name is still shown on the Change Network Group display. e. Select option 4 to remove the old system name from the network (although you really do not have a network). f. On the Confirm Remove of Network Systems display, select the option to *RENAME the media. This renames all your media information from your old system name to the new system name. g. Use the WRKMEDBRM command to check your media information. Confirm Remove of Network Systems NEWSYSN APPN Press Enter to confirm your choices for 4=Remove. Press F12 to return to change your choices. Remove media . . . . . . . . . *RENAME *YES, *NO, *RENAME Remote Receive Opt System Network ID Media Inf Status 4 OLDSYSN APPN *NONE Inactive Chapter 5. BRMS/400 networking 117 5.6.3.2 Example 2 Another example is where you have two CISC processors and you want to merge them to a single RISC processor. Assume that you have SYSTEM01 and SYSTEM02 as your CISC processors. Your RISC processor is called SYSTEM01. You can also have two CISC processors merging to a single CISC processor. The following steps are outlined for the tasks that need to be carried out on the CISC processors and the RISC processors. Steps for your CISC processors Follow these steps for the CISC processors: 1. Ensure you have a full save of QUSRBRM for SYSTEM01 and SYSTEM02. 2. Break up the network group. Remove SYSTEM01 in the network from SYSTEM02 along with the media information. 3. Remove SYSTEM02 from SYSTEM01 along with the media information. This way, both systems now have their own media information. 4. Save library QUSRBRM and QBRM on SYSTEM01. 5. Use the following command to copy to a database file on SYSTEM02 and save this file separately to a tape: CPYMEDIBRM OPTION(*TOFILE) CPYMEDI(*YES) If you use the defaults, the data is saved in the QA1AMED file in the QGPL library. Save this object to a tape. Steps for your RISC processors Complete these steps for your RISC processors: 1. Follow the instructions in AS/400 Road Map for Changing to PowerPC Technology, SA41-4150, to install your new RISC processor. This is called SYSTEM01. 2. If BRMS/400 is already installed, use the DLTLICPGM command to delete BRMS/400. 3. Restore the QUSRBRM and QBRM libraries from your SYSTEM01 (CISC) backups. 4. Run the Start Object Conversion (STROBJCVN) command for the QUSRBRM and QBRM libraries. This step is required as part of your upgrade from CISC to RISC. The STROBJCVN command is part of your RISC operating system. 5. Restore (RSTLICPGM) 5716BR1 from your distribution tapes. This performs any conversions for file layouts that are required in library QUSRBRM when going to the RISC operating system. 6. Apply the latest BRMS/400 PTFs on your RISC system. 7. At this point, you already have media information for your original SYSTEM01 (CISC processor). You need to add the media information from SYSTEM02. Restore the QA1AMED file from the QGPL library that you saved earlier. 8. 
Append the media information of SYSTEM02 to SYSTEM01 by using the command: CPYMEDIBRM OPTION(*FROMFILE) FILE(QGPL/QA1AMED) Ensure that you change the defaults for the CPYMEDIBRM command to *FROMFILE. 118 Backup Recovery and Media Services for OS/400 9. Use the WRKMEDBRM command to check your media inventory. 5.7 Joining two BRMS/400 networks When you have more than one BRMS/400 network group and you want to create a single network group, you must carefully plan how to do this. When you plan to join two networks, you must not do this by adding one system from one network to another network. Figure 85 has two BRMS/400 networks called NETWORK1 and NETWORK2. It illustrates the wrong way to join two BRMS/400 networks. Figure 85. Incorrect way of joining two BRMS/400 networks As shown in the example in Figure 85, SYSTEM1 from NETWORK2 is networked to SYSTEMA in NETWORK1. With this approach, SYSTEM2 remains unknown to all of the systems in NETWORK1. This is because SYSTEM1's knowledge of SYSTEM2's existence is erased when you run the INZBRM OPTION(*NETSYS) command on SYSTEM1. Therefore, you must split one of the networks before joining them so that all of the systems in the network have knowledge of each other. Figure 86 illustrates the correct way to join two BRMS/400 networks. Figure 86. Correct way to join the BRMS/400 network INZBRM *NETSYS on SYSTEM1 add SYSTEM1 on SYSTEMA Network 1 Network 2 SYSTEMA SYSTEMC SYSTEMB SYSTEM2 SYSTEM1 SYSTEM2 SYSTEM1 NETWORK 2 SYSTEM2 SYSTEM1 Break NETWORK 2 INZBRM *NETSYS add SYSTEM1 & SYSTEM2 on SYSTEMA Network 1 SYSTEMA SYSTEMC SYSTEMB SYSTEM1 SYSTEM2 INZBRM *NETSYS Chapter 5. BRMS/400 networking 119 As illustrated in Figure 86, the first step is to break the two systems apart and add them in the network. Here is an overview of what you need to do: 1. Remove all of the entries on the Change Network Group display on SYSTEM1 for SYSTEM2, including its media information. 2. Remove all of the entries on the Change Network Group display on SYSTEM2 for SYSTEM1, including its media information. 3. To save the media information for both systems, on SYSTEM1 and SYSTEM2, enter: CPYMEDIBRM OPTION(*TOFILE) CPYMEDI(*YES) 4. Add SYSTEM1 on any system in NETWORK1 using the Change Network Group option. In our example, we used SYSTEMA to add SYSTEM1. 5. On SYST_1, enter: INZBRM OPTION(*NETSYS) FROMSYS(SYSTEMA) This overwrites the media information files on SYSTEM1 from SYSTEMA. 6. On SYSTEM1, to synchronize the clocks for both systems based on the time on SYSTEMA, enter: INZBRM OPTION(*NETTIME) FROMSYS(SYSTEMA) 7. On SYSTEM1, to append SYSTEM1's media information, enter: CPYMEDIBRM OPTION(*FROMFILE) This synchronizes the media information of SYSTEM1 on all other AS/400 systems within the same network. You receive several messages when the files are overwritten. Reply with an I. 8. On SYSTEM1, use the WRKMEDBRM command to check the media information. 9. Repeat steps 4, 5, 6, 7, and 8 for SYSTEM2 by substituting the name of SYSTEM1 with SYSTEM2 in these steps. 5.8 Copying control groups between networked AS/400 systems Beginning with V3R1, you have the opportunity to specify whether you want to copy the control groups on your own system or send the information to other systems in the BRMS/400 network. The default when you copy the control group is *LCL, which means you are copying the control group to another name on your local system. You can specify a remote system name and the network identifier for the remote system. 
This copies the control group to the target system that you specified. BRMS/400 uses DDM to copy the information across to the QA1ACM file. You may find this facility useful, but be aware of some of the limitations.
Watch out for:
• Control group attributes are not copied across to the target system. These attributes revert to the system defaults. With V3R7, the subsystems to process and the job queues that you want to process as part of the control group are copied across, provided that the copy command is issued from a V3R7 system. This support is not available on releases prior to V3R7.
• The entries in the control group are copied across, but lists are not. If the entry in the control group is a list, you have to manually create the backup list on the target system in order for the control group to work successfully. Use the WRKLBRM command to create any missing backup lists.
• You are not given a warning message at the time of the copy to inform you that your control group has invalid data if it is run on the new system (for example, an unknown library). You may have to remove some backup items that are not supported on the target system. For example, V3R7 supports the save of the integrated file system through the *LINK value for the backup items. When you copy the control group to a V3R1 system, the *LINK value is not supported. You have to edit the control group to make the changes.
• The control group text is not copied across. You have to manually add the text on the target system.
We, therefore, recommend that you always review the control group even after the copy. You may need to tailor the values based on the operational requirements for that particular system.
5.9 Verifying the BRMS/400 network
It is vital that you check the accuracy of the shared data and, as a consequence, that the data exchange is working properly. Checking the communications link between systems (line descriptions, controller descriptions) alone is not enough. This does not guarantee that the BRMS/400 media inventory between all of your AS/400 systems is synchronized. You must check daily on the Change Network Group display that the participating AS/400 systems are in an Active status. Another easy way to check for media synchronization is to implement the following steps:
1. On a system in the BRMS/400 network, create a dummy media class NETCHK (for Network Checking). Because it is never used for real backups, there are no particular parameters to specify. You can use the defaults.
2. On each system (SYSTEMxx, where xx identifies the system), type:
ADDMEDBRM VOL(SYSxx) MEDCLS(NETCHK)
3. Every morning, on each system in your BRMS/400 network, use the job scheduler to run the CL commands:
RMVMEDBRM VOL(SYSxx) MEDCLS(NETCHK)
DLYJOB DLY(300)
ADDMEDBRM VOL(SYSxx) MEDCLS(NETCHK)
Once the CL command is submitted, your media should have a creation date equal to the current date. This should be true on the system that has run the command. If not, it means that the CL command has not been submitted, and you should check the job log. The other systems in the BRMS/400 network should also have the current date as the creation date for this media. If not, it means that the update has not been correctly sent between the systems. We created a small CL program with these CL commands and submitted it as a remote job on each of the AS/400 systems using the OS/400 job scheduler.
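A minimal sketch of such a program follows; the volume SYS01, the program name NETCHK, and the library MYLIB are examples only, and the body is simply the command sequence shown in step 3:
PGM
RMVMEDBRM VOL(SYS01) MEDCLS(NETCHK)
DLYJOB DLY(300) /* Allow time for the removal to reach the other systems */
ADDMEDBRM VOL(SYS01) MEDCLS(NETCHK)
ENDPGM
After compiling the source with CRTCLPGM, a job schedule entry such as the following runs it every morning:
ADDJOBSCDE JOB(BRMNETCHK) CMD(CALL PGM(MYLIB/NETCHK)) FRQ(*WEEKLY) SCDDAY(*ALL) SCDTIME(0600)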
If you use this method, you should also check the activities of the job scheduler. Assuming that today's date is 06 July 2000, the WRKMEDBRM command for each system should display the information shown in Figure 87. Figure 87. Media update to check the network If you see the information shown in Figure 88, you can conclude that SYSTEM01 did not receive the SYS04 media update. Figure 88. No update for SYS04 Work with Media SYSTEM01 Position to . . . . . . Starting characters Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 6=Work with media set 7=Expire 8=Move 10=Reinitialize ... Volume Creation Expiration Move Media Dup Opt Serial Expired Date Date Location Date Class Sts xxxxxx xx/xx/xx *NONE xxxxxxxx xx/xx/xx xxxxxxx xxxxxx xx/xx/xx *NONE xxxxxxxx xx/xx/xx xxxxxxx SYS01 *YES 07/06/00 *NONE *HOME *NONE NETCHK SYS02 *YES 07/06/00 *NONE *HOME *NONE NETCHK SYS03 *YES 07/06/00 *NONE *HOME *NONE NETCHK SYS04 *YES 07/06/00 *NONE *HOME *NONE NETCHK xxxxxx xx/xx/xx *NONE xxxxxxxx xx/xx/xx xxxxxxx Work with Media >>>> SYSTEM01 Position to . . . . . . Starting characters Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 6=Work with media set 7=Expire 8=Move 10=Reinitialize ... Volume Creation Expiration Move Media Dup Opt Serial Expired Date Date Location Date Class Sts xxxxxx xx/xx/xx *NONE xxxxxxxx xx/xx/xx xxxxxxx xxxxxx xx/xx/xx *NONE xxxxxxxx xx/xx/xx xxxxxxx SYS01 *YES 07/06/00 *NONE *HOME *NONE NETCHK SYS02 *YES 07/06/00 *NONE *HOME *NONE NETCHK SYS03 *YES 07/06/00 *NONE *HOME *NONE NETCHK >>>> SYS04 *YES 07/04/00 *NONE *HOME *NONE NETCHK xxxxxx xx/xx/xx *NONE xxxxxxxx xx/xx/xx xxxxxxx 122 Backup Recovery and Media Services for OS/400 A possible explanation is that communications worked on July 4, but a subsequent problem occurred. Note: Because communications are often subject to failure or disruption, it is worth using one media per system to have a safe BRMS/400 network. This ensures consistent data between the systems at least once during the day (since a network failure may occur after the successful updates). Check the BRM log since messages are sent to the log if BRMS/400 encounters update problems. © Copyright IBM Corp. 1997, 2001 123 Chapter 6. Saving and restoring the integrated file system This chapter discusses the commands and procedures that are necessary to set up when saving the integrated file system (IFS) with BRMS/400. It provides examples to help you develop and carry out your backup strategy. We chose the LAN Server/400 environment as our base to discuss the integrated file system concepts and the additional steps that you need to consider when defining your overall save and restore strategy. Other environments, such as the Integration of Lotus Notes on AS/400 and the Integration of Novell NetWare on AS/400, are not discussed in any detail in this chapter. However, the concept of saving these solutions using the integrated file system remains the same. The Integration of Lotus Notes on AS/400 uses a separate approach to perform the overall saves. For the server data, it uses the ADSTAR Distributed Storage Management (ADSM) product for backup and recovery purposes. This allows the Lotus Notes Administrator to perform a save and restore operation on individual documents and folders within the Lotus Notes database. 
For additional information, see the following list of publications that address the save and restore requirements for the Integration of Lotus Notes on the AS/400 system: • OS/400 Integration of Lotus Notes, SC41-3431 • Setting Up and Implementing ADSTAR Distributed Storage Manager/400, GG24-4460 • Using ADSM to Back Up Lotus Notes, SG24-4534 • Backup and Recovery - Basic, SC41-4304 The Integration of Novell NetWare on AS/400 requires an approach that is similar to saving and restoring the LAN Server/400 environment. Novell NetWare has its own backup and recovery solution that is widely used and popular among their users, such as the solutions offered by ARCserve and SBackup. These solutions save and restore Novell NetWare server data only. The licensed program library, network server descriptions, and storage spaces are still saved by OS/400 using the appropriate commands. For additional information on save and restore for Novell NetWare, see Integrating AS/400 with Novell NetWare, SC41-4124. We recommend that you obtain the appropriate Informational APARs related to Lotus Notes and Novell NetWare integration for important information. 6.1 Overview of IFS The integrated file system (IFS) design allows you to save files from: • Other Integrated PC Servers • OS/2 LAN server systems The files can reside on the local Integrated PC Server (formerly known as the FSIOP) or on a remote server. This ability makes the AS/400 system a powerful part of the domain. You can use the AS/400 system to save the server data from any server system in the domain such as LAN Server/400. To save or restore the IFS data, you have to use the Save Object (SAV) command and the Restore Object (RST) command on the AS/400 system. Using these commands, you can save or restore the entire integrated file system. As such, 124 Backup Recovery and Media Services for OS/400 OS/400 is now part of the integrated file system (QSYS.LIB) along with folders and documents (QDLS). Because these two file systems and their associated objects are saved by native OS/400 commands, they have to be omitted from the IFS save and restore process. For example, the QSYS file system is saved by such commands as SAVSYS and SAVLIB. The QDLS file system is saved by the SAVDLO command. For additional information on how to save and restore using the SAV and RST commands, see the Backup and Recovery - Basic, SC41-4304. For information on IFS, see Integrated File System Introduction, SC41-5711. A major benefit of the Integrated PC Server and LAN Server/400 is the ability to include your LAN Server/400 backup procedure into your AS/400 backup procedure. However, when you use the Integrated PC Server, you create additional objects on the AS/400 system that need to be saved outside the control of the SAV and RST command. They are: • Configuration objects associated with the Integrated PC Server • Licensed program product libraries • Network server storage space associated with the network description • Storage spaces shared by all Integrated PC Servers on the AS/400 system When you design a solution to save IFS, especially when you have the Integrated PC Server, you need to consider how you back up the objects in the above list. We use the LAN Server/400 environment as an example to define the save and restore strategy for IFS using BRMS/400. The rest of this chapter is divided into two parts. The first part contains an overview of the different components that are created when you use the Integrated PC Server for LAN Server/400. 
It discusses the importance of ensuring that you have proper authority to save and restore information. It also discusses the performance implications on your save and restore operation when you use the Integrated PC Server. The second part of this chapter addresses how you can use BRMS/400 to save and restore various objects that are created when you have the Integrated PC Server installed and configured on your system to be used by LAN Server/400. 6.2 Planning for saving IFS directories To develop a save and restore strategy, you must decide what to save and how often to save. Before you decide on a strategy, you must understand the LAN Server/400 objects and their contents. 6.2.1 Storage spaces The Integrated PC Server does not have its own disks. It uses AS/400 storage space for storing client data and sharing network files. A storage space is the AS/400 storage allocated for use by the Integrated PC Server. You can allocate up to 8000 MB of storage for each storage space you create. You then link the storage space as a PC drive to the server that runs on the Integrated PC Server. Storage spaces contain your LAN data. You may decide to create two or more storages spaces for each network server description. In this way, you can store data, such as PC programs, that does not change often in one storage space. And you can store user data that changes often in another storage space. Now, Chapter 6. Saving and restoring the integrated file system 125 you can save the user data more often and possibly save the PC program data only when you save the entire AS/400 system. When we discuss backup and restore procedures, we have to make a distinction between the two kinds of storage spaces that the Integrated PC Server uses. 6.2.1.1 Server storage space These storage spaces are in file allocation table (FAT) format. They are created when you create a network server description. They contain licensed programs and system files such as OS/2 code, LAN Server code, Integrated PC Server device drivers, Integrated PC Server administration applications, CONFIG.SYS, NET.ACC, SWAPPER.DAT, and dump files. This server storage space takes about 80 MB of disk storage per configured Integrated PC Server. Save and restore for the server storage spaces can be done using the SAVOBJBRM command and the RSTOBJBRM command, or through the SAVLICPGM command and the RSTLICPGM command. These storage spaces are stored in library QUSRSYS and QXZ1. 6.2.1.2 Network server storage space (storage spaces) These storage spaces are created and used by the LAN Server/400 administrator (usually, but can be created by users). They hold the directories and files that make up the entire High Performance File System (386 HPFS) disk volume. Network server storage spaces are often simply called “storage spaces”. The server storage spaces are often referred to as the C: drive, D: drive, and E: drive. Throughout this chapter, the term “storage space” refers to network server storage spaces unless indicated otherwise. 6.2.2 LAN Server/400 structure You can find pieces of LAN Server/400 in several parts of the AS/400 system. They are explained in the following sections. Library QXZ1 QXZ1 holds a number of objects, three of which are the base from which the AS/400 system creates network server descriptions when the CRTNWSD command is used. These storage spaces are: • QFPHSYS1 • QFPHSYS2 • QFPHSYS3 Library QUSRSYS This is where the disk images for each network server description are stored. 
You find that there are two “server storage” areas for each network server description that is created. For the network server description “SRVLS40A” shown in our example in Figure 89 on page 126, the two server storage spaces are called SRVLS40A1.SVRSTG (also referred to as your C: drive) and SRVLS40A3.SVRSTG (also referred to as your E: drive). The names consist of the server name followed by a suffix of 1 or 3. SRVLS40A1 holds the files and programs to boot the Integrated PC Server. SRVLS40A3 holds the domain control database information.
Figure 89. LAN Server/400 Integrated PC Server objects
(Figure 89 is a diagram of this structure: the root directory contains QSYS.LIB, /QFPNWSSTG, /QLANSrv, and /QLANSrvSR. The QFPHSYS1, QFPHSYS2, and QFPHSYS3 server storage spaces reside in QXZ1, with secondary language versions in QSYS29nn. The SRVLS40A1 and SRVLS40A3 server storage spaces reside in QUSRSYS. Storage spaces disk1, disk2, and disk3 in /QFPNWSSTG are linked to server SRVLS40A as drives K:, L:, and M:, and the corresponding directories appear under /QLANSrv, along with drives on the remote server SRVOS2A.)
Library QSYS29nn (29nn is a language number)
This library contains licensed system code for secondary languages. It holds the national language versions of the code that is stored in QXZ1. It contains two objects: QFPHSYS2.SVRSTG and QFPHSYS3.SVRSTG (note that QFPHSYS1.SVRSTG does not have an NLS version).
Integrated File System directory /QFPNWSSTG
/QFPNWSSTG holds the storage spaces that you link to the network server descriptions. In this view, the storage spaces are seen as solid blocks of data; there is no way to see individual files or directories.
Integrated File System directory /QLANSrv
This directory contains the LAN Server/400 file system as a hierarchical structure of directories and files that can be saved and restored individually or in groups.
Integrated File System directory /QLANSrvSR (V3R1)
This directory is a temporary storage area for files that are in the process of being saved to AS/400 tapes. The QLANSrvSR directory exists on an AS/400 system running V3R6 or later, or V3R2 only if you upgraded your AS/400 system from V3R1. OS/400 does not use the directory in either V3R6, V3R7, or V3R2.
6.2.3 Memory requirements for save and restore
The QLANSrv file system is sensitive to the size of the AS/400 memory pool. To achieve acceptable save performance, you must have at least 15 MB of main storage in the pool in which your save and restore job is running. For additional information on how to calculate the memory pool size, see LAN Server/400 Administration (part of the IBM Online Library SK2T-2171) or Informational APAR II09313. Please ensure that you are not affecting other operations on the system by taking memory for the save operation. Other factors, such as the size of your files, tape speed, disk arms, and your processor feature, also influence the speed at which the AS/400 system can save or restore your data. For more information on managing system activities, see Work Management, SC21-8078. If less than the recommended memory is allocated to the pool where the save is running, the save operation may take significantly longer. Also, if there is other work being run in the pool, you may have to increase the pool size.
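As a minimal sketch of this kind of tuning (the pool name and size shown here are assumptions; which pool your save job actually runs in depends on your subsystem configuration), you could check the current pool sizes with the Work with System Status (WRKSYSSTS) or Work with Shared Pools (WRKSHRPOOL) command and, if your backup job is routed to a shared pool such as *SHRPOOL1, enlarge it before the save:
WRKSHRPOOL
CHGSHRPOOL POOL(*SHRPOOL1) SIZE(20000)
The SIZE value is specified in kilobytes, so 20000 KB comfortably exceeds the 15 MB minimum discussed above. Remember to return the pool to its original size after the save if other work depends on that storage.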
6.2.4 Authority to save IFS directories
Many organizations allow users to back up the system (or certain components of it) and give those users *SAVSYS authority. The LAN Server/400 environment works differently from the standard AS/400 system, so users performing the backup and restore operation for LAN Server/400 may need additional authority. To properly back up LAN Server/400, three types of data should be saved, and each type has authority requirements. The three types are:
• AS/400 configuration information: AS/400 configuration information may be saved using the SAVCFG command. In BRMS/400, this is saved using *BKUGRP or *SYSGRP in the control group with *SAVCFG as a backup item. Users need only *SAVSYS authority to use this command.
• OS/2 LAN Server configuration information: OS/2 LAN Server configuration information is kept in a server storage space called the E: drive (SRVLS40A3.SVRSTG in our example shown in Figure 89). The E: drive contains important information for the LAN Server code that runs on the Integrated PC Server, such as the domain control database and the NET.ACC file. This server storage space resides in library QUSRSYS, and its name is the same as the network server description with a suffix of 3. Once again, *SAVSYS authority is sufficient to save this object. For example, to save the E: drive for server SRVLS40A through BRMS/400, use the following command:
SAVOBJBRM LIB(QUSRSYS) OBJ(SRVLS40A3) DEV(*MEDCLS) MEDPCY(*SYSPCY)
The BRMS/400 control group (*BKUGRP) can also be used with a backup item of *ALLUSR to save objects in the QUSRSYS library.
• User data: There are two ways to save user data (as complete storage spaces or as individual files). Saving the user data as complete storage spaces is good for disaster recovery, but files cannot be restored individually. You have to restore the entire storage space or nothing. Saving data as individual files is slower, but the advantage here is that you can restore directories or files individually. Users who only save user data as complete storage spaces need only have *SAVSYS authority to execute the SAV command:
SAV DEV('/qsys.lib/tap01.devd') OBJ(('/qfpnwsstg/srvls40a1'))
In BRMS/400, you must add a list entry using the Work with Lists using BRM (WRKLBRM) command and add a link type of *LNK using the backup control group as shown in Figure 90. We discuss link lists in more detail later in this chapter.
Figure 90. Change Link List
6.2.4.1 How authority is implemented with LAN Server/400
When you first create a network server description, you have to specify a group profile name in the Group profile parameter. The default is *ALL, which means all user profiles on the system are automatically registered on the LAN Server. This can become a security exposure because you may not want everyone in your organization to have the privileges of a LAN administrator.
Some of the most common errors that you may come across when saving IFS information are related to improper authorities on your user profiles. A common occurrence is when you do not have *ALLOBJ special authority to save IFS information or when you are not registered as a LAN administrator. For example, if you belong to the *SYSOPR user class, are not part of the Integrated PC Server group profile, and are not enrolled on the LAN Server as an administrator, when you try to save IFS directories, you see messages similar to those shown in Figure 91.
Change Link List (CHGLNKLBRM) Type choices, press Enter. List . . . . . . . . . . . . . . > STGLNK Character value Objects: Name . . . . . . . . . . . . . '/qfpnwsstg/srvls40a1' Include or omit . . . . . . . *INCLUDE *INCLUDE, *OMIT + for more values Directory subtree . . . . . . . *ALL *ALL, *DIR, *OBJ Text . . . . . . . . . . . . . . > 'Storage Link for SRVLS40A' We recommend that you create a group profile for your Integrated PC Server (for example, FSIOP) and make the users part of the FSIOP group profile. This way, only users that belong to the FSIOP group profile are registered on the LAN server. If you want to perform backup and restore functions on individual files, you must have administrator authority on the LAN server to ensure all user data is saved. If the user does not have administrator authority on the LAN server, they can only save and restore files to which they have authority. You must remember to change the QBRMS user profile to have the correct group profile. Hint Chapter 6. Saving and restoring the integrated file system 129 Figure 91. Examples of authority issues with IFS Notice in example 2 in Figure 91 that the backup control group within BRMS/400 starts and completes successfully. However, the IFS information has not been saved because the user OPER1 was not authorized to the system. We now look at the ways in which authority to IFS information can be granted so that the save and restore functions can be completed. Our examples are based on using IFS with LAN Server/400 with an Integrated PC Server. At the time this redbook was written, we were unable to perform our tests using the Integration of Novell NetWare for OS/400. Example 1 - Save of IFS directory using SAV command --------------------------------------------------------- SAV DEV('/qsys.lib/davea.lib/ifs.file') OBJ(('qlansrv/nws/itsosrv9/dsk/k/ cid/*')) NetBIOS error on session with LAN Server ITSOSRV9. Error exchanging security information for user OPER1 on LAN Server ITSOSRV9. Object not found. No objects saved or restored. Example 2 - Save of IFS directory with BRMS/400 ----------------------------------------------------- Begin processing for control group ADTEST type *BKU. Interactive users are allowed to stay active. Starting save of list ADTEST to save file. Error exchanging security information for user OPER1 on LAN Server ITSOSRV9. Object not found. No objects saved or restored. Starting save of BRM media information at level *OBJ to device *SAVF. Member QA1AOBJ added to output file QA1AOBJ in library QTEMP. 12 objects saved from library QUSRBRM. Save of BRM media information at level *OBJ complete. Control group ADTEST type *BKU processing is complete. Description of MSGID CPDA434 --------------------------------- Message ID . . . . . . : CPDA434 Severity . . . . . . . : 10 Message type . . . . . : Diagnostic Date sent . . . . . . : 07/01/00 Time sent . . . . . . : 23:14:45 Message . . . . : Error exchanging security information for user OPER1 on LAN Server ITSOSRV9. Cause . . . . . : The LAN Server/400 file system has encountered an error when authenticating a user with a LAN Server. Recovery . . . : Ensure the following: - The user is enrolled in the LAN Server domain containing this LAN Server. - The AS/400 password matches the LAN Server password. - The user is enabled in the domain. If the domain controller for this domain is an AS/400 network server, ensure that the user's password on that AS/400 is the same as the password on the local AS/400. 
When you create a user profile, it is important to understand that the LAN server can only accept an 8-character user profile, where the AS/400 system can accept a 10-character user profile. Remember 130 Backup Recovery and Media Services for OS/400 6.2.4.2 Granting appropriate authority to users There are three ways to grant the authority needed to save and restore all LAN server files: • Give *ALLOBJ special authority to the user profile performing the save or restore operation. Even if the user profile has *SAVSYS special authority, you still need to grant *ALLOBJ authority to perform the save and restore operations without authority problems. For example, you can have a user profile of IFSOPER created or changed as follows: CRTUSRPRF or CHGUSRPRF USRPRF(OPER1) SPCAUT(*ALLOBJ)1 GRPPRF(FSIOP) 2 In this example, both the FSIOP group profile and the OPER1 user profile were of the *USER user class. However, when the OPER1 profile was enrolled in the LAN server, it contained ADMIN privileges, which are required to perform the save and restore functions of the LAN Server/400 information. The reason the LAN server enrolled the user with ADMIN privileges is due to the fact that the user contained *ALLOBJ special authority. • Grant OPER1 *ALL authority to the files to be saved. Unfortunately, there is no way to grant authority to all files in all sub-directories. Authority may only be granted to one sub-directory at a time, for example: CHGAUT OBJ('/QLANSrv/NWS/SRVLS40A/DSK/K/*') + USER(username) OBJAUT(*ALL) CHGAUT OBJ('/QLANSrv/NWS/SRVLS40A/DSK/K/MYDIR1/*') + USER(username) OBJAUT(*ALL) CHGAUT OBJ('/QLANSrv/NWS/SRVLS40A/DSK/K/MYDIR1/SUB1/*') + USER(username) OBJAUT(*ALL) • The third approach is to grant the user LAN administrator authority without actually granting the *ALLOBJ authority on the AS/400 system by using the Submit Network Server (SBMNWSCMD) command to grant the authority only in the LAN server as follows: SBMNWSCMD CMD('NET USER OPER1 password /ADD /PRIV:ADMIN') + SERVER(SRVLS40A) You can check the authorization by using the SBMNWSCMD command as follows: 1 Special authority *ALLOBJ is required for saving IFS directories. 2 The group profile specified here is the name of the group profile that you used when you created the network server description. If you did not select any group profiles, you do not have to select this parameter. The user is automatically enrolled on the LAN server. Notes This method grants the user ADMIN privileges on the LAN server as well as gives them authority to all objects on the AS/400 system. For that reason, it may be undesirable from a security standpoint to use this approach. Note Chapter 6. Saving and restoring the integrated file system 131 SBMNWSCMD CMD('NET USER OPER1') SERVER(SRVLS40A) You can see the results on the AS/400 command line (with detailed messages) as follows: Full Name Comment User's comment Parameters Country code 000 (System Default) ====>> Privilege level ADMIN <<====== Operator privileges None Account active Yes Account expires Never Password last set 07-01-00 07:00PM Password expires 08-01-00 07:00PM Password changeable 07-01-00 07:00PM Password required Yes User may change password Yes Requesters allowed All Maximum disk space Unlimited Domain controller Any Logon script Home directory Last logon Never Logon hours allowed All Group memberships *ADMINS The command completed successfully. Command submitted to network server SRVLS40A. 
With this approach, you do not need to grant *ALLOBJ special authority to OPER1 user profile or enroll the user profile to the FSIOP group. 6.2.4.3 Special authority information You may be familiar with saving other types of AS/400 objects. You should be aware that authority information for LAN Server for OS/400 objects is saved with the object rather than separately. Using the SAVSECDTA command does not save authority information for the Server for OS/400 file system. *SAVSYS special authority specified on an AS/400 user profile does not have any effect on the ability to save or restore objects using the LAN Server for OS/400 file system. 6.2.4.4 Users with password *NONE No matter what authority users have on the AS/400 system, if their password is *NONE, they are downloaded to the LAN server in an inactive state when you create the network server description with the GRPPRF(*ALL) default option. Therefore, such user profiles cannot save or restore the QLANSrv file system. The password must be reset from the AS/400 system before this user can log on to the LAN or gain access to the QLANSrv file system to perform save and restore operations. 6.2.4.5 Group profiles The AS/400 system allows group profiles to sign on. OS/2 LAN server does not. Therefore, if a user ID, such as QSECOFR, is migrated as a group to the LAN server, that user profile cannot sign on and cannot perform save or restore This authority is overwritten if the user profile is changed on the AS/400 system. You must run the SBMNWSCMD command again each time the profile changes when you vary on the Integrated PC Server. Note 132 Backup Recovery and Media Services for OS/400 operations. Ensure that whichever user is to perform these operations is not using a user ID that is considered a group profile on the LAN server. To determine which profiles on the LAN are considered group profiles, use the following command: SBMNWSCMD CMD ('net group') SERVER(yoursvr) If you find QSECOFR in the list, change its profile to be a member of the group that is downloaded to the LAN server if this user ID is to perform save or restore operations. 6.2.4.6 Which job saves QLANSrv files? Saves in the integrated file system are performed by the user profile of the job and not by QSECOFR. Therefore, any user who has administration authority to the LAN can perform save operations provided that user has *ALLOBJ authority on the AS/400 system. 6.2.5 Restricted state To save Integrated PC Server data in a restricted state, your AS/400 system must have either a domain controller or a backup domain controller configured. It is not possible to properly vary on a network server description without access to a controller. If you have multiple Integrated PC Servers on your AS/400 system, only one of them needs to be a controller. The others can access the controller using the interconnect function. Before you save or restore the local files for LAN Server/400, we recommend that you put the AS/400 system into a restricted state. A restricted state prevents workstations and jobs from using the system and, therefore, ensures that no changes can be made to the QLANSrv files during the save or restore process. You can put the AS/400 system into a restricted state by ending all of the subsystems. You can put only the Integrated PC Server into a restricted state by ending the monitor job. 6.2.5.1 Putting the AS/400 system into a restricted state This section shows you an example of how to put the AS/400 system into a restricted state. 
Use the ENDSBS command: ENDSBS SBS(*ALL) OPTION(*IMMED) DELAY(*NOLIMIT) This leaves only the system console operational. If you cannot put the AS/400 system into a restricted state, you must verify that no files are open using the When you put the AS/400 system into a restricted state, the Integrated PC Servers also enter a restricted state. When the Integrated PC Server is in a restricted state, the network server running on the Integrated PC Server is running, but it cannot be accessed by requesters. However, the network server can be accessed by functions running on the AS/400 system. This restricted state is equivalent to stopping the NETLOGON service on an OS/2 LAN server to perform backup functions. Note Chapter 6. Saving and restoring the integrated file system 133 Work with NWS Sessions (WRKNWSSSN) command or by using SBMNWSCMD CMD('NET FILE /S'). Be aware that some applications close files between writes, so users can actually be using a file that appears closed to the administrator. You should put the AS/400 system into a restricted state only when you want to save files that are stored on the AS/400 system itself. 6.2.5.2 Putting the Integrated PC Server into a restricted state This section shows you an example of how to put the Integrated PC Server into a restricted state. To put only the Integrated PC Server into a restricted state, end the monitor job. The monitor job runs in the QSYSWRK subsystem, and the job name corresponds to the name of the Integrated PC Server it is running. Figure 92 shows the domain controller, DCL10NWS, as active. Server ASL10NWS is in a pending state. In Figure 93 on page 134, you can see a monitor job for each of these servers. To end the monitor job, type 4 in the Options column. Figure 92. Displaying the monitor job example Work with Network Server Status System: SYSAS400 Server type . . . . . : *LANSERVER Type options, press Enter. 7=Display users 8=Work with configuration status 9=Work with aliases 10=Work with sessions 12=Display statistics 14=Restart server Domain Opt Server Status Text __ SYSAS400 Network server domain __ ASL10NWS PENDING *BLANK __ DCL10NWS ACTIVE *BLANK __ L10SRV INACTIVE Another Network Server __ TST10NWS INACTIVE *BLANK __ RJFTEST INACTIVE *BLANK Bottom Parameters or command ===> wrkactjob F3=Exit F4=Prompt F5=Refresh F6=Print list F9=Retrieve F11=Display type F12=Cancel F17=Position to 134 Backup Recovery and Media Services for OS/400 Figure 93. Ending the monitor job example 6.2.6 Integrated PC Server on or off? For some types of save and restore, the Integrated PC Servers must be varied on. For others, they need to be varied off. 6.2.6.1 Varied ON The Integrated PC Servers should be varied on when you want to access the data through the QLANSrv to save individual files or directories. 6.2.6.2 Varied OFF The Integrated PC Servers should be varied off when you want to save storage spaces in /QFPNWSSTG, QXZ1, or QUSRSYS (whether they are server storage spaces or network server storage spaces). Table 3 on page 148 shows the server status requirements for different save and restore operations. 6.2.6.3 BRMS/400 considerations Please note that we do not recommend the use of *EXIT in the backup control groups to vary off your Integrated PC Server prior to the save because there are different run times for this step. Besides, you may receive messages that you may want to answer. We recommend that you vary off the Integrated PC Server manually before you start to save your storage space. 
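As a simple sketch of this manual step (the network server description name SRVLS40A is carried over from the earlier examples; substitute your own), you can vary the Integrated PC Server off before the save and back on afterward with the Vary Configuration (VRYCFG) command:
VRYCFG CFGOBJ(SRVLS40A) CFGTYPE(*NWS) STATUS(*OFF)
VRYCFG CFGOBJ(SRVLS40A) CFGTYPE(*NWS) STATUS(*ON)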
The vary off for Integrated PC Server can take several minutes. You should wait to see the all of the following messages before you start your save operation: CPC2665 - Vary off complete for network server SRVLS40A CPC2608 - Vary off complete for line ITSCTRN CPIA407 - Monitor job for network server SRVLS40A ended Work with Active Jobs SYSAS400 05/25/00 13:35:38 CPU %: .0 Elapsed time: 00:00:00 Active jobs: 49 Type options, press Enter. 2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message 8=Work with spooled files 13=Disconnect ... Opt Subsystem/Job User Type CPU % Function Status __ QPADEV0009 USERC INT .0 CMD-WRKNWSSTG DSPW __ QPADEV0011 USERA INT .0 CMD-WRKACTJOB RUN __ QSERVER QSYS SBS .0 DEQW __ QSERVER QPGMR ASJ .0 EVTW __ QSPL QSYS SBS .0 DEQW __ PRT01 QSPLJOB WTR .0 MSGW __ QSYSWRK QSYS SBS .0 DEQW __ ASL10NWS QSECOFR BCH .0 PGM-QFPAMON TIMW _4 DCL10NWS QSECOFR BCH .0 PGM-QFPAMON TIMW More... Parameters or command ===> F3=Exit F5=Refresh F10=Restart statistics F11=Display elapsed data F12=Cancel F23=More options F24=More keys Chapter 6. Saving and restoring the integrated file system 135 6.3 Save and restore strategies Because LAN Server/400 uses various parts of the AS/400 system, your strategy for saving and restoring LAN Server/400 and the data it manages will depend on your company’s needs. In this section, we show you how to back up the LAN server files. You need to incorporate these procedures into your normal AS/400 backup procedures so that they become routine. The most important single point about backing up your LAN server files is to be clear why you are saving objects and under which circumstances you plan to restore them. The ways in which you plan to restore objects determines how you should save them. You can also save your LAN Server/400 data by saving the entire storage space or you can save a portion of the storage space such as a directory or a file. 6.3.1 Performance impact Your save/restore strategy can have significant impacts on the time it takes to save the AS/400 system. Consider this carefully. When you use the same backup control group within BRMS/400 to save your IFS information, you can end up with different results depending on the status of your Integrated PC Server. By default, the link list (LINKLIST or *LINK) is set to save the entire IFS structure with the exception of QSYS.LIB and the QDLS file system. If your Integrated PC Server is in a varied on status, the control group saves the /QLANSvr file system (files and directories). If the Integrated PC Server is in a varied off status, the control group saves the /QFPNWSSTG file system (storage space). You receive CPFA09E messages indicating that the “object is in use” depending on the status of your Integrated PC Server, and the backup ignores that object. See 6.4.1, “Setting up BRMS/400” on page 138. Hint 136 Backup Recovery and Media Services for OS/400 6.3.2 Saving regularly Your backup strategy should include saving the storage spaces (also sometimes called network drives) regularly. Create several storage spaces. Store data that changes infrequently on different storage spaces from data that changes frequently. Using this strategy, you can save the storage space containing the infrequently changed applications or data less often, perhaps only when you save the entire AS/400 system. The following sections provide examples and tips on how to save the entire system, storage spaces only, and the other objects that are part of the LAN Server/400 product. 
It is important that you plan the frequency of your saves to ensure that you always have a usable backup available in the event of a system failure or disaster. For example, you may decide on the following strategy to ensure that your user data is thoroughly backed up. Save recommendations Save the network server storage spaces as a complete entity. This allows you to restore the majority of your data with one command. See LAN Server/400 Saving the entire storage space through /QFPNWSSTG is significantly faster than saving only a portion of the storage space through QLANSvr unless you are saving only a small portion of the storage space. For example, when saving through /QFPNWSSTG, the save/restore data rate is measured at about 2 GB to 8 GB per hour*. When saving through QLANSvr, the save/restore data rate is approximately 75 MB to 500 MB per hour*. However, if you must later restore the data, you must restore the entire storage space. If you save only a portion of the storage space, you can restore only the files you need. Also, you do not have to vary off the Integrated PC Server to perform this type of save operation. Finally, note that if you save or restore a directory from a storage space, performance is 2.5 to 3 times slower than saving a folder and its documents using the SAVDLO command and the RSTDLO command. * Note: The actual data rate achieved depends on a number of factors. Things that affect results are the CPU model, the tape drive you are using, the pool size, and the size of the files you are saving. Note BRMS/400 currently does not have the ability to perform incremental saves or to save changes since the last time you performed a full save using the backup list items. In addition, BRMS/400 does not allow you to perform saves for IFS information on remote systems. Both of these functions are available through the standard AS/400 interface using the SAV command. When designing your backup and recovery strategies using BRMS/400, you must consider these important limitations. BRMS/400 limitations Chapter 6. Saving and restoring the integrated file system 137 Administration (part of the IBM Online Library SK2T-2171) for additional information. If your environment involves a significant amount of daily change, you may find it better to save the entire storage space daily. Consider saving the domain control database (DCDB) or the E: drive either weekly or whenever you have made significant administration changes that include alias creations. See LAN Server/400 Administration (part of the IBM Online Library SK2T-2171) for detailed information. Consider saving your E: drive daily to a save file while the Integrated PC Server is varied off. This allows you to save any data that was in cache. If you do not make many changes to user profiles or aliases, keep fairly current copies of the E: drive on hand. If your data changes frequently, and you must keep the Integrated PC Server running, you must save at the directory level. If the AS/400 system takes too long to save at the directory level, consider saving the entire storage space (for which you must vary off the Integrated PC Server). Restoring individual directories with this technique is not easy. One way to restore directories (if you do not need the entire storage space) is to first create a temporary storage space in the QLANSrv directory. You can restore into the temporary storage space and selectively restore required files from the temporary storage space. 
After the restore is completed, you must delete the temporary storage space. 6.4 Saving IFS using BRMS/400 The integrated file system information is saved using a control group to perform the save. There are no BRMS/400 commands to save the IFS information. As an overview, BRMS/400 saves the various components of IFS information: • Configuration objects (network storage descriptions) are saved by using the *SYSGRP or by the *BKUGRP (item *SAVCFG) control groups. • Licensed Programs (QXZ1, for example) are saved using the *SYSGRP (item *IBM) control group. • Domain controller database (E: drive) is saved using the *BKUGRP (item *ALLUSR) control group. • Storage spaces (most are located in /QFPNWSSTG) are saved using the *BKUGRP (LINKLIST or *LINK item) control group. • Directories and files located in /QLANSrv are saved using the *BKUGRP (LINKLIST or *LINK item) control group. Besides using the default BRMS control groups, you can create your own control groups and backup list items to meet your save and restore requirements. To avoid wide variances in save time for QLANSvr, vary the Integrated PC Server off and back on. This cleans up OS/2 cache data, for example, which can cause the save times to increase. Hint 138 Backup Recovery and Media Services for OS/400 6.4.1 Setting up BRMS/400 With OS/400 V3R6 and V3R2, the IFS information that you want to save using BRMS/400 is recorded in a backup list that is called LINKLIST. The list type is *LNK. With OS/400 V3R1, BRMS/400 does not support either a backup list or a link list to save IFS data. See 6.6, “Saving and restoring V3R1 IFS data with BRMS/400” on page 146, for information on how you can save IFS under V3R1. You can create your own backup list to customize how you want to save the IFS directories. Once you have created your own backup list name and have added the entry of what you want to save in the list that you have created, you can use the list in a backup control group to save the IFS directories. A good example here is that you may want to create two backup lists: one that indicates a save when the Integrated PC Server is varied on and another list that indicates when the Integrated PC Server is varied off. We briefly address these considerations in 6.2.6, “Integrated PC Server on or off?” on page 134. In the example that follows, a backup list called FSOFF (backup list to save the /QFPNWSSTG storage space for LAN Server/400 when the Integrated PC Server is varied off) is added using option 1 (Add) from the list management function using the WRKLBRM command. This backup list shown in Figure 94 that we are creating is exactly the same as LINKLIST that BRMS gives you. Figure 94. Work with Lists List entries are added to the list using option 2 (Change) from the Work with Lists display to include the contents of the IFS directories that need to be saved or omitted. This list entry maps to the SAV command when the backup control group is run. Figure 95 shows the settings that we use for the FSOFF link list. Notice that in addition to the QSYS.LIB and the QDLS file systems, we also omit the /QLANSrv file system from the save since we are only interested in saving the entire server storage space in /QFPNWSSTG. Work with Lists SYS400A Position to . . . . . . Starting characters Type options, press Enter. 
1=Add 2=Change 3=Copy 4=Remove 5=Display 6=Print Opt List Name Use Type Text 1 FSOFF *BKU *LNK _ ADSPLF *BKU *SPL Amit's Spool File backup list _ DAILYLNK *BKU *LNK Daily SAVES /QLANSrv/.../BRMS _ FLR *BKU *FLR Folder Storry _ ITSCID38 *BKU *FLR Folder Rhahn Chapter 6. Saving and restoring the integrated file system 139 Figure 95. Change Link List You can now add the backup list entries as a backup item in a backup control group. The BRMS/400 default backup group *BKUGRP contains the default list item LINKLIST. In our example, we add the FSOFF list item to the ITSOBKUP backup control group (Figure 96). Figure 96. Example of an *LNK list in a control group You can create similar backup link lists to save the file and directory information in /QLANSrv in the same manner that we created a separate backup link list to save the storage spaces only when the Integrated PC Server is varied off. In this case, you can omit the /QFPNWSSTG directory from your save since you cannot save this when the Integrated PC Server is varied on. The advantage of creating two separate backup link lists is to eliminate information errors from the job log indicating that the “object is in use” during your save operation. Change Link List (CHGLNKLBRM) Type choices, press Enter. List . . . . . . . . . . . . . . > FSOFF Character value Objects: _ Name . . . . . . . . . . . . . > '/*' Include or omit . . . . . . . *INCLUDE *INCLUDE, *OMIT _ Name . . . . . . . . . . . . . > '/qdls' Include or omit . . . . . . . > *OMIT *INCLUDE, *OMIT _ Name . . . . . . . . . . . . . > '/qsys.lib' Include or omit . . . . . . . > *OMIT *INCLUDE, *OMIT Name . . . . . . . . . . . . . > '/qlansrv' Include or omit . . . . . . . > *OMIT *INCLUDE, *OMIT + for more values _ Directory subtree . . . . . . . *ALL *ALL, *DIR, *OBJ Edit Backup Control Group Entries SYS400A Group . . . . . . . . . . : ITSOBKUP Default activity . . . . . *BKUPCY Text . . . . . . . . . . . ITSO Backup Control Group Type information, press Enter. Weekly Retain Save SWA Backup List Activity Object While Message Seq Items Type SMTWTFS Detail Active Queue _10 *SAVSECDTA ____ FIIIIII *NO _20 *ALLUSR ____ FIIIIII *NO *NO _30 *ALLDLO ____ FIIIIII *NO *NO _40 FSOFF *LNK F______ *NO *NO _50 *EXIT ____ *DFTACT 140 Backup Recovery and Media Services for OS/400 With V3R7 of OS/400, the default backup control group now contains a new keyword, *LINK, instead of LINKLIST for the backup items. By default, the backup control group *BKUGRP saves all of the IFS directories except for QSYS.LIB and QDLS file systems. Figure 97 shows an example of the backup control group in V3R7. Figure 97. Example of the *LINK list in the V3R7 control group 6.4.2 Managing IFS saves with BRMS/400 In the previous sections, we discussed how you can set up backup link lists for saving IFS information using BRMS/400 including considerations for varying on or varying off the Integrated PC Server. This section looks at the information that you should look for to ensure that your saves are complete. It also explains how you can use the Work with Link Information (WRKLNKBRM) command to restore files and directories from BRMS/400 saves. Once your save has completed, we strongly recommend that you check your job log and use the DSPLOGBRM command to ensure that the save has completed normally, and most importantly, that you do not have any authority problems. You should use the DSPLOGBRM command, the WRKMEDIBRM command, the WRKMEDBRM command, and the WRKLNKBRM command to verify your save. 
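To make the mapping between a link list and the SAV command more concrete, the FSOFF list entries shown in Figure 95 correspond roughly to a save of the following form (the tape device name is an assumption; BRMS/400 builds the actual command for you when the control group runs):
SAV DEV('/QSYS.LIB/TAP01.DEVD') OBJ(('/*') ('/QDLS' *OMIT) ('/QSYS.LIB' *OMIT) ('/QLANSrv' *OMIT))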
You can find some information in the QSYSOPR message queue or in the job log. Figure 98 shows an example of the DSPLOGBRM command output confirming that the save operation completed successfully. Edit Backup Control Group Entries SYS400B Group . . . . . . . . . . : *BKUGRP Default activity . . . . . *BKUPCY Text . . . . . . . . . . . Entry created by BRM configuration Type information, press Enter. ........................................................ Backup : Backup Items - Help : Seq Items : : : o *ASPnn - save an ASP : 10 *SAVSECDTA : o *IBM - save all IBM libraries : 20 *SAVCFG : o *LINK - save all integrated file system : 30 *ALLUSR : libraries : 40 *ALLDLO : o *QHST - save history information : 50 *LINK : o *SAVCAL - save calendar information : 60 *EXIT : o *SAVCFG - saves configurations : : o *SAVSECDTA - save security data : : More... : : F2=Extended help F11=Search Index F12=Cancel : F3=Exit : F20=Enlarge F24=More keys : F11=Display exits : : :........................................................: Chapter 6. Saving and restoring the integrated file system 141 Figure 98. Display BRM Log Information Figure 99 shows an example of the WRKMEDIBRM command. Figure 99. Work with Media Information You can see the saved information using the WRKLNKBRM command (Figure 100). Figure 100. Work with Link Information 6/18/00 Display BRM Log Information SYS400B 17:03:21 Position to . . . . 6/12/00 Begin processing for control group SAVSTGS type *BKU. Devices TAPLIB01 will be used for control group SAVSTGS type *BKU. Interactive users are allowed to stay active. Starting save of list LANSTGS to devices TAPLIB01. 3747 objects saved. Starting save of BRM media information at level *LIB to device TAPLIB01. 10 objects saved from library QUSRBRM. Save of BRM media information at level *LIB complete. SAVSTGS *BKU 0020 *EXIT SNDMSG MSG('SAVE for FULL-STORAGE-SPACE ENDED') TOUSR Control group SAVSTGS type *BKU processing is complete. Last run date for BRM maintenance was 06/06/00. System start up program executed. Work with Media Information SYS400B Position to Date . . . . . Type options, press Enter. 2=Change 4=Remove 5=Display 6=Work with media 7=Restore 9=Work with saved objects Saved Save Volume File Expiration Opt Item Date Time Type Serial Seq Date QDSNX 6/14/00 17:51:35 *FULL TUV708 116 7/19/00 QPFRDATA 6/14/00 17:51:35 *FULL TUV708 117 7/19/00 QS36F 6/14/00 17:51:35 *FULL TUV708 118 7/19/00 QUSRINFSKR 6/14/00 17:51:35 *FULL TUV708 119 7/19/00 LINKLIST 6/14/00 17:52:55 *FULL TUV708 123 7/19/00 QUSRBRM 6/14/00 18:02:57 *QBRM TUV708 124 7/19/00 Work with Link Information SYS400B 06/18/00 17:00:22 Type options, press Enter. 4=Remove 9=Work with directory information Opt Directory 9 /QFPNWSSTG /QFPNWSSTG/DRIVEK /QFPNWSSTG/DRIVEL /QLANSrv /QLANSrv/NWS /QLANSrv/NWS/RCHPID 142 Backup Recovery and Media Services for OS/400 You can enter 9 in the Opt column for a directory path on the Work with Directory Information display to see the directory information, the date, time, media volume, and the number of objects that were saved in a particular directory as shown in Figure 101. Figure 101. Work with Directory Information 6.5 Restoring IFS directories with BRMS/400 Security and authority to files and directories are as important to the restore operation as it was when saving the IFS information. If you want to restore an object from /QLANSrv, you should have the authority for this object from the LAN server's point-of-view. 
If you do not have enough authorities, you receive the message CPFA09C - Not authorized to object. Your restore operation fails with the message CPF3823 - No objects saved or restored. For additional information on security considerations for IFS, see LAN Server/400 Administration (part of the IBM Online Library SK2T-2171). 6.5.1 Restoring objects to /QLANSrv with BRMS/400 Before you can restore individual files or directories to /QLANSrv, you must ensure that the Integrated PC Server is in the varied on status or at least in a restricted state. You can restore objects using either the WRKLNKBRM command or the WRKMEDIBRM command. In the example shown in Figure 102, we want to restore (from the BRMS/400 link list) using the command: /QLANSrv/NWS/RCHPID/DSK/K/edel_k/BRMS Work with Directory Information SYS400B 06/18/00 17:02:51 Directory . . . . : /QFPNWSSTG Type options, press Enter. 4=Remove 5=Display 7=Restore 9=Work with objects Date Time Save Volume Expiration Objects Not Opt Saved Saved Type Serial Date Saved Saved 06/12/00 10:37:41 *FULL ABC592 07/17/00 2 0 06/12/00 11:54:32 *FULL ABC447 07/17/00 2 0 06/12/00 16:18:42 *FULL ABC877 07/17/00 2 0 06/12/00 17:04:34 *FULL QRS188 07/17/00 0 2 Chapter 6. Saving and restoring the integrated file system 143 Figure 102. Work with Link Information Figure 103 identifies the versions of the saves that BRMS/400 is aware of for the directory that we are planning to restore. If you are not planning to restore the entire directory, you can continue to “drill down” to the next level of information. Figure 103. Work with Directory Information You can now work with the objects that were saved and decide on which ones you want to restore from the list as shown in Figure 104 on page 144. Work with Link Information SYS400B 06/18/00 18:02:08 Type options, press Enter. 4=Remove 9=Work with directory information Opt Directory /QLANSrv/NWS/RCHPID/DSK/K/edel_k /QLANSrv/NWS/RCHPID/DSK/K/edel_k/ADSMSERV /QLANSrv/NWS/RCHPID/DSK/K/edel_k/ADSMSERV/DLL /QLANSrv/NWS/RCHPID/DSK/K/edel_k/ADSMSERV/DOC 9 /QLANSrv/NWS/RCHPID/DSK/K/edel_k/BRMS /QLANSrv/NWS/RCHPID/DSK/K/edel_k/PMSX The WRKLNKBRM command provides the ability to restore a single directory or multiple directories within the path. All of the directories are displayed in a hierarchy, and each of these directory paths can be restored individually. This is a different view to what you get using the native Work with Links (WRKLNK) command, where you have to select options to go to the next level in the hierarchy. Note Work with Directory Information SYS400B 06/18/00 18:02:28 Directory . . . . : /QLANSrv/NWS/RCHPID/DSK/K/edel_k/BRMS Type options, press Enter. 4=Remove 5=Display 7=Restore 9=Work with objects Date Time Save Volume Expiration Objects Not Opt Saved Saved Type Serial Date Saved Saved 06/11/00 23:24:24 *FULL DD0376 07/16/00 11 0 06/12/00 17:04:34 *FULL QRS188 07/17/00 11 0 9 06/18/00 17:56:27 *FULL ABC130 07/23/00 11 0 144 Backup Recovery and Media Services for OS/400 Figure 104. Work with Objects Our example shows the display in Figure 104 for information only. We restore the entire directory from the Work with Directory Information display using the latest version (volume ABC130). See Figure 105 and Figure 106. Figure 105. Select Recovery Items Figure 106. Additional Message Information As you can see, restoring the IFS information through BRMS/400 is relatively easier than restoring the same information using the AS/400 system RST command interface. 
With the RST command, you are required to type various command parameters correctly, along with the directory syntax, which often leads to several attempts before the restore function will work for you. For additional information on the SAV command and the RST command, see Backup and Recovery - Basic, SC41-4304. Work with Objects SYS400B 06/18/00 18:02:34 Directory . . . . : /QLANSrv/NWS/RCHPID/DSK/K/edel_k/BRMS Saved date/time . : 06/18/00 17:56:27 Type options, press Enter. 4=Remove 5=Display 7=Restore Volume Opt Object Serial Size ARC.SH ABC130 263213 BACKUP.SH ABC130 220739 BRM.EXE ABC130 459040 BRM.SH ABC130 688769 COST.SH ABC130 53792 Select Recovery Items SYS400B Type options, press Enter. Press F16 to select all. 1=Select 4=Remove 5=Display 7=Specify object Saved Save Volume File Expiration Objects Opt Item Date Time Type Serial Seq Date Saved 1 DAILYLNK 6/18/00 17:56:27 *FULL ABC130 1 7/23/00 11 Additional Message Information Message ID . . . . . . : CPC370E Severity . . . . . . . : 00 Message type . . . . . : Completion Date sent . . . . . . : 06/18/00 Time sent . . . . . . : 18:06:17 Message . . . . : 11 objects restored. Cause . . . . . : 11 objects were restored from ABC130 sequence number 1 at 06/18/00 18:05:32. The restore operation ended on volume ABC130. Chapter 6. Saving and restoring the integrated file system 145 6.5.2 Restoring a storage space with BRMS/400 As with restoring files and directories, you have to use the WRKLNKBRM command to restore the storage space. Before you can start the restore operation, you must ensure that the Integrated PC Server is varied off. You can also use the WRKMEDIBRM command to restore the storage space if you prefer. In our example, we use the WRKLNKBRM command to restore two storage spaces (DRIVEK and DRIVEL) from the /QFPNWSSTG directory. On the WRKLNKBRM command, enter 9 in the Opt column for the /QFPNWSSTG directory. You see the Work with Directory Information display. Enter 9 in the Opt column for the saved version from which you want to restore your directory. The Work with Objects display is shown in Figure 107 and the Select Recover Items display is shown in Figure 108. Figure 107. Work with Objects Figure 108. Select Recovery Items Select option 7 on the Work with Objects display to restore the drives and the storage spaces in those drives. You can verify if the storage spaces have restored successfully by using the Work with Network Server Storage Spaces (WRKNWSSTG) command (Figure 109 on page 146). Work with Objects SYS400B 06/18/00 16:49:23 Directory . . . . : /QFPNWSSTG Saved date/time . : 06/12/00 10:37:41 Type options, press Enter. 4=Remove 5=Display 7=Restore Volume Opt Object Serial Size 7 DRIVEK ABC592 34816 7 DRIVEL ABC592 29184 Select Recovery Items SYS400B Type options, press Enter. Press F16 to select all. 1=Select 4=Remove 5=Display 7=Specify object Saved Save Volume File Expiration Objects Opt Item Date Time Type Serial Seq Date Saved 1 LANSTGS 6/12/00 10:37:41 *FULL ABC592 1 7/17/00 3747 1 LANSTGS 6/12/00 10:37:41 *FULL ABC592 1 7/17/00 3747 146 Backup Recovery and Media Services for OS/400 Figure 109. Work with Network Server Storage Spaces You now need to link the storage names with appropriate drive letters using the Add Server Storage Link (ADDNWSSTGL) command or by selecting option 10 on the WRKNWSSTG display. You can now vary on the Integrated PC Server. This can take several minutes. 
Once the Integrated PC Server is active, you should check your LAN Server/400 environment with the WRKLNK command and by trying a few options from the NWSADM menu to ensure that everything is working correctly. 6.6 Saving and restoring V3R1 IFS data with BRMS/400 When Client Access for OS/400 is installed on V3R1, the new clients use the new integrated file system. A complete system save is not possible without performing the Save Object (SAV) command. Under V3R1, BRMS/400 does not support the SAV command. To ensure that you have a complete system save, use the following technique: 1. Create a library and a save file: CRTLIB IFSSAVF CRTSAVF IFSSAVF/IFS 2. Specify the following exits in the backup control group to save the IFS data to save file created earlier. Use a BRMS list to save the save file to tape so that you can store both media information, and also save history about the save. Alternatively, you can save the entire IFSSAV library if you do not want to use list entries. The following exit entries in the backup control group save the entire IFS and not just client access data: 10 *EXIT CLRSAVF IFSSAV/IFS 20 *EXIT SAV DEV('QSYS.LIB/IFSSAV.LIB/IFS.FILE') OBJ(('/*') ('QSYS.LIB' *OMIT) ('/QDLS' *OMIT)) 30 IFSSAV To recover the IFS data, add a step where you restore the data from the save files into library IFSSAV after the BRMS/400 recovery is complete. Use the Restore Objects in directories (RST) command as shown in the following example. Before you perform the RST command, ensure that you have varied off the Integrated PC server: RST DEV('/QSYS.LIB/IFSSAV.LIB/IFS.FILE') OBJ(('/*') ('/QSYS.LIB' *OMIT) ('/QDLS' *OMIT)) Work with Network Server Storage Spaces System: SYS400B Type options, press Enter. 1=Create 4=Delete 5=Display 6=Print 10=Add link 11=Remove link Percent Drive Opt Name Used Size Server Letter Text DRIVEK 7 500 500 MB Server RCHPID / DRIVEL 3 500 500 MB Server RCHPID / Chapter 6. Saving and restoring the integrated file system 147 6.6.1 Disaster recovery for LAN Server/400 environment with BRMS/400 Let's discuss the case where you have to recover the entire system, including LAN Server/400 environment. Your first step should be to follow the instructions in the BRMS/400 Recovery Report created after your last save using BRMS/400. 6.6.1.1 Recommendations Use the following process to restore your LAN Server/400 environment with BRMS/400: 1. The configuration objects, licensed programs, and objects in QUSRSYS restore in the normal way through the BRMS recovery commands. 2. During the recovery, let the Integrated PC Server remain in a varied off state. 3. Restore IFS information with default LINKLIST item in backup control group *BKUGRP. 4. After ending all of the restore steps in conjunction with the BRMS/400 Recovery Report, vary on the Integrated PC Server with your first IPL after the recovery. 5. Check the LAN Server/400 environment and try some options using the GO NWSADM menu. 6. Use the ADDNWSSTGL command to link your storage spaces to drive letters. 7. Vary on the Integrated PC Server again. 8. Use the WRKLNK command to check the status of your data in the /QLANSrv directory. 9. Use the WRKLNKBRM command to restore the latest save of your individual data in /QLANSrv (for example, your daily saves). 
6.7 Save and restore hints Here are some other points that you should be aware of when you develop your save and restore strategy: With V3R1, restoring a configuration object for a network server description fails since the CRTNWSD command creates the device configuration and tries to copy QXZ1/QFPHSYS1 and QXZ1/QFPHSYS3 from QXZ1 to QUSRSYS as C: drive and E: drive. Since QXZ1 is not there during the restore operation, the RSTCFG command fails. This is because the SAVCFG command does not save the contents of C: drive, D: drive, and E: drive. All it does is save the description of the Integrated PC Server (network server description). Informational APAR II088a56 documents this restriction. With V3R6, V3R2, and V3R7, this problem has been circumvented. The RSTCFG command does not fail even when library QXZ1 does not exist. Once the user objects and IBM licensed programs are restored, you can re-run the RSTCFG command to restore the network server description configuration. Hint 148 Backup Recovery and Media Services for OS/400 • Differences exist between saving and restoring the LAN Server for OS/400 and saving and restoring other AS/400 objects. Parts of the LAN Server/400 product are stored in AS/400 objects, and parts are stored in the LAN Server for OS/400. See 6.2.2, “LAN Server/400 structure” on page 125, for an overview of the parts that make up LAN Server/400. • You can ensure that QLANSrv objects are available for saving by placing the AS/400 system in a restricted state. This prevents users from using the Integrated PC Server, but does not vary it off. Place the AS/400 system in a restricted state by ending all subsystems. • Storage spaces are not the same as other AS/400 objects. You must vary off the Integrated PC Server to save storage space objects, even when the AS/400 system is in a restricted state. • The SAV command locks LAN Server for OS/400 objects so that other users cannot write to them while the save operation is in progress. This lock may conflict with client workstations accessing LAN Server/400 files when they are using LAN Requester. • You cannot have objects opened with write access while using Save (SAV). 6.7.1 Save and restore options for LAN Server/400 Save and restore options for LAN Server/400 are outlined in Table 3. Table 3. Summary of save and restore options Objects saved Saved command Integrated PC Server varied on or off Restricted state AS/400 system “yes” or “no” Storage spaces located in /QFPNWSSTG and libraries QUSRSYS and QXZ1, and any national language version of QXZ1 for disaster recovery for the entire system. SAV DEV(‘/QSYS.LIB/TAP01.DEVD’) OBJ((‘/*’) (‘QLANSrv’ *OMIT)) Off yes Storage spaces of a specific network server located in /QFPNWSSTG on the local AS/400 system. SAV DEV(‘/QSYS.LIB/TAP01.DEVD’) OBJ (‘QFPNWSSTG/DISK1) Off N/A Files and directions located in /QLANSrv for disaster recovery and file restoration. SAV DEV(‘/QSYS.LIB/TAP01.DEVD’) OBJ((‘/*’) (‘QFPNWSSTG’ *OMIT)) On Yes Files and directions located in /QLANSrv that were changed or created within a date range saved to a file. Incremental backup. SAV DEV(‘/QSYS.LIBSAVTO.FILE’) OBJ(‘QLANSrv/*’) CHGPERIOD(mm/dd/yy) On Yes Files and directions located in /QLANSrv of a remote system that were changed or created within a date range saved to a file. Incremental backup. SAV DEV(‘/QSYS.LIBSAVTO.FILE’) OBJ(‘QLANSrv/*’) CHGPERIOD(mm/dd/yy SYSTEM(*RMT) On No Specific directories and files on a local system saved to a file. 
SAV DEV(‘/QSYS.LIBSAVTO.FILE’) OBJ(‘QLANSrv/NWS/SRVL40A/DSK/K/ FILE’) On Yes Chapter 6. Saving and restoring the integrated file system 149 Specific directories and files on a remote system saved to a file. SAV DEV(‘/QSYS.LIBSAVTO.FILE’) OBJ(‘QLANSrv/NWS/SRVOS2A/DSK/ D/rfiles’)SYSTEM(*RMT) On No LAN Server/400 licensed program. SAVLIB *NONSYS N/A Yes Storage space containing the DCDB; the name of the object is the name of the server followed by a “3”. SAVOBJ OBJ(SRVLS40A3) LIB(QUSRSYS) OBJTYPE(*SVRSTG) Off N/A Objects saved Saved command Integrated PC Server varied on or off Restricted state AS/400 system “yes” or “no” 150 Backup Recovery and Media Services for OS/400 © Copyright IBM Corp. 1997, 2001 151 Chapter 7. AS/400 hardware support for automated tape libraries This chapter discusses the various types of tape automation that are supported when you use your AS/400 system with BRMS/400. Originally, the only media library available was the 3494 Automated Tape Library Data Server. However, beginning with V3R1, other library devices, such as the 9427 tape library, 3590 tape library, and most recently, the 3570 Magstar MP tape subsystem, can be attached. Although the libraries attach to both CISC and RISC systems, only RISC systems fully implement the media library. Functional differences may, therefore, exist on the same library that is being shared by both CISC and RISC systems. We attempt to outline these differences in this chapter and subsequent chapters. Not all automated tape libraries can be attached to all models of the AS/400 system or used as alternate IPL devices. The announcement letters or your IBM Marketing Representative can provide you the required information. We recommend that you always refer to Automated Tape Library Planning and Management, SC41-5309, to obtain latest hardware and software configuration information relevant to the OS/400 release on which you are operating. This book contains updates to the library management functions by their releases and outlines the functional differences between the CISC and RISC OS/400 releases. 7.1 3494 Automated Tape Library Data Server The 3494 Automated Tape Library Data Server provides automated tape solutions for the AS/400 system user as well as for users of the ES/9000, RISC System/6000, and some non-IBM systems. The 3494 supports the 3490E models C1A and C2A, and the 3590 B1A tape drives. For additional information on the control unit models and storage unit models, refer to the IBM announcement letters or the iSeries Handbook, GA19-5486. 7.1.1 3494 Automated Tape Library Data Server system attachment The 3494 is attached to the AS/400 system with one connection for the library manager and one or more connections for the tape drives. The library manager connection uses a communications line that can be either EIA-232 or LAN. One communications line on the AS/400 system is required for each 3494. The tape drive connection can be a S/370 parallel channel (Feature #2644) for 3490E or SCSI attachment for 3590. The Electronic Communications Support (ECS) adapter on the AS/400 system should not be used to support the 3494. It is reserved for obtaining electronic customer support. 7.1.2 Connection considerations When calculating the maximum interface distance between the 3494 Automated Tape Library Data Server and the AS/400 system, you must consider both connections. 
152 Backup Recovery and Media Services for OS/400 7.1.2.1 RS232 The RS232 connection allows the AS/400 system to talk to the 3494 through the Library Manager PC that comes with the 3494. The 3494 EIA-232 communications cable (Feature #5211) has a limit of 50 feet, unless modems are used to boost the signals. Feature #5213 provides a 400-foot cable for the RS232 attachment. The 3494 may be shared between eight AS/400 systems attached through the Library Manager using RS232. An expansion attachment card (Feature #5229) is required to support the fifth through eighth RS232 connections and the fifth through eighth tape control units. 7.1.2.2 LAN The 3494 can be attached to the AS/400 systems through the Library Manager using either a token-ring LAN (uses Feature #5219) or an Ethernet LAN (uses Feature #5220). Both TCP/IP and APPC connections are supported by the 3494, but only APPC is supported by the AS/400 system. The 3494 LAN communications cable limit is determined by the type of LAN implemented. Typical LAN technology supports connections at a distance of up to 1000 meters. If attaching through LAN, the 3494 can be shared between 16 AS/400 systems. Appendix C, “Example LAN configuration for 3494” on page 303, provides an example line, controller, and device configuration for attaching the 3494 to the AS/400 through a token-ring. The Tape Control Unit Expansion (feature #5228) expands the number of tape control units that can be attached to the Library Manager. One feature converts four RS232 host processor connections into four tape control unit connections on either the base Library Manager or the expansion attachment card (feature #5229). The following combination is possible: Available Available Number RS-232 ports Tape of #5228 (for direct Control Unit Features host attach) Connections Additional Features Required -------- ------------ ------------ ------------------------------- 0 4 4 None 0 8 8 #5229 Expansion Card 1 0 8 #5219 (#5220) LAN adaptor 1 4 12 #5229 Expansion Card 2 0 16 #5229 AND #5219 (#5220) The Remote Console Feature (Feature #5226) provides the capability of controlling and monitoring the status of up to eight 3494s from a remote location. The console can be password protected. 7.1.3 3494 Automated Tape Library Data Server: Multiple systems The 3494 can be shared by AS/400 systems, RS/6000 systems, and ES/9000 systems (a total of 16 systems). Other non-AS/400 systems share the library by partitioning the 3494 by assigning each cartridge to a specific category. The categories used by the AS/400 system are *SHARE400 and *NOSHARE, which equate to *YES and *NO on the Shared Media parameter of Media Class. These cartridges can only be used by AS/400 systems and cannot be accessed by non-AS/400 systems that are sharing the 3494. Chapter 7. AS/400 hardware support for automated tape libraries 153 The common media inventory function of BRMS/400 manages the *NOSHARE cartridges and the sharing of *SHARE400 cartridges between any of the attached AS/400 systems. 7.1.4 Alternate IPL support for the 3494 The 3494 device can be used as an alternate IPL (Alt IPL) device on the AS/400 system. This task requires that correct device addresses are set. Your IBM Service Engineer sets the correct hardware configuration settings on your 3494 and the AS/400 system. 7.2 9427 tape library The 9427 tape library is a 20-cartridge tape library based on the 8mm helical scan technology. It is available in two models (9427 tape library models 210 and 211). 
Tape library commands and support are provided by OS/400 from V3R1. BRMS/400 is the preferred application program for assisting with tape library management and unattended operations.

Note: *NOSHARE cartridges can be changed only by the owning system and can be seen with the WRKMLMBRM command only by the owning system.

Note: With CISC AS/400 systems, you cannot use the 3590 tape as your preferred IBM distribution medium for obtaining PTFs, licensed programs, or an OS/400 release. However, you can use the 3590 device as your Alt IPL device by ordering a Request for Price Quotation (RPQ). See 7.3.1, "Alternate IPL for the 3590" on page 154, for more information on the RPQ. For RISC AS/400 systems, the 3590 is fully supported as your preferred distribution medium and as an Alt IPL device.

7.2.1 Alternate IPL support for the 9427
The 9427 tape library can be used as an alternate IPL (Alt IPL) device on the AS/400 system. This task requires that the correct device addresses are set. Your IBM Service Engineer sets the correct hardware configuration settings on your 9427 and the AS/400 system. When it is used as an alternate IPL device, the 9427 tape library must be put in sequential mode, where it acts as an automated cartridge loader. See IBM 9427 210 and 211 Operator's Guide, SA26-7108, for information on how to set the 9427 tape library in sequential mode.

Note: The cartridges are loaded from the bottom up. Tape cartridges must be mounted in the 9427 tape library magazine in the correct order. The first data cartridge of the installation must be placed in the first slot for the 9427 tape library drive.

7.3 3590 with automated cartridge facility
The 3590 tape drive uses high-capacity, 128-track, bi-directional recording and can store up to 10 GB per cartridge. With the Lempel Ziv (LZ1) data compaction algorithm, the capacity can increase to as much as 30 GB on a single cartridge, depending on the type of data you have. The 3590 model B11 is a rack-mounted tape device that includes a 10-cartridge automatic cartridge facility, which can be used in random mode as a mini-library providing up to 300 GB of unattended storage. The 3590 with the automated cartridge facility operates as a Random Access Cartridge Loader (RACL) with the following features:
• Up to 10 cartridges in a removable magazine
• One 3590 tape device
• Two host attachments
The 3590 model B1A device is used to attach the drive in a 3494.

7.3.1 Alternate IPL for the 3590
The 3590 can be used as an alternate IPL (Alt IPL) device on the AS/400 system. Your IBM Service Engineer sets the correct hardware configuration settings on your 3590 and the AS/400 system.

Note: For CISC systems, you must order RPQ 843860. This RPQ provides instructions on how to set up your 3590 as an alternate IPL device and provides the Model Unique Licensed Internal Code (MULIC) tape and the Feature Unique Licensed Internal Code (FULIC) tape. The MULIC tape is for AS/400 systems (Models B through F). The FULIC tape is for Advanced Series AS/400 systems (Models 2xx through 3xx). You do not always need the MULIC or FULIC tapes; see Backup and Recovery - Advanced, SC41-4305, for additional information. You also cannot use the 3590 device as your preferred IBM software distribution media for obtaining PTFs, software releases, or licensed programs. AS/400 systems using PowerPC technology (RISC) do not require MULIC or FULIC tapes, and you can use the 3590 as your preferred IBM software distribution media.
7.4 3570 Magstar MP tape library
The IBM 3570 Magstar MP tape library model B01/B11 is based on the IBM Magstar technology used for the 3590 tape device. The 3570 Magstar MP tape library is designed to give midrange systems a tape solution at a lower price than the 3590 tape device. The device performance is 2.2 MB per second native and up to 6.6 MB per second with LZ1 compaction. The 3570 Magstar MP tape subsystem uses a new and unique data cartridge that is approximately half the size of the 3480/3490/3590 cartridges. The capacity is 5 GB per cartridge and up to 15 GB per cartridge with LZ1 compaction. The 3570 Magstar MP tape library is designed to operate with two 10-cartridge magazines, providing random access to 100 GB of data (up to 300 GB with compaction). In addition to the data cartridges, a cleaner cartridge is stored in the subsystem and is available for automatic cleaning of the tape device. The 3570 Magstar MP tape library models use a cassette loading and transport mechanism (priority slot) to automatically transport the tape cassettes to and from the cassette magazines and the tape drive.

7.4.1 Managing cassettes and magazines for the 3570
When the 3570 Magstar MP tape library is attached to a RISC AS/400 system in random mode, you can insert a cassette in the import/export slot; the transport loader takes the cassette and inserts it in the tape drive when a command is invoked on the AS/400 system to use the cassette. When the tape drive has finished processing the cassette, the transport loader puts the cassette into an available slot in the magazine, depending on the cartridge category. When you eject the cassette, it ejects from the slot, and BRMS/400 movement performs the eject through the import/export slot to complete the movement. You must ensure that an operator is available to take out the cassettes as they are being ejected. If you are running in unattended mode and more than one cassette must be ejected, you receive a message on the QSYSOPR message queue indicating that the storage slot is full. All of the remaining cassette movements are queued until the slot is freed, and the job performing the moves does not complete until all of the required cartridges are ejected.
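On a RISC system running the library in random mode, the ejects described above are driven by BRMS/400 media movement. A minimal sketch of an attended movement run follows; it assumes that move policies are already defined, and checking QSYSOPR is simply one way to catch the slot-full condition:

  MOVMEDBRM              /* Run scheduled BRMS/400 media movement; ejected cassettes are presented at the import/export slot */
  DSPMSG MSGQ(QSYSOPR)   /* Watch for the message indicating that the import/export slot is full and needs to be emptied     */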
On CISC AS/400 systems, the import/export slot is referred to as the convenience slot (the drive and its documentation refer to this slot as the priority slot). When you insert a cassette in the priority slot, the transport loader takes the cassette and inserts it in the tape drive. When the tape drive has finished processing the cassette, it ejects it back into the same priority slot. The cassette does not use one of the available slots in the magazine. Until you remove or re-insert the cassette, the priority slot cannot be used for other operations. During BRMS/400 media movement, the ejects do not schedule the cassettes to be removed one by one using the priority slot. The cassettes remain in the magazine, and you have to remove them manually by opening the library door. The library always re-inventories the cassettes when the library door is closed.

In both of the preceding cases, the 3570 was set up in random mode. If you use the automatic cartridge loader (ACL) mode, the cassettes inserted using the priority slot or the import/export slot are placed in one of the empty slots in the magazine. You should be in ACL mode when you want to recover from your SAVSYS tapes.

7.4.2 Alternate IPL support for the 3570
The 3570 can be used as an alternate IPL (Alt IPL) device on the AS/400 system. The 3570 should be in automatic cartridge loader (ACL) mode for this. Your IBM Service Engineer sets the correct hardware configuration settings on your 3570 and the AS/400 system.

Note: Alternate IPL support for the 3570 is available on all RISC AS/400 systems and on the Advanced Series CISC AS/400 systems (Models 2xx through 3xx). The support on CISC AS/400 systems is only available through RPQ 843910, which ships IBM service instructions for attaching the 3570 as an alternate IPL device and a FULIC tape. You cannot obtain any IBM software distribution on the 3570 cassette, such as PTFs, OS/400 releases, and licensed programs.

Chapter 8. AS/400 software support for automated tape libraries

This chapter discusses the software support for tape automation on the AS/400 system. The original media library was the 3494. However, beginning with V3R1, support was added to OS/400 for library devices such as the 9427 tape library, the 3590 tape device, and most recently, the 3570 Magstar MP tape subsystem. The AS/400 with 64-bit PowerPC technology (RISC) fully implements the media library, which gives OS/400 greater flexibility to manage resources. However, it introduces some considerations for BRMS/400, especially for managing locations and media policies and for defining backup devices. As always, using the *MEDCLS parameter in BRMS/400 for a backup device is the most flexible approach. Another significant change on RISC is that the Media Library Device Driver (MLDD) code is no longer required for the 3494.

Customer scenarios evolve into different combinations of architecture and function. Attempting to cover each possibility (and to remain current) is not within the scope of this redbook. The following chapters describe the functional areas of media library support. You should refer to Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for each release, and to the most current edition of Automated Tape Library Planning and Management, SC41-5309, for detailed information. There are also BRMS/400 workshops and support from IBM Availability Services personnel. Check with your marketing representative for further information.

8.1 Software support for automated tape libraries
The total AS/400 solution for automated tape libraries is made up of the following components. See Figure 110 on page 158 for how these components interrelate.

BRMS/400: BRMS/400 manages the media.
It submits tape operations to the appropriate interface as a result of a BRMS/400 command or as part of the BRMS/400 control group process.

OS/400 CL commands and APIs: OS/400 manages the tape drives by providing functions such as varying on, varying off, and read and write operations. It also provides the native library XXXTAPCTG commands that are available to drive the media libraries independently of BRMS/400. All BRMS/400 commands are translated into native OS/400 commands internally.

Media Library Device Driver (CISC only): The Media Library Device Driver (MLDD) is only required on AS/400 CISC systems when you have a 3494. The MLDD code translates BRMS/400 and OS/400 command requests to the Library Manager of the 3494. These requests, such as mount volume, are sent over a communications link from an AS/400 system to the 3494. On AS/400 RISC systems, these functions are incorporated into the Licensed Internal Code.
The MLDD software is shipped with the 3494 and is installed using the Restore Licensed Program (RSTLICPGM) command. As a result, two libraries are created: QMLD and QUSRMLD. The QMLD library contains commands and programs. The commands are loaded into QSYS at installation time. The QUSRMLD library contains user system configuration information. In addition to the two libraries, a subsystem, QMLDSBS, is created.
Since the newest modification level of the MLDD and LM software contains all of the known fixes from the previous levels, we highly recommend that you upgrade these products when you upgrade OS/400. To check what fix level you installed for the LM code, go to the Library Manager PC in the 3494 and, on the display, select Help. Then select the About option. This displays your current level of LM software. For information on new releases of the microcode (MLDD and LM), consult with your country hardware second-level support. They can pass the request to the 3494-trained hardware engineers.

Library Manager: The Library Manager (which is effectively a PC) is located inside the 3494. It manages the 3494 inventory and provides the interface to the accessor, or robot, to make it functional. The software code is installed and maintained by IBM Customer Engineers. See 8.4, "Library Manager for the 3494" on page 160, for additional information.

Figure 110 shows the components that are involved.

Figure 110. Overview of the automated tape library components (diagram: CL commands and the backup, recovery, archive, retrieve, and media management commands flow through BRMS/400, OS/400, MSE, the Licensed Internal Code, and, on CISC, MLDD, then through IOPs and a communications line or LAN to the library's drives, cartridge store, vision system, accessor arm, media inventory, control panel, and Library Manager)

8.2 AS/400 with IMPI technology (CISC)
V3R1 saw the introduction of Media and Storage Extensions (5763-SS1, Option 18). In this release, new media library devices were also added to OS/400. With the new media library device, a number of new OS/400 commands were added to work with cartridges and categories for tape automation. The tape commands are issued to the tape devices (9427 tape library, 3590 with automated cartridge facility, and 3570 Magstar MP tape subsystem). OS/400 continues to use MLDD for the 3494 Automated Tape Library Data Server (Figure 111).

Figure 111. V3R1 or V3R2: OS/400 splits the command into MOUNT and SAVLIB
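As a hedged illustration of the split shown in Figure 111 (the library, device, and volume names are samples, and BRMS/400 performs the underlying steps internally rather than issuing literal commands):

  SAVLIBBRM LIB(LIB1) DEV(TAP01) VOL(VOL01)   /* BRMS/400 save request for library LIB1 on volume VOL01                */
  /* On CISC, MLDD sends a mount request for VOL01 to the Library Manager, and the save then runs as the equivalent of: */
  SAVLIB LIB(LIB1) DEV(TAP01) VOL(VOL01)      /* Native save written to the cartridge that is now mounted in TAP01      */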
V3R2 works in the same way as V3R1 in that tape commands are issued to the tape device and OS/400 uses MLDD for the 3494. Another difference with CISC AS/400 systems is that you have to use the Display Tape Status (DSPTAPSTS) command or the Work with Library Media using BRM (WRKMLMBRM) command to manage your tape libraries. You do not have a single command interface, such as the Work with Media Library Status (WRKMLBSTS) command, that is available with RISC AS/400 systems.

8.3 AS/400 with 64-bit PowerPC technology (RISC)
V3R6 represents the total integration of tape automation for the AS/400 system. Media library devices are now fully functional devices with configurations and resources. All OS/400 commands for tape and cartridges now use the media library device. The complexity of the 3494 solution has been reduced: the 3494 Media Library Device Driver (MLDD) application and the corresponding subsystems are no longer required because the functions are now handled by OS/400 (Figure 112 on page 160). These enhancements allow for multi-user environments and restricted state tape processing. New commands, such as WRKMLBSTS and Configure Media Library Device Description (CFGDEVMLB), are available to support your tape libraries.

Figure 112. V3R6 LIC processes the MOUNT command instead of MLDD

V3R7 provides the same functions as V3R6. There are some ease-of-use enhancements to the WRKMLBSTS command.

8.4 Library Manager for the 3494
The Library Manager (which is effectively a PC) is integrated in the 3494 Automated Tape Library Data Server. In normal use, the Library Manager is transparent to users. However, it is possible to operate the 3494 to mount and demount cartridges without OS/400 commands by using the stand-alone device mode. You can select this mode from the Library Manager display at the back of the 3494. The following sections document how you can mount and demount cartridges. You must also be in stand-alone mode when you are recovering from your SAVSYS tapes.

Note: When you use the stand-alone mode of operation with AS/400 RISC systems, you have to use the tape device description (TAPxx), and not the tape library device description (TAPLIBxx or TAPMLBxx), to address the drive. Within BRMS/400, you can use the WRKMLBBRM command to put the device on hold if you want to use the library in stand-alone mode.

8.4.1 Mounting a single volume from the 3494
This section shows you how to mount a single volume from the 3494. Follow these steps:
1. To select the manual mount and demount of the cartridges, from the menu bar, select Commands->Stand-alone device->Setup stand-alone device as shown in Figure 113. The Setup Stand-alone Device display appears as shown in Figure 114.

Figure 113. Commands pull-down window

Figure 114. Setup Stand-alone Device window

2. On this window, choose one of the following options:
• Click the arrow to the right of the box, and select the 3-digit device name.
• Enter the 3-digit device name.
3. Select the Mount a single volume operation.
4.
In the Volser field, enter the volume ID of the SAVSYS tape that contains the Licensed Internal Code tape. Once the mount operation is requested, the Stand-alone Device Status window is shown. This window displays the library manager activity, as shown in the example in Figure 115 on page 162. 162 Backup Recovery and Media Services for OS/400 Figure 115. Mount complete window 8.4.2 Demounting a single volume from the 3494 When a new cartridge is required, you must demount the cartridge, using the Demount a single volume command, or you can select the Reset stand-alone device operation shown in Figure 120 on page 164. Specify the tape drive by number. Once the demount operation is finished, you see the Library Manager window shown in Figure 116. Figure 116. Stand-alone Device Status window After the demount completes, you can select the mount of another cartridge using the stand-alone mode shown in Figure 114 on page 161. 8.4.3 Mounting a cartridge from the convenience I/O station To mount a cartridge from the convenience I/O station of the 3494 and move it back to the convenience station after use, you have to select the transient mode. This mode is required if the cartridges you want to mount and process are not labeled (for example, MULIC, PTF, or software upgrade cartridges) or you want to use labeled cartridges, but they should not be placed inside the 3494 after use. 1. From the menu bar, select Commands->Stand-alone device->Setup stand-alone device as shown in Figure 113 on page 161. Chapter 8. AS/400 software support for automated tape libraries 163 Figure 117. Setup transient mode on the Setup Stand-alone Device window 2. From this window (Figure 117), perform either of the following options: • Enter the 3-digit device name as known by the library manager. • Click the arrow to the right of the box, and select the 3-digit device name by clicking it. 3. Select the Mount from Input Station operation. 4. Click the OK button. The Mount from Input Station window in Figure 118 shows where you find the instructions on how to continue the transient mode. Figure 118. Mount from Input Station window Once the mount operation is requested, the Stand-alone Device Status window is shown. This window displays the library manager activity as shown in Figure 119. 164 Backup Recovery and Media Services for OS/400 Figure 119. Mount complete window Note: The Volume ID of the cartridge is not displayed in the window. 8.4.4 Resetting the stand-alone mode When you are finished using the stand-alone mode, reset it. For the 3494 Automated Tape Library Data Server, select the Reset stand-alone device operation shown in Figure 120. Specify the tape unit by number. Figure 120. Reset Stand-alone Device window The Reset stand-alone device operation unloads the mounted cartridge from the selected device and makes the device ready for tape automation. © Copyright IBM Corp. 1997, 2001 165 Chapter 9. Implementing automated tape libraries This chapter discusses some of the actions required to set up automated tape libraries in BRMS/400. It describes the OS/400 commands that relate to these libraries and the management of cartridges within them. 9.1 Configuring the 3494 Automated Tape Library Data Server for CISC The 3494 differs from other libraries because it also requires separate communications. To configure communications, use the Add Media Library Device (ADDMLD) command. Three jobs control communications to the 3494: • QMLMAIN: Converts AS/400 system MLDD commands to library manager commands. 
• QMLCOM: Contains the communications link program. • QMLTRACE: Logs the 3494 Automated Tape Library Data Server trace details. Figure 121 shows the Work with Active Jobs display with the 3494 Automated Tape Library Data Server communications jobs running in the QMLDSBS subsystem. Figure 121. Work with Active Jobs The communications line is varied on, and these jobs are started by issuing the INZMLD command. You can use the End Media Library Device (ENDMLD) command to end the jobs if you need to perform a problem analysis or error recovery. You should use the INZMLD command again to restart them. After an IPL, or when the system is returned from a restricted state by using the STRSBS QCTL command, this command is automatically re-issued in an autostart job entry that runs when the QMLDSBS subsystem is started. The MLDD commands operate differently than most other AS/400 commands. They are asynchronous in nature in that you enter the command, and a completion message is sent to a message queue after the requested process has Work with Active Jobs MM/DD/YY 00:00:00 CPU %: 5.1 Elapsed time: 00:01:57 Active jobs: 40 Type options, press Enter. 2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message 8=Work with spooled files 13=Disconnect ... Opt Subsystem/Job User Type CPU % Function Status QMLDSBS QSYS SBS .0 DEQW QMLMAIN QPGMR BCH .0 PGM-QMLMPMAIN DEQW QMLCOM QPGMR BCH .0 PGM-QMLRSMAIN DEQW QMLTRACE QPGMR BCH .0 PGM-QMLTRMAINR DEQW 166 Backup Recovery and Media Services for OS/400 completed. Most MLDD commands have a message queue name field for you to specify where the completion message is to be sent. When using BRMS/400 with the 3494, BRMS/400 calls OS/400 commands (that drive all ATLs). Where appropriate, some of these MLDD commands are called by OS/400. Typically, operators and users never have to use the MLDD commands explicitly. When BRMS/400 performs a SAVSYS operation on CISC AS/400 system, it premounts a cartridge on a 3494 before it ends the MLDD subsystem. This allows the SAVSYS operation to continue under a restricted state. You should not end the subsystems to a restricted state using an exit in your backup control group as BRMS/400 automatically premounts one cartridge for you. If your SAVSYS requires more than one cartridge, you must use a different approach. See 9.6, “Restricted state automation for the 3494” on page 189, for a possible solution. For more information on the MLDD commands, see Automated Tape Library Planning and Management, SC41-5309, or IBM 3494 User’s Guide: Media Library Device Driver for the AS/400, GC35-0153. 9.2 Configuring other media library devices for CISC For V3R1 and V3R2, you need to use the command: CRTDEVMLB DEVD(TAPMLB01) TAPDEV(TAPxx) Here, xx is the device name obtained from the WRKCFGSTS *DEV TAP* command. You can also select your own tape library device name instead of using TAPMLB01. Figure 122 shows the Create Device Media Library (CRTDEVMLB) display that appears after you press the Enter key. Figure 122. Create Device Media Library: V3R1 and V3R2 If the 9427 and the 3570 libraries have two tape devices and are attached to a single AS/400 system, do not use the CRTDEVMLB command separately for each tape device, but specify both here. If the 9427 or the 3570 library is being shared between two AS/400 systems, each system must have a media library description with one tape device. Create Device Media Library (CRTDEVMLB) Type choices, press Enter. Device description . . . . . . . DEVD tapmlb01 Tape device . . . . . . . . . . 
TAPDEV tap03 + for more values Chapter 9. Implementing automated tape libraries 167 After you create the media library device description, you must remember to run the INZBRM OPTION(*DEVICE) command on a V3R2 system, or the INZBRM OPTION(*DATA) command on a V3R1 system to add the media devices into BRMS/400. 9.3 Configuring media library devices for RISC In V3R6 and V3R7, the media library device descriptions are fully implemented under OS/400 and are required for all library devices including the 3494 Automated Tape Library Data Server. The media library device descriptions are created automatically if auto-configuration is enabled (*YES). They are created as a TAPMLBxx (TAPLIBxx for earlier versions of V3R6) device description, where xx is the next available device description number. The tape devices within the library are configured as media library resources (MLBRSCs) with resource names TAPxx. In addition to the media library device description with tape resources, tape device descriptions are created for each tape resource. These tape device descriptions are used for stand-alone operations. 9.3.1 Determining resource names Creating the media library device description for all media library devices requires you to know the resource name. This is only required when you have the automatic configuration system value (QAUTOCFG) turned off. The DSPHDWRSC *STG command provides the resource names associated with the tape libraries, controllers, and devices (Figure 123 on page 168). You cannot use the WRKCFGSTS command or the WRKDEV command to display the library device configuration. Instead, you have to use the DSPTAPSTS command to display the library devices and slot information associated with the library device. The CRTDEVMLB command only creates a pseudo device configuration entry internally in OS/400 to support tape libraries on CISC AS/400 systems. Note 3494 media library devices cannot be varied on until the ROBOTDEV (robot device) parameter is updated. This parameter refers to the communications line associated with the library manager PC and only applies to the 3494. See 9.3.3, “Creating a Robot Device Description (ROBOTDEV) for the 3494” on page 170, for details. Note 168 Backup Recovery and Media Services for OS/400 Figure 123. Display Storage Resources Library names are automatically allocated by the system beginning with TAPMLB01 and continuing with TAPMLB02 and so on as shown in Figure 123. You can select option 9 to view resources. When you do, the Display Associated Resources screen appears (Figure 124). Figure 124. Display Associated Resources 9.3.2 Creating media library device descriptions You need to create the media library device description for all the media library devices if QAUTOCFG is turned off. You can use the CRTDEVMLB command as follows: CRTDEVMLB MLB(TAPMLB01) RSRCNAME(TAPMLB01) DEVCLS(*TAP) Figure 125 shows the Create Device Description for Media Library (CRTDEVMLB) command display that is shown after you press the Enter key. Display Storage Resources System: SYSTEM01 Type options, press Enter. 7=Display resource detail 9=Display associated resources Opt Resource Type Status Text _ SI02 2621 Operational Storage Controller _ TAPMLB01 9427 Operational Tape Library _ SI04 2624 Operational Storage Controller _ DC07 6390 Operational Tape Controller _ DC06 6390 Operational Tape Controller _ SI05 2644 Operational Storage Controller 9 TAPMLB02 3494 Operational Tape Library Display Associated Resources System: SYSTEM01 Type options, press Enter. 
5=Display configuration descriptions 7=Display resource detail Opt Resource Type-Model Status Text _ TAPMLB02 3494-010 Operational Tape Library _ TAP07 3490-C2A Operational Tape Unit _ TAP06 3490-C2A Operational Tape Unit Chapter 9. Implementing automated tape libraries 169 Figure 125. Creating a device media library: V3R6 and V3R7 The reverse bold numbers that follow correspond to the reverse bold numbers shown in Figure 125: 1 If the 3494 is auto-configured, it is auto-configured ONLINE(*NO) because you should not attempt to vary it on until the ROBOTDEV parameter is filled in correctly. For all other non-3494 tape libraries, the CRTDEVMLB sets the ONLINE parameter to *YES. For the 3494, you have to use the CFGDEVMLB command to define the communications link. See 9.3.3, “Creating a Robot Device Description (ROBOTDEV) for the 3494” on page 170, for details. 2 On AS/400 RISC systems, you issue such commands as SAVLIB to the media library device. The media library device chooses the tape resource for you if one is available. OS/400 queues requests until an appropriate tape resource becomes available. The default is to wait for one minute, as specified by *SYSGEN in the Maximum device wait time parameter (MAXDEVTIME). If a tape resource is not available, you receive a message on QSYSOPR indicating that the tape resource is not available. The MAXDEVTIME parameter specifies the maximum number of minutes a request will wait for allocation of a tape resource. If the time is reached, a message is sent to QSYSOPR indicating device allocation time-out. If you specify a value other than *SYSGEN, such as 120 minutes, OS/400 will queue your tape resource request until a maximum of 120 minutes before sending a message to QSYSOPR. The help text in V3R6 is misleading for this parameter and suggests (wrongly) that this is the amount of time that the tape remains loaded. See 9.3.5, “Allocating resources” on page 174, for more details on resource allocation. Specifying a value of *SYSGEN means that i1.DFTWAIT, the default wait time of the job attributes, is used instead of a global value for all users using this particular media library device. You can view the Default wait time (DFTWAIT) value by running the Display Job (DSPJOB) command. DFTWAIT time is specified as 30 seconds, but tape management rounds this to the nearest Create Device Desc (Media Lib) (CRTDEVMLB) Type choices, press Enter. Device description . . . . . . . TAPMLB01 Name Device class . . . . . . . . . . *TAP *OPT, *TAP Resource name . . . . . . . . . TAPMLB01 Name, *NONE Device type . . . . . . . . . . *RSRCNAME *RSRCNAME, 3494, 3495, 3590... Online at IPL . . . . . . . . 1 *YES *YES, *NO Unload wait time . . . . . . . *SYSGEN *SYSGEN, 1-120 Maximum device wait time . . . 2 *SYSGEN *SYSGEN, *NOMAX, 1-600 Generate cartridge ids . . . . 3 *VOLID *VOLID, *SYSGEN Robot device description . . . 4 *NONE Name, *NONE Message queue . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Text 'description' . . . . . . . MLB DEVICE DESCRIPTION F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys 170 Backup Recovery and Media Services for OS/400 minute so the minimum wait time is actually one minute. The *SYSGEN value allows batch jobs and interactive jobs to run with a different wait time. You should avoid using a value of *NOMAX for the maximum device wait time parameter. 
3 The Generate cartridge identifiers field is only valid for media library devices that do not have vision systems or barcode readers for reading cartridge labels (for example, the 3590 tape device and the 3570 Magstar MP tape subsystem). When an inventory change is detected and *VOLID is specified, the media library device loads all tape volumes to attempt to read the volume identifiers from the media. This is fast on the 3570 Magstar MP tape subsystem, but it can take approximately 10 minutes for a full library on the 3590 tape device. Non-labeled tapes, blank tapes, cleaning tapes, and error situations result in system-generated cartridge identifiers. The value *SYSGEN means that cartridges are not loaded but are assigned system generated cartridge ID. This can cause confusion with BRMS/400, which manages media using VOLIDs. Therefore, this value is not recommended for use with BRMS/400. The default value of *VOLID is appropriate for BRMS/400. To learn how to set up the tape devices, see 7.3, “3590 with automated cartridge facility” on page 154, for the 3590 with automated cartridge facility, and 7.4, “3570 Magstar MP tape library” on page 155, for the 3570 Magstar MP tape library. 4 The CRTDEVMLB command is used for all media library devices, but the Robot Device Description parameter only applies to the 3494. 9.3.3 Creating a Robot Device Description (ROBOTDEV) for the 3494 The 3494 requires a communications interface for the library functions. The communication interface can either be RS232 or LAN. Before the 3494 media library device can be varied on, the communication interface needs to be specified in the ROBOTDEV parameter in the media library device description. The Configure Device Media Library (CFGDEVMLB) command connects the media library device description with the communication interface for media library devices. The CFGDEVMLB command configures the necessary communication information based on the input to the command, updates the necessary information in the device description specified, and attempts to vary on the media library device description. The CFGDEVMLB command must be issued once for each media library device description that uses a communication interface, although one line, controller, and device description is actually used for each Library Manager PC. 9.3.3.1 Creating an RS232 configuration To configure the ROBOTDEV parameter for a media library device using an RS232 interface, use the following command as an example: CFGDEVMLB DEV(TAPLIB01) ADPTTYPE(*RS232) RSRCNAME(CMN01) This creates the line, controller, and device description under the line resource called CMN01. Figure 126 shows the Configure Device Media Library Chapter 9. Implementing automated tape libraries 171 (CFGDEVMLB) display. To determine the correct resource name that should be used for this command, use the command: WRKHDWRSC TYPE (*CMN) Figure 126. Configure Device Media Library - RS232 9.3.3.2 Creating the LAN configuration To attach your 3494 to the AS/400 system through LAN, you need to perform the following steps in order: 1. On the AS/400 system, create a LAN line description. In our example, we use a token-ring as our LAN interface, and it is called TRN3494. See Appendix A, “Summary of changes” on page 289, for a sample description. You do not need to create the APPC controller or the APPC device descriptions. These are created automatically by the Configure Device Media Library (CFGDEVMLB) command. 2. If you have already set up your 3494 Library Manager, go to step 3 on page 172. 
If you have not set up your 3494 Library manager, perform the following tasks: a. On the AS/400, enter: DSPLANMLB LIND(TRN3494) You will see a display similar to the example in Figure 127 on page 172. Configure Device Media Library (CFGDEVMLB) Type choices, press Enter. Library device . . . . . . . . . > TAPLIB01 Name, F4 for list Adapter type . . . . . . . . . . *RS232 *RS232, *LAN Communication resource name . . > CMN01 Name Bottom F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys The RS232 line, controller, and device descriptions are created with ONLINE(*NO). Do not vary them on. They are varied on as needed internally by OS/400 when the tape media library is varied on. Note 172 Backup Recovery and Media Services for OS/400 Figure 127. Display LAN Media Library Information b. Write down the following values from the Display LAN Media Library Information. Write down the values for the following reverse bold numbers that correspond to the reverse bold numbers shown in Figure 127: • 1 Host network ID . . . APPN • 2 Host location name . . SYSTEM01 • 3 Host adapter address . 40101010101 Note: These are sample values. Change these to meet your installation requirements. c. Go to your 3494 Library Manager PC, and select the Commands option from the menu bar. d. Select Add LAN Host, and add host (your AS/400) information using the values obtained from the DSPLANMLB command in step b. 3. On the Library Manager PC, select Commands from the menu bar. 4. Select LM LAN Information. Make note of the following information: • Library Location Name 4 • Adapter Address 5 • On the AS/400, use the CFGDEVMLB command to configure the robot name (ROBOTDEV) as follows: CFGDEVMLB DEV(TAPMLB01) Sample library device name. ADPTTYPE(*LAN) LIND(TRN3494) Line name created in step 1. RMTLOCNAME(APPN.MLD01) Information obtained from step 4 4. APPN is the network ID on the AS/400. ADPTADR(11221122112) Sample adapter address for the Library Manager obtained in step 4 5. 9.3.4 Changing media library device descriptions If your installation has different standards, or if you need to preserve naming conventions, you may need to change the device descriptions to a common set across the attached network of systems. We recommended that you standardize Display LAN Media Library Information Use the following information from the system to configrue the library manager console. Select APPC as the communication protocol. Host transaction program name . . . . : " "QMLD/QMLDSTRCC" Host network ID . . . . . . . . . . . : APPN 1 Host location name. . . . . . . . . . : SYSTEM01 2 Host adapter address. . . . . . . . . : 40101010101" 3 The CFGDEVMLB command will automatically vary on the line, controller, and device description for you. Hint Chapter 9. Implementing automated tape libraries 173 unique descriptions for your tape devices and configure each system to use those descriptions. For BRMS/400 to work optimally with movement and shared devices, the device name on each AS/400 system must be the same. For example, TAPMLB01 on SYSTEM01 must be the same physical library as TAPMLB01 on SYSTEM02. This can be accomplished easily by taking into consideration the points that are presented in the following sections. 9.3.4.1 Changing the device description on CISC For V3R1 and V3R2, the media library device description is a partial implementation. 
To change the media library device description for the 9427 tape library, the 3590 tape device, and the 3570 Magstar MP tape subsystem, the old description is deleted with the Delete Device MLB (DLTDEVMLB) command. Then a new description is created with the Create Device MLB (CRTDEVMLB) command. The 3494 Automated Tape Library Data Server uses MLDD for adding and removing the device. To change the 3494 Automated Tape Library Data Server MLD name, end MLDD (ENDMLD), remove the old MLD device (RMVMLD), and add the new one (ADDMLD). Initialize MLDD and begin using the new device name. For more information on the MLDD commands, see IBM 3494 User’s Guide: Media Library Device Driver for the AS/400, GC35-0153. For V3R1 and V3R2, the tape library commands (WRKTAPCTG, DSPTAPSTS, and so on) are issued with the media library device specified. However, tape commands (SAVLIB, SAVCHGOBJ, and so on) are issued with the tape device specified. To change the tape device description, use the following command: WRKCFGSTS *DEV TAP* Whenever you make changes to the media library device descriptions, it is important that you make the changes in BRMS/400 to reflect the correct storage location in the media policies, move policies, device descriptions, and system policies. Important If the Autocreate Controller parameter on the LAN line description is set to *YES, you receive a configuration error when you use the ADDMLD command, because it automatically creates the controller description before you have the chance to create the new description yourself. This happens when you wait too long between the RMVMLD command and the ADDMLD command. For a LAN-attached 3494 Automated Tape Library Data Server, MLDD requires the library device description to be in the form MLDxxxxx. Do not use the same name for the media library device description and the Library Manager remote location name. This results in a duplicate device description error. Note 174 Backup Recovery and Media Services for OS/400 Vary off the device description, and select option 8 (Work with Device Description) and option 7 (Rename). Once the device description is renamed, vary on the new device description, and it is ready for use. 9.3.4.2 Changing the device description on RISC On RISC AS/400 systems, you have the media library device description and the tape resource names that show up when you use the WRKMLBSTS command. The media library device descriptions can be renamed using the WRKDEVD command or using option 8 from the WRKMLBSTS display. You need to vary off the media library device description first. To change the tape resource name, you have to vary off the media library device description and start System Service Tools (SST). Let us assume that you currently have a tape resource called TAP11 and you want to change it to be TAP02. Your media library device description is called TAPMLB01. Use the following steps as a guide for changing the resource names: 1. Vary off the library device TAPMLB01. 2. Type STRSST. 3. Select the following options: • Option 1 (Start Service Tool) • Option 7 (Hardware Service Manager) • Option 2 (Logical hardware resources (buses, IOPs, and controllers)) • Option 1 (System Bus Resources) 4. Locate the IOP and select resources associated with the IOP. For example, selecting option 9 for Storage IOP 6501-001 displays the results shown in Figure 128. Figure 128. Locating and selectomg resources associated with the IOP 5. Select option 2 (Change Detail) to change the tape unit resource name to a new name. 6. 
Exit from SST. 7. Vary on TAPMLB01. We recommend that you also change the tape device name, which is still called TAP11, to match the new tape resource name of TAP02. To rename the tape device description from TAP11 to TAP02, use the command: WRKDEVD TAP11 You then have to vary on the new tape device description. 9.3.5 Allocating resources The Work with Configuration Status (WRKCFGSTS) command has been updated to handle the new media library devices. To work only with media library devices, Resource Opt Description Type-Model Status Name _ Storage IOP 6501-001 Operational SI01 _ Tape Library 3494-012 Operational TAPMLB01 _ Tape Unit 3590-B1A Operational TAP11 Chapter 9. Implementing automated tape libraries 175 specify CFGTYPE(*DEV) and CFGD(*MLB). This displays the media library devices on the system and their current status and activity. For media libraries, the preferred command is Work with Media Library Status (WRKMLBSTS). It provides the same function as the WRKCFGSTS command, plus a new function to manage the tape resources associated with the media library. Each tape resource has an ALLOCATION STATUS associated with it as shown in Figure 129. Figure 129. Work Media Library Status: V3R6 The possible allocation status values are: • Allocated: The tape resource is available for use in the library device and the resource has been assigned (or reserved) to this system. No other system can use this tape resource. The tape resource is available to the resource manager. The allocation status change and assign are done when the media library device is varied on. Or, if the media library device is varied on, it stays assigned to the system until it is changed using the WRKMLBSTS command. • Unprotected: The tape resource is available for use in the library device and the resource has not been assigned or reserved to this system. The tape resource is available to the resource manager. Any attached system can share this tape resource. As a request comes to the resource manager for a tape resource, an assign/reserve (this command is executed at the Licensed Internal Code level and you cannot see it) command is attempted to the device. If the system cannot obtain an assign/reserve, other available resources are used. If no other resources are available, the system waits for an available resource to successfully obtain an assign/reserve to the system. The wait is based on the MAXDEVTIME parameter in the device description. It is possible for a resource to be released and reassigned by another system before this system can obtain a successful assign. • Deallocated: The tape resource is not available to the resource manager. Requests to media library devices with no tape resources in ALLOCATED or UNPROTECTED status result in an error message due to device allocation time out. Work with Media Library Status System: SYSTEM01 Type options, press Enter. 1=Vary on 2=Vary off 3=Reset drive 4=Allocate drive 5=Allocate unprotected 6=Deallocate drive 8=Work with description 9=Work with volumes Device/ Opt Drive Status Allocation _ TAPMLB01 VARIED ON _ TAP01 OPERATIONAL UNPROTECTED _ TAP02 OPERATIONAL DEALLOCATED _ TAPMLB02 VARIED OFF _ TAP03 OPERATIONAL ALLOCATED 176 Backup Recovery and Media Services for OS/400 • Stand-alone: The tape resource is not available. It has been DEALLOCATED and has been varied on to the stand-alone tape device description. This status is new for V3R7. When using BRMS/400, device allocations can be manipulated when systems are sharing one tape library. 
This can be done by using the UNPROTECTED status and device wait times. Or it can be done by using ALLOCATE and DEALLOCATE requests through the Vary Config (VRYCFG) command in the control group exits (*EXIT). You can use the control groups to ALLOCATE and DEALLOCATE resources and use job scheduler to coordinate the job dependencies. In V3R7, the Work Media Library Status (WRKMLBSTS) command was enhanced with new capabilities and ease of use changes. Status and resource allocation have been separated into two displays. The primary display (Figure 130) shows the resource status. From this display, the media library device can be varied on or off. Figure 130. Work with Media Library Status: V3R7 Function key F11 invokes the resource allocation display (Figure 131). This display is used to view the current and requested allocation of the device resources. Current allocation refers to the present state of the device resource, and requested refers to the state that occurs when the media library device is varied on. If the device resource is varied on for a stand-alone tape device description, “Stand-Alone” is displayed in the Current allocation field. Option 8 (Work with Description) was enhanced to work with the tape device description associated with the tape device resource. Option 10 (Configure device) invokes the Configure Device Media Library (CFGDEVMLB) command used to configure the 3494 Library Manager communication line (ROBOTDEV). Work with Media Library Status Type options, press Enter. 1=Vary on 2=Vary off 3=Reset resource 4=Allocate resource 5=Allocate unprotected 6=Deallocate resource 8=Work with description 9=Work with volumes 10=Configure device Device/ Opt Resource Status TAPMLB01 VARIED ON TAP01 OPERATIONAL TAPMLB02 VARIED ON TAP02 OPERATIONAL TAP03 OPERATIONAL TAPMLB03 VARIED OFF TAP04 OPERATIONAL TAP05 OPERATIONAL Chapter 9. Implementing automated tape libraries 177 Figure 131. Work with Media Library Status: Resource allocation 9.3.6 Managing multiple devices in a single 3494 The 3494 supports multiple 3490 and 3590 devices within the same physical library unit. If a system is attached to multiple devices within the same library unit and auto-configuration is enabled (*YES), one media library device description is created for each tape subsystem connection. This results in multiple media library device descriptions being created for one library unit. Support for multiple devices under a single media library device description is provided through PTFs for V3R6 and V3R7. See Informational APAR II09724 for information on the PTFs. Before you apply the PTFs, each subsystem within the 3494 library is represented in the system as a library (MLB) device with access only to those drives in its own subsystem. For example, a 3494 that contains two 3490 tape subsystems and two 3590 tape subsystems are automatically configured as shown in Figure 132 on page 178. Work with Media Library Status Type options, press Enter. 1=Vary on 2=Vary off 3=Reset resource 4=Allocate resource 5=Allocate unprotected 6=Deallocate resource 8=Work with description 9=Work with volumes 10=Configure device Device/ Current Requested Opt Resource allocation allocation TAPMLB01 TAP01 ALLOCATED ALLOCATED TAPMLB02 TAP02 ALLOCATED ALLOCATED TAP03 DEALLOCATED ALLOCATED TAPMLB03 TAP04 DEALLOCATED ALLOCATED TAP05 STAND-ALONE DEALLOCATED 178 Backup Recovery and Media Services for OS/400 Figure 132. 
WRKMLBSTS prior to applying PTFs in a shared environment for the 3494 This causes problems for BRMS/400 users in that a separate location in BRMS/400 must be defined for each tape drive in the system. Therefore, using this configuration, two separate locations need to be defined. Because of this, multiple device saves across multiple tape drives are not possible. After you apply the PTFs, you can have one library device description for each type of tape subsystem in the 3494. All subsystems still have a library (MLB) device description created using automatic configuration, but you now have access through any one of those descriptions to all of the drives of the same type in the 3494. Using our example of two 3490 subsystems and two 3590 subsystems in the 3494, you see that each of the AS/400 systems has four media library devices configured as shown in Figure 133. You can use the WRKMLBSTS command to display the configuration. Work with Media Library Status Type options, press Enter. 1=Vary on 2=Vary off 3=Reset drive 4=Allocate drive 5=Allocate unprotected 6=Deallocate drive 8=Work with description 9=Work with volumes Device/ Opt Drive Status Allocation TAPMLB01 VARIED ON TAP01 OPERATIONAL ALLOCATED TAP02 OPERATIONAL ALLOCATED TAPMLB02 VARIED ON TAP03 OPERATIONAL ALLOCATED TAP04 OPERATIONAL ALLOCATED TAPMLB03 VARIED ON TAP05 OPERATIONAL ALLOCATED TAPMLB04 VARIED ON TAP06 OPERATIONAL ALLOCATED Chapter 9. Implementing automated tape libraries 179 Figure 133. WRKMLBSTS after applying PTFs in a shared environment for the 3494 You can now allocate the drives so that you only have two active descriptions: one to use with your 3490 cartridges and the other to use with your 3590 cartridges. The drives can be allocated to only one library (MLB) device description at a time. If you want to separate a particular drive, you can manage this by how you allocate the drives. To share all tape resources between the two hosts, both systems should have the media library device description varied on. It is better to ignore the second media library device description rather than delete it. If you delete it and you have QAUTOCFG set to on, the media library device descriptions are re-created at IPL. 9.3.7 Selecting and varying on devices On CISC systems, use the following command to work with the tape devices: WRKCFGSTS *DEV TAP* All shared devices should be left varied off. When an AS/400 system wants to use the shared device, BRMS/400 varies it on. After the backup is complete, BRMS/400 varies it off again. BRMS/400 selects drives sequentially according to the list of BRMS/400 devices. In the preceding example, this means that it selects TAP01 and TAP02, which are both connected to the same controller before it selects TAP03 or TAP04. This may cause performance degradation. It may be better to rename the drives as TAP01, TAP03, TAP02, and TAP04. In larger installations where there is more than one media library, a naming convention such as M1.TAPL1, M1.TAPR1, M1.TAPL2, and M1.TAPR2 may be convenient. Work with Media Library Status Type options, press Enter. 
1=Vary on 2=Vary off 3=Reset drive 4=Allocate drive 5=Allocate unprotected 6=Deallocate drive 8=Work with description 9=Work with volumes Device/ Opt Drive Status Allocation TAPMLB01 VARIED ON TAP01 OPERATIONAL ALLOCATED TAP02 OPERATIONAL ALLOCATED TAP03 OPERATIONAL ALLOCATED TAP04 OPERATIONAL ALLOCATED TAPMLB02 VARIED OFF TAP01 OPERATIONAL DEALLOCATED TAP02 OPERATIONAL DEALLOCATED TAP03 OPERATIONAL DEALLOCATED TAP04 OPERATIONAL DEALLOCATED TAPMLB03 VARIED ON TAP05 OPERATIONAL ALLOCATED TAP06 OPERATIONAL ALLOCATED TAPMLB04 VARIED OFF TAP05 OPERATIONAL DEALLOCATED TAP06 OPERATIONAL DEALLOCATED 180 Backup Recovery and Media Services for OS/400 On RISC systems, the media library device is used and must be varied on. The tape device is used only for stand-alone operations. For example, consider SYSTEM01 and SYSTEM02 sharing TAPMLB01, which has two drives TAP01 and TAP02. If both systems are CISC, tape devices TAP01 and TAP02 are both defined as SHARE(*YES) and varied off. If SYSTEM01 is CISC and SYSTEM02 is RISC, SYSTEM01 is still defined as before. However, SYSTEM02 has the media library device TAPMLB01 varied on, and the resources TAP01 and TAP02 are in allocated UNPROTECTED mode. If both systems are RISC, both systems have TAPMLB01 varied on as before, and the resources are allocated as UNPROTECTED. If more control is needed for complex setups where performance or some other concern needs to be addressed, the second media library device description can be used to manage the device resource independently. See Figure 134 for the resulting configuration that is displayed by the WRKMLBSTS command. In this setup, TAP01 and TAP03 are shared, while TAP02 and TAP04 are dedicated. The second system can share TAP01 and TAP03, but cannot use TAP02 or TAP04. Note: On RISC systems, BRMS/400 always selects the first varied on media library that has resources allocated if *MEDCLS is specified for the device. Figure 134. Work with Media Library Status: TAPMLB01 and TAPMLB02 varied on Work with Media Library Status Type options, press Enter. 1=Vary on 2=Vary off 3=Reset drive 4=Allocate drive 5=Allocate unprotected 6=Deallocate drive 8=Work with description 9=Work with volumes Device/ Opt Drive Status Allocation TAPMLB01 VARIED ON TAP01 OPERATIONAL UNPROTECTED TAP02 OPERATIONAL DEALLOCATED TAP03 OPERATIONAL UNPROTECTED TAP04 OPERATIONAL DEALLOCATED TAPMLB02 VARIED ON TAP01 OPERATIONAL DEALLOCATED TAP02 OPERATIONAL ALLOCATED TAP03 OPERATIONAL DEALLOCATED TAP04 OPERATIONAL ALLOCATED Chapter 9. Implementing automated tape libraries 181 9.4 Updating BRMS/400 device information After you’ve created, and if necessary, updated the media library device descriptions and tape device descriptions, you need to create BRMS/400 device descriptions. Use the following command to add new devices to BRMS/400: INZBRM OPTION(*DATA) If you want to clean up your device descriptions, you can use the following command as an alternative: INZBRM OPTION(*DEVICE) Instead of adding the new devices, this clears all the existing information and replaces it with information about the devices currently attached to the system. Note: The INZBRM OPTION(*DEVICE) command is not available in V3R1. We recommend that you print your existing media device information before you run this command to capture any changes you may have already made to existing default values. You must use the “Print Screen” utility to obtain hard copies. 
After you’ve created the BRMS/400 device descriptions, you need to update the device location, next volume message and tape mount delay, auto enroll media, shared device, and IDRC parameters to reflect your installation. Use the Work with Device Information (WRKDEVBRM) command to make these changes. The INZBRM command automatically creates media classes appropriate for the devices. However, at this stage, you may want to review these and create additional ones.

9.4.1 Device location
It is important that the device location (as specified in the WRKDEVBRM command) reflects the true location of that device. This is especially true for media libraries that are used in random mode, where the volumes and the resources should both be at the same location. If you are implementing BRMS/400 for the first time, or if you have run the INZBRM OPTION(*DEVICE) command, you should ensure that the device locations are correct.

Hint: Another possibility is to leave the resources on the RISC systems deallocated while the media library device is varied on. If these devices are SHARE(*YES) on CISC, BRMS/400 will vary them on and off as it processes the control groups. For each control group on RISC systems, add two *EXITs: one at the beginning to VRYCFG one or more resources in the library to ALLOCATED before any save processing begins, and one at the end to VRYCFG them back to DEALLOCATED, as sketched below. If a failure occurs due to a time-out, the user sees it more quickly and may be able to take action to correct the problem. This setup is particularly useful if the save window is small.
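A minimal sketch of the two exit commands follows. It assumes a media library named TAPMLB01 with a tape resource TAP01 (both placeholder names) and that your OS/400 release supports the *MLBRSC configuration type on VRYCFG; prompt the command (F4) on your system to confirm the exact parameters for your release:

  /* *EXIT at the start of the control group: reserve a tape resource for this system   */
  VRYCFG CFGOBJ(TAPMLB01) CFGTYPE(*MLBRSC) STATUS(*ALLOCATE) RSRCNAME(TAP01)
  /* *EXIT at the end of the control group: release the resource to the other systems   */
  VRYCFG CFGOBJ(TAPMLB01) CFGTYPE(*MLBRSC) STATUS(*DEALLOCATE) RSRCNAME(TAP01)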
Update both entries to specify device location = MLB9427TOP (if you are on System A) or MLB9427BOT (if you are on System B), as appropriate. Although this device description is only used when you explicitly request TAPxx, for example, in a stand-alone environment where the accessor is unavailable, we recommend, for completeness, that you update this as well as the media library device. They are, in practice, the same device.
4. From the BRMMED menu, select option 9 (Work with media libraries). An entry has been automatically generated for you with a name of MLB9427. Update this entry with the specified storage location MLB9427TOP or MLB9427BOT.
You have now defined Drive 1 as being located in storage location MLB9427TOP, and Drive 2 as being located in storage location MLB9427BOT. To preserve the integrity of the inventory, use the Move Tape command on the 9427 tape library front panel to move tapes to the appropriate magazine.
9.5 Managing cartridges in the media library device
Any OS/400 command that has a VOL parameter will cause the specified cartridge identifier to be mounted. If the cartridge identifier does not match the logical volume identifier for standard labeled tapes, a message is issued. All AS/400 tapes are initialized with the volume identifier matching the cartridge identifier. When you use BRMS/400 with non-barcode reader libraries, you should be careful when you initialize blank tapes. See 9.5.1, "Special cartridge identifiers" on page 184, for more information.
The easiest way to find existing cartridges for use in the media library device inventory is to use the Work with Media Library Media (WRKMLMBRM) command. For example, use the following command to display a complete inventory of cartridges and volume identifiers and their status as shown in Figure 135:
WRKMLMBRM DEV(TAPMLB01)
Figure 135. Work with Media Library Media
You can see a similar display by using the OS/400 Work with Tape Cartridges (WRKTAPCTG) command. You need to use F11 on the Work with Tape Cartridges display to view the category, density, and other information that appears on the WRKMLMBRM display.
Work with Media Library Media Media library . . . . . . : TAPLIB01 Position to . . . . . . . . Starting characters Type options, press Enter. 1=Add MLB media 2=Work with media 5=Initialize 6=Change category 7=Eject 8=Mount 9=Demount ---BRM Information--- Opt Volume Category Media Class Expired Status __ CLN001 *SHARE400 *NONE Available __ VOL002 *SHARE400 FMT5GB *YES Available __ VOL003 *INSERT FMT5GB Available __ VOL004 *SHARE400 FMT5GB Available __ VOL005 *INSERT FMT5GB Available __ VOL006 *SHARE400 FMT5GB Available __ VOL007 *SHARE400 FMT5GB *YES Available __ VOL008 *SHARE400 FMT5GB Available
Hint: If the system name is changed, all cartridges in the associated categories become unavailable until a category is created with the previous system name. Cartridges in the *NOSHARE category that belong to that system are not accessible. We highly recommend that you remove all cartridges from the media library device, or change them to the *SHARE400 category by using a media class that is SHARE(*YES), prior to changing the system name.
9.5.1 Special cartridge identifiers
Every cartridge and volume ID can contain the following characters: A through Z, 0 through 9, $, and @. Only the first six characters are recognized by OS/400.
Therefore, the uniqueness of the cartridge ID must be within the first six characters of the name. For libraries with a vision system (for example, the 3494 Automated Tape Library Data Server and the 9427 tape library), the first six characters of the cartridge ID should match the volume ID for the tape. For libraries without a vision system, which includes the 3590 tape device and 3570 Magstar MP tape subsystem, specially generated cartridge IDs have been implemented on RISC systems: NLTxxx Non-Labeled Tape: This cartridge contains data written in non-Standard tape label format. CLNxxx Cleaning: This cartridge has been identified as a cleaning tape. BLKxxx Blank: This cartridge contains no data. UNKxxx Unknown: This cartridge is not identifiable. IMPxxx Import: This refers to the cartridge that is in the Priority slot. SLTxxx Slot: This refers to the cartridge by its slot number. This only occurs if the device description is created with the GENCTGID parameter set to *SYSGEN mode (see 9.3.2, “Creating media library device descriptions” on page 168) and is not appropriate for BRMS/400, which refers to media by volume identifier. When using BRMS/400 with non-barcode reader libraries, take care when initializing blank tapes. The system generates a volume ID of BLK001 and so on. BRMS/400 users should never initialize cartridges to these IDs, whether through a BRMS/400 or OS/400 command. If a real volume exists in the library with ID BLK001 and a new tape is added that causes OS/400 to generate another BLK001, you receive an instant duplicate. Further, BRMS/400 thinks that every new BLK001 is that same original BLK001 and tries to use it for saves and so on. A similar situation can occur when you add an already known cartridge to the library through the priority slot using CHECKVOL(*NO). The cartridge is moved to a slot in the ACF, but the cartridge identifier remains IMP001. See 9.5.4, “Importing cartridges” on page 187, for more information on importing cartridges. You must not use the ADDMLMBRM command to initialize these tapes. You must first put the media library device into auto-mode and use the ADDMEDBRM command to add the cartridges. Once the cartridges are added, you should use the MOVMEDBRM command to place the cartridges in the correct library location. You must hold the library using the WRKMLBBRM command to do this and release the library when you have completed the move operation. Chapter 9. Implementing automated tape libraries 185 9.5.2 VOL(*MOUNTED) usage Prior to V3R6, the library device commands were directed to a specified tape device. Beginning with V3R6, library device commands are issued to a media library device. If the media library device has all of the available tape resources loaded with media, it is meaningless to use VOL(*MOUNTED). For V3R6 and later, the VOL parameter is required when you issue a command to a media library device. If VOL(*MOUNTED) is specified, the system returns an error. This should not really concern BRMS/400 users. If, for example, a scratch volume is required, BRMS/400 suggests a volume from its own scratch pool based on the media class selected for the save operation. This is passed to the media library and the specified volume loaded. If you ever receive a *MOUNTED volume not correct type of message, it usually means that BRMS/400 has run out of tape volumes to suggest at that location. You should check for the tape volumes by using the WRKMEDBRM command and the WRKMLMBRM command. 
Ensure that volumes are available at the location of the library.
9.5.3 End option (ENDOPT) setting
The major design change for RISC is that media library devices have been implemented to support multiple concurrent users. Commands are issued to the media library device specifying a cartridge identifier. If the cartridge and a tape resource are available, the cartridge is mounted on that tape resource and command processing begins. BRMS/400 always selects the first varied on media library device that has suitable resources allocated if *MEDCLS is specified for the device.
If no tape resource is available, the request is queued on a first-in, first-out basis with a priority and time limit. The time limit is specified by the MAXDEVTIME parameter in the media library device description. The priority is based on the run priority of the job attributes. The priority is referenced when a request for a tape resource is made. Changing the priority of the job after the request has been queued does not affect the current request, only subsequent requests. Commands that require multiple volume mounts generate multiple media library requests. Changing the run priority affects the priority of the requests for subsequent tape resource operations.
Hint: When you use such libraries as the 3590 tape device and 3570 Magstar MP tape subsystem, avoid initializing new cartridges when the libraries are in random mode. This is because of the danger of initializing media to a system generated identifier. You should put the device into automatic mode, and on RISC systems, vary off the library description and vary on the device description. You should also use the WRKMLBBRM command and put the library on hold in BRMS/400 when using the media library in automatic or manual mode. Otherwise, device failures occur. If possible, it may be convenient to purchase extra magazines for the 3570 Magstar MP tape subsystem and use one or two to load 20 new cartridges at once for initialization. Use the ADDMEDBRM command to add the 20 tapes. Once the tapes are initialized, the device can be returned to random mode. You can use the MOVMEDBRM command to move the cartridges into the library location.
On RISC systems, the End option (ENDOPT) parameter has a significant effect on the operation of the media library device. End options on OS/400 commands include *REWIND, *UNLOAD, and *LEAVE:
• *REWIND: At the end of command processing, the cartridge is rewound and left loaded in the tape resource. At this point, the tape resource is available for other media library requests. If the next request requires a different cartridge, the present cartridge is unloaded, and the new cartridge is mounted. BRMS/400 SAVxxxBRM commands have a default of *REWIND.
• *UNLOAD: At the end of the command processing, the tape resource is unloaded, and the cartridge is demounted. At this point, the tape resource is available for other media library requests. At the end of every BRMS/400 control group, *UNLOAD is issued.
• *LEAVE: At the end of command processing, the media is positioned at the last point accessed. The tape resource is only available to commands to the same cartridge identifier (or, as long as the resource is not in use, to commands that require a tape resource but do not mount tapes (for example, WRKTAPCTG)). BRMS/400 uses the option of *LEAVE when a control group performs multiple save operations.
In a multiple user environment, take care when using *LEAVE.
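To make the End option concrete, here is a minimal sketch of two saves issued to a media library device. The library name TAPMLB01 and volume VOL002 come from the earlier examples; the library names PAYROLL1 and PAYROLL2 are placeholders, and the commands are plain OS/400 save commands rather than the BRMS/400 SAVxxxBRM equivalents:
SAVLIB LIB(PAYROLL1) DEV(TAPMLB01) VOL(VOL002) ENDOPT(*LEAVE)   /* Cartridge stays mounted and positioned        */
SAVLIB LIB(PAYROLL2) DEV(TAPMLB01) VOL(VOL002) ENDOPT(*UNLOAD)  /* Cartridge is demounted; tape resource is freed */
Because both commands name the same cartridge, the second save can reuse the volume left mounted by the first; a request for a different cartridge would first force a demount.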
Consider the situation where SYSTEM01 and SYSTEM02 share the same library that contains two drives:
• SYSTEM01 processes a control group containing some saves with *EXIT processing in between.
• SYSTEM01 starts the save to cartridge XYZ001, and the user changes the ENDOPT parameter to *LEAVE.
• SYSTEM01 performs *EXIT processing for some minutes before the next save starts.
• At that point, if SYSTEM02 attempts to access the drive, it will fail. However, if another job on SYSTEM01 is queued and requesting cartridge XYZ001, that job can interrupt the job that did the *LEAVE processing and steal the resource and the cartridge.
• When job 1 on SYSTEM01 resumes saving after the *EXIT, it finds the cartridge is not available and tries another cartridge, perhaps mounting it on the second drive.
In this way, it is possible for one control group with APPEND(*YES) to end up on two different cartridges. You should be aware of this when you queue up save jobs on the same system. If necessary, change the media class so that the same cartridge ID is not requested (BRMS/400 does not know which cartridge is being used in a control group that is in progress until the save completes and the media information is written). In other words, *LEAVE processing holds the cartridge to the resource for that system; it does not lock the resource to the job.
9.5.4 Importing cartridges
When you return expired cartridges to the media library, or when you add new cartridges, the most obvious way is to open the door and remove the magazine. If the library is in random mode, this causes a re-inventory of the library. For this reason, the 3494 Automated Tape Library Data Server has a convenience I/O station for importing and exporting cartridges without stopping any automatic operations. The 3590 with automated cartridge facility and the 3570 Magstar MP tape library provide a convenience or priority slot. The 9427 tape library does not have a convenience station, so you can only import by halting automation and opening the door to access the cartridge slots.
In V3R1 and V3R2, the priority slot of the 3590 with automated cartridge facility and the 3570 Magstar MP tape library has a simple implementation. The cartridge in the priority slot is assigned a generated identifier of IMP001 (actually IMPxxx, where xxx is the next available IMP number). The commands for this cartridge can either reference IMP001 or the actual volume identifier in the VOL parameter. Upon issuing the command, the cartridge is moved from the priority slot to the device. When the device is unloaded, the cartridge is returned to the priority slot for removal.
V3R6 and V3R7 enhance the library support to provide full import capability from the priority slot to the device or ACF inventory. Because there is often more than one cartridge to be imported, we still recommend that you physically replace the cartridges in a magazine and then re-inventory the magazine.
Cartridges that have been imported into the library remain in the *INSERT category until they are enrolled into BRMS/400. To do this, you can either use the Work with Media Libraries (WRKMLBBRM) command and type option 11 next to the required library, or directly use the Add MLB Media using BRM (ADDMLMBRM) command. The tapes must already be initialized if it is a non-barcode reading media library (Figure 136 on page 188).
Note: The Check Tape (CHKTAP) command is the only OS/400 command that defaults to *LEAVE.
OS/400 does not honor the *LEAVE processing of the CHKTAP command unless a file is specified by file name or sequence number. This prevents the CHKTAP commands from locking up all of the tape resources when you use the command defaults. Note 188 Backup Recovery and Media Services for OS/400 Figure 136. Add MLB Media using BRM display 9.5.4.1 Re-activating enrolled tapes after re-inventory If cartridges already enrolled in BRMS/400 are added to the magazine, they are in the *INSERT category after re-inventory. To make them usable for the operations, the category has to be changed. To activate these cartridges, use the ADDMLMBRM command, but change the Add volume to BRM field to *NO. This changes the category, and the cartridges are available for use. If the shared media attribute in the media class is *NO, the category is changed from *INSERT to *NOSHARE. Otherwise, the category is changed to *SHARE400. For such libraries as the 3494 Automated Tape Library Data Server where the cartridges are physically moved to a storage cell location by Library Manager, a single ADDMLMBRM command changes all volumes with the *INSERT category. 9.5.4.2 Enrolling new tapes into BRMS/400 If you need to enroll new volumes into the BRMS/400 media inventory, you can use the default value for the VOL parameter (*INSERT) and change the Add volume to BRM field to *YES; all volumes that were previously in the *INSERT category are enrolled into the BRMS/400 media inventory and are available for use. You should supply the media class for the MEDCLS parameter on the ADDMLMBRM command. 9.5.4.3 Missing cartridges When BRMS/400 requests a tape mount of a cartridge that is not in the library, and this cartridge is placed in the priority slot, the cartridges already in the library are checked, followed by the cartridge in the priority slot. If this cartridge is the required one, it is imported into the library. It is also possible to import a cartridge to the library using the OS/400 Add Tape Cartridge (ADDTAPCTG) command, for example: ADDTAPCTG DEV(TAPLIB01) CTG(TAPE01) CGY(*SHARE400) CHKVOL(*YES) However, we recommend that you do not use this technique when using BRMS/400 enrolled tapes. In this case, you lose the benefits of the ADDMLMBRM command, which allows you to run the ADDTAPCTG and ADDMEDBRM commands and moves the media to your library using the MOVMEDBRM command through one simple command. Add MLB Media using BRM (ADDMLMBRM) Type choices, press Enter. Media library device . . . . . . > TAPLIB01 Name Volume identifier . . . . . . . *INSERT Character value, *INSERT + for more values Add volume to BRM . . . . . . . > *YES *NO, *YES Initialize tape . . . . . . . . *NO *NO, *YES Media class . . . . . . . . . . > FMT3590 CART3490E, QIC120... Last moved date . . . . . . . . *NONE Date, *NONE Move policy . . . . . . . . . . *NONE *NONE, OFFSITE Chapter 9. Implementing automated tape libraries 189 When you use the ADDTAPCTG command, and if the cartridge ID is not found, OS/400 searches the device starting with the priority slot and any cartridge IDs with the volume ID of *UNKNOWN. When the cartridge in the priority slot is loaded, it is found to be TAPE01. The cartridge identifier is changed to TAPE01, and the cartridge is added to the *SHARE400 category. When the cartridge is unloaded (ENDOPT(*UNLOAD)), the cartridge is moved to the ACF. 9.5.5 Exporting cartridges Cartridges that are due to move from the media library, perhaps to an off-site store, need to be “exported” from the library. 
That is, they need to have their category changed to *EJECT and need to be physically removed from the library. All media library devices use the Remove Tape Cartridge (RMVTAPCTG) command to change media to the *EJECT category and, where possible, physically eject it from the library. BRMS/400 uses this command when doing the movement from tape library locations to your default *HOME location. The WRKMLMBRM command also uses this option to perform the ejects.
For the 3590 tape device and 3570 Magstar MP tape subsystem on CISC systems, the cartridges are left in the device in the *EJECT category. This also applies to the 9427 tape library on CISC or RISC, since the 9427 tape library does not have a convenience station. For all libraries except the 9427 tape library on RISC systems, and for the 3494 Automated Tape Library Data Server on CISC, the cartridges are moved to the convenience station. If more cartridges exist than the convenience station can hold, the additional cartridges are queued by the media library device for ejection.
When BRMS/400 movement is run, it causes a volume to move from the library. A RMVTAPCTG command is also issued to eject the cartridge. As long as the system that runs the MOVMEDBRM command is attached to the library (in other words, it recognizes the media library device description), it can take action on the RMVTAPCTG command and eject the cartridge. If there are multiple systems and multiple libraries attached to different systems and the MOVMEDBRM command is run on one system only, the BRMS/400 files are updated around the network, but the cartridges are not ejected from the remote libraries. To ensure cartridges are ejected, run MOVMEDBRM on those systems that are attached to the libraries. We recommend you run the MOVMEDBRM command individually on all systems in the network.
9.6 Restricted state automation for the 3494
V3R6 and V3R7 no longer use MLDD for running the 3494; the subsystems associated with it are no longer necessary. This change allows automation to work in a restricted state once the device descriptions exist and QUSRSYS is installed, for example, when you perform a system recovery. Four files in QUSRSYS are required for complete automation of the media library devices: QATAMID, QLTAMID, QATACGY, and QLTACGY. If these files do not exist on the system, a limited set of automation functions is supported. Cartridges can be mounted by specifying the cartridge identifiers in the VOL parameter of the OS/400 commands. This subset of automation does not support the use of the cartridge commands such as WRKTAPCTG, DSPTAPCTG, and so on.
CISC systems rely on MLDD for full 3494 function. However, this is not available when the system is in a restricted state, so tapes must be loaded in another way. One way is to use the 3494 Automated Tape Library Data Server in stand-alone mode as mentioned earlier. Another way is to use the Mount Category (MNTCTGMLD) command to mount a specific category and allow the 3494 to load the tapes in that category automatically. This means that it is possible to run SAVSYS followed by SAVLIB to a 3494 while it is in restricted state for the entire save. The steps required for this process are outlined here:
1. Hold the QSYSOPR message queue so that it does not interrupt the save.
2. Create a temporary tape category and add volumes to it in the order that BRMS/400 expects them.
3. Rename the QMLDSBS subsystem so that BRMS/400 cannot restart it after the SAVSYSBRM command completes.
4. Save the system using the SAVSYSBRM command and specify STRCTLSBS(*NO). 5. Save QGPL, QUSRSYS, QUSRMLD, and QMLD. 6. Rename the QMLDSBS subsystem back again. 7. Run the following command to restart MLDD: INZMLD *START 8. Change the media category back to *NOSHARE. Delete the temporary category. See Appendix D, “Performing restricted saves to a 3494 on CISC” on page 305, for sample programs on automating this (should only be used on CISC systems). 9.7 Using a tape resource as a stand-alone unit (RISC) To use a tape resource as a stand-alone device, deallocate the tape resource from the media library device. To deallocate all resources, simply vary off the media library or select option 6 (DEALLOCATE) on the Work with Media Library Status (WRKMLBSTS) display for each resource that needs to be deallocated. Once the resource is deallocated, it is a free resource for any device description. Tape device descriptions are auto-configured for the tape resources, but are not varied on. The WRKCFGSTS *DEV *TAP command displays the current tape device descriptions that exist on the system. Find the device description that corresponds to the tape resource, and vary on the tape device description. Alternatively, use the WRKMLBBRM command to hold the media library device. Now, commands can be sent to this device description, but no library functions occur. Many media library devices provide modes or commands to move media to the device during a stand-alone operation. The 9427 and 3590 media library devices both support modes that are used for stand-alone devices. The 9427 provides a sequential mode where cartridges are moved to the device automatically in sequence from the inventory. The 3590 has three modes for stand-alone mode: auto, manual, and accumulate. The Library Manager software on the 3494 supports the stand-alone mode from the command pull-down on the Library Manager console. In this mode, the operator can mount a tape from the I/O station or from the inventory either by volume identifier or by a category. The 3570 has an automatic mode, which loads cassettes from right to left. In the manual mode, the cassettes are loaded in the furthest right slot. © Copyright IBM Corp. 1997, 2001 191 Chapter 10. Recovery using BRMS/400 This chapter deals with the most important function of BRMS/400, which is recovery. The main objective is to describe recovery of a complete system and identify the key differences between the CISC and RISC BRMS/400 releases so that you can plan accordingly. This chapter also covers the recovery of individual objects and libraries. The intent of this chapter is not to provide you with step-by-step instructions on how you should recover your AS/400 system. For this, you must use the BRMS/400 Recovering Your Entire System report for a guide to the recovery steps for your specific installation. Every effort has been made in BRMS/400, in the recovery manuals, and in this redbook to ensure that the recovery information is complete. However, the only way to know that you have a recoverable system is to try it. If it is not already part of your operational processes, we strongly recommend that you schedule a full disaster recovery test as soon as possible and on a regular basis thereafter. If you can recover your complete system to another AS/400 system, it is most likely that you can recover all or any part of your system to your own AS/400 system. Recovery is often viewed as an inevitable consequence of backup. This is not necessarily true. 
Recovery is only as good as your backup strategy. It is vital that you consider your business recovery requirements before you design your backup. There are two major factors to consider: data and timing. Of course, there are other factors such as people, skills, facilities, and processes. However, it is not within the scope of this redbook to cover these aspects of recovery planning.
You can secure your data by making sure you have complete and up-to-date backups, including recovery documentation, and by regularly moving backups off-site or to a secure location.
Timing needs to be addressed in the design of your backup. For example, it may be necessary to recover a critical application and restart the business before any other recovery is undertaken. Using backup lists to secure critical files, documents, and spooled files, in addition to the main libraries for this application, can facilitate this. You may have to ensure that unnecessary recovery of incremental saves does not occur. Rebuilding access paths is also time consuming. Where possible, you should have access paths and their associated files in the same libraries and save the access paths.
10.1 Overview of BRMS/400 recovery
The basic recovery "tool" in BRMS/400 is the Start Recovery using BRM (STRRCYBRM) command. This not only performs the recovery, but is regularly used to print reports to help you manage the recovery. Printing recovery reports is also an option during maintenance.
The Start Recovery using BRM command can be selected by using the menus (option 4 (Recovery) from the Main menu) or by typing the command directly. In BRMS/400 V3R6, V3R2, and V3R7, you should take care when printing the recovery report using the menus, since the default has been changed to *RESTORE. The default is still *REPORT in the STRRCYBRM command. See Appendix A, "Summary of changes" on page 289, for information on the functional enhancements between the BRMS/400 releases.
Figure 137 shows the main parameters of the STRRCYBRM command.
Figure 137. Start Recovery using BRM (STRRCYBRM)
The numbers in reverse bold that follow correspond to the numbers in reverse shown in Figure 137:
1 To recover a specific control group, enter *CTLGRP. You are prompted for the control group name. If the restore is halted or fails, you can restart the recovery from the point of failure by specifying *RESUME here. With the *RESUME option, no other parameters are shown.
2 If the latest backup was to a save file that still exists, setting this parameter to *YES includes the save file in the recovery report. If you want to exclude this latest save (for example, recover only from off-site tapes), setting this parameter to *NO causes BRMS/400 to use only information from the tape.
3 You can recover from a location (for example, an older copy now in the vault). You can specify up to 10 locations.
Start Recovery using BRM (STRRCYBRM) Type choices, press Enter. Option . . . . . . . . . . . . . *SYSTEM 1 *SYSTEM, *SAVSYS, *IBM... Action . . . . . . . . . . . . . *REPORT *REPORT, *RESTORE Time period for recovery: Start time and date: Beginning time . . . . . . . . *AVAIL Time, *AVAIL Beginning date . . . . . . . . *BEGIN Date, *CURRENT, *BEGIN End time and date: Ending time . . . . . . . . . *AVAIL Time, *AVAIL Ending date . . . . . . . . . *END Date, *CURRENT, *END Use save files . . . . . . . . . *NO 2 *NO, *YES Volume location . . . . . . . . *ALL 3 *ALL, *HOME, COMPROOM... + for more values Library to omit . . . . . . . .
*DELETE 4 *DELETE, *NONE From system . . . . . . . . . . *LCL 5
4 You can specify that you do not want to recover libraries that have been deleted after the save to which you are now recovering. That not only helps recovery time but also reduces catch-up time. If you created libraries, but have not run maintenance before deleting them again, these libraries are still marked for recovery. You must run maintenance between creating the library and deleting it to have it omitted. One way around this is to manually delete the library from the WRKMEDIBRM displays.
5 You can specify the system name and remote location of another system in your BRMS/400 network from which to restore media information.
10.1.1 Synchronizing maintenance, movement, and recovery reports
It is important that you schedule maintenance, media movement, and recovery report creation correctly to ensure they are synchronized. Otherwise, there are circumstances that can leave a recovery report out-of-date within a few hours of its creation. These include:
• Performing a save to save files and running maintenance (with recovery report) after the save. The report is only accurate as long as the save files exist. If you run the Save Save Files using BRM (SAVSAVFBRM) command and move the data to tape, you should reproduce the report. You should also save the BRMS/400 recovery data again using the SAVMEDIBRM command.
• If move policies contain Verify moves *YES, and the MOVMEDBRM command is run during maintenance followed by the recovery report, the recovery report shows the media as in their current location. As soon as verification takes place, the moved media appear in their new location, and the recovery report is out of date. The recovery report should be run after verifying the move.
• In a network situation, the recommendation is to run the Move Media using BRM (MOVMEDBRM) command separately on each system. However, running the MOVMEDBRM command once on a single system, which propagates the information around the network, is a satisfactory solution for many installations. Media movement is often performed after maintenance has been run on each system and the recovery reports produced. The recovery reports should be run on each system after the MOVMEDBRM command has been completed for the network.
Note: If you have duplicated your backup tapes using the Duplicate Media Using BRM (DUPMEDBRM) command, it is good practice to move these tapes within BRMS/400 to a separate BRMS/400 location. This location does not need to be physically separate from the main secure location.
1. Use the MOVMEDBRM or VFYMOVBRM commands to update the BRMS/400 database.
2. Run the SAVMEDIBRM command to save the BRMS/400 recovery information.
3. Move this to the same location and run the MOVMEDBRM command or the VFYMOVBRM command to update the BRMS/400 database. BRMS/400 now recognizes that the recovery media and the BRMS/400 recovery information are at the separate location.
4. Use the STRRCYBRM command to run a recovery report using volumes from that location. Make sure that the report is also stored in the location with the tapes.
You now have the information to recover using the duplicated tapes.
• Unlike Backup Control Groups, Archive Control Groups do not have a parameter for automatically backing up media information. You should, therefore, ensure that you run the SAVMEDIBRM command after archiving to save the recovery data.
Make sure you change the default in the SAVMEDIBRM command to *OBJ because you must save the recovery information at object level to retrieve archived objects such as spooled files.
As a general rule, saving recovery data should always be done at the end of processing. QUSRBRM is frequently saved early in the backup cycle, but it is important for recovery to have the most up-to-date recovery information.
To actually perform the recovery, either use the STRRCYBRM command with the action parameter of *RESTORE or use the BRMS/400 recovery menus.
10.1.2 Recovery from a central point
Recovery using backup control groups requires access to media content information (QA1AHS) in the QUSRBRM library. If you have a complete system failure, this information is no longer available, and you need to restore the latest QUSRBRM recovery data to perform the recovery. Depending on the frequency of saves and the timing of the failure, the restored information may not be current (Figure 138).
Figure 138. Receive media information on SYSTEM05
Beginning with V3R6, V3R7, and V3R2, a new parameter, Receive Media Information (circled in Figure 138), has been introduced on the Change Network Group display (option 4 from the System Policy menu). This gives you the ability to specify, for a system or systems, whether you want to receive media content information from the other systems in the network group. If you use this feature, you have the recovery information at a central point and can create recovery reports for other systems, if necessary.
Change Network Group SYSTEM05 ITSCNET Network group . . . . : *MEDINV Position to . . . . . Text . . . . . . . . . Centralized media network systems Receive media info . . *LIB *NONE, *LIB Type options, press Enter. 1=Add 4=Remove 8=Set time Remote Local Remote Receive Opt Location Name Network ID Media Info Status SYSTEM09 ITSCNET *NONE Active
10.2 Recovering an entire system (starting with Licensed Internal Code)
Recovering the entire system is required if you need a scratch installation for disaster recovery or if the load source disk unit in the system ASP is damaged and needs to be replaced. This assumes that you have no disk protection enabled, such as device parity protection (RAID5) or mirroring.
Note: When you recover from SAVSYS or IBM distribution tapes, your tape libraries must not be in random or library mode. See the appropriate Operator's Guide for the library in use to set the correct mode.
10.2.1 Preparation for the recovery process
When you are planning your restore process, you need to ensure that you have the appropriate documentation and the tape volumes, including any special recovery tapes that may be required during the recovery process. For example, if you are restoring your system after a load source disk failure, and if you are running on a CISC AS/400 system, you need the appropriate MULIC or FULIC tape during the recovery. Remember that MULIC or FULIC tapes are not required for RISC AS/400 systems.
Helpful information is available in the following documentation:
• Backup and Recovery - Basic, SC41-4304: Keep a copy of this documentation close to you while doing recovery.
• Backup and Recovery - Advanced, SC41-4305 • Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) • Automated Tape Library Planning and Management, SC41-5309 • We recommend that you keep a copy of the Operator's Guide and the Installation and Planning Guide for all of your tape libraries that are used for recovery. You may need to refer to these documentation. Your starting point in the recovery process should always be with the Recovering Your Entire System report (QP1ARCY) produced by BRMS/400. This report identifies the first volumes that are needed during the recovery process. You should use the Recovery Volume Summary Report (QP1A2RCY) that is produced together with the System Recovery Report to identify the locations for the required tape volumes. We used the 3494 library device as a topic to discuss the recovery of an entire system. The intent of Table 4 on page 196 and subsequent sections is to provide an overview of the steps that are required when restoring your CISC AS/400 system or the RISC AS/400 system. You must not use this section as a checklist to perform your system recovery. You must always use the BRMS/400 Recovering Your Entire System report along with Backup and Recovery - Basic, SC41-4304, to guide you through the correct recovery steps based on your OS/400 release. Important 196 Backup Recovery and Media Services for OS/400 Table 4 contains a summary of the steps that are required for a full system recovery. Tape automation requires a minimum level of system functions to be recovered before automatic cartridge mounting can occur. In general, 3494 tape automation can occur after the configuration data is restored. For V3R6 and V3R7, tapes can automatically be mounted for you starting with step 3a, recover the BRMS licensed program, if the media library device is auto-configured, or created by the user. You have to specify the media library name and the volume name on the restore commands to mount the tapes automatically. Table 4. AS/400 recovery steps (using BRMS/400 and the 3494) BRMS/400 recovery step Description Command 1 Recover Licensed Internal Code Control panel function (02-D IPL) in manual mode Function code 24 for CISC Install Licensed Internal Code menu for RISC • Option 2 if restoring on a different system • Option 3 if restoring to the same system. 2a Recover operating system IPL or install the system menu (option 2) 2b Perform disk configuration. Refer to Backup and Recovery - Advanced, SC41-4305, if you plan to configure disk protection or user ASPs. In V3R7, this information is in Backup and Recovery - Basic, SC41-4304. 3a Recover BRMS/400 licensed program and data. You may also need to recover libraries named Q1ABRMSFnn. These libraries will be listed in your BRMS/400 Recovery Report. •RSTLIB QUSRBRM •RSTLIB QBRM •RSTLIB Q1ANRMSFnn 3b* Recover MLDD, 3494 library driver code (not needed for RISC AS/400 systems). • RSTLIB QMLD (for 3494 only) • RSTLIB QUSRMLD (for 3494) 3c Recover OS/400 Media and Storage Extensions. RSTLIB QMSE 4 Recover BRMS recovery data. RSTOBJ *ALL QUSRBRM 5 Recover user profiles. STRRCYBRM *SYSTEM *RESTORE Ensure that *SAVSECDTA has recovered. 6 Recover BRMS/400 required system libraries. •RSTLIB QGPL •RSTLIB QUSRSYS •RSTLIB QSYS2 7 Recover configuration data. STRRCYBRM *SYSTEM *RESTORE 8* Recover IBM product libraries. STRRCYBRM *IBM *RESTORE 9* Recover user libraries STRRCYBRM *ALLUSR *RESTORE 10* Recover document library. STRRCYBRM *ALLDLO *RESTORE 11* Recover objects in directories. 
This option is not available on V3R1 and V3R6. See 10.4, "Restoring the integrated file system" on page 208, for details. STRRCYBRM *LNKLIST *RESTORE
12* Recover spooled files WRKSPLFBRM
13a* Apply journal changes Refer to Backup and Recovery - Basic, SC41-4304, for information on how to apply journal changes.
13b* End subsystems ENDSBS SBS(*ALL) OPTION(*IMMED)
14* Recover authorizations RSTAUT
15* Return the system to normal mode. PWRDWNSYS OPTION(*IMMED) RESTART(*YES)
Notes: Step 3b is only required for the 3494 library device with V3R1 or V3R2. Step 8 through step 15 require no manual intervention.
10.2.2 Setting up the tape device for SAVSYS recovery
If you have an automatic cartridge loader or equivalent, insert the cartridges in the correct sequence. Ensure the media library devices are in the correct mode. For devices other than the 3494, this is automatic/sequential mode or manual mode. See the device documentation on how to properly change the mode for the hardware. The random or library mode cannot be used until OS/400 is loaded.
For the 3494 Automated Tape Library Data Server, use Library Manager to set up the library as a stand-alone device to mount your SAVSYS tape or the Licensed Internal Code distribution tape. See 8.4, "Library Manager for the 3494" on page 160, for more information on setting up stand-alone mode. If you have multiple tape devices inside the 3494 or multiple 3494s, the device names may vary between the different systems. Notice that for each AS/400 system, the tape drive name and the corresponding Library Manager device name for that drive have to be selected in stand-alone mode, for example:
AS/400 System   AS/400 Tape Device   Library Manager Device Name
SYSTEM01        TAP01                170
SYSTEM02        TAP01                170
SYSTEM03        TAP01                180
SYSTEM04        TAP01                180
SYSTEM01 and SYSTEM02 share the same device, and SYSTEM03 and SYSTEM04 share the other device.
10.2.3 Recovering the Licensed Internal Code and operating system
When you recover from a complete system loss, follow the steps in the Recovering Your Entire System report produced by BRMS/400. You must also follow the steps documented in Backup and Recovery - Basic, SC41-4304.
Select 02-D IPL in Manual Mode on the AS/400 control panel to load the Licensed Internal Code and OS/400 from the alternate IPL device. Select the required option (function code 24) on the front AS/400 control panel for systems at V3R1 and V3R2. For systems with V3R6 and V3R7, on the Install Licensed Internal Code display, select option 2 if you are recovering to a different system or option 3 if you are recovering to the same system, as detailed in Backup and Recovery - Basic, SC41-4304. Step 1 in the following example assumes that you are using V3R1 or V3R2:
STEP 1: Recover Licensed Internal Code Use the media shown in this example and the procedure for "Recovering the Licensed Internal Code" using function code 24, as described in chapter 10 of the book.
Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) *SAVSYS *FULL 01 6/04/00 20:35:20 0 1 RHSAVSYS ABC011 The disk units may be in non-configured status for different reasons so they have to be configured. Refer to Backup and Recovery - Advanced, SC41-4305, if you plan to configure disk protection or user ASPs. For V3R7, the disk configuration information is in Backup and Recovery - Basic, SC41-4304. The first sign-on to the system uses no password. Shortly after that, you are asked to change the password for QSECOFR. The display asks for the old password, as well as a new password. The old password for QSECOFR is set to the default value, QSECOFR. This new password display is only for V3R2 and V3R7. On V3R1 and V3R6, no password is required at this time. To see which tape devices are configured on your system, use the following command: WRKCFGSTS *DEV TAP* The following steps are for your reference only. You should not use them to perform your system recovery. You must always use the System Recovery Report produced by BRMS/400, which is customized to your system environment, along with Backup and Recovery - Basic, SC41-4304. Depending on your OS/400 release, the actual step numbers outlined here can change. In addition, reference to any documentation can also change with each release. Where possible, we attempted to identify the key differences between the various releases of OS/400. Important Chapter 10. Recovery using BRMS/400 199 10.2.4 Recovering BRMS/400 and system information When the installation of the Licensed Internal Code and operating system has completed, the BRMS/400 product and associated libraries must be recovered before you can use the product to perform other recovery operations. At this point on RISC systems, if you had auto-configure on, you can go to random mode on your 3494. You can use the CFGDEVMLB command to update the Robot device name (ROBOTDEV) parameter for the 3494 Automated Tape Library Data Server. See 9.3.3, “Creating a Robot Device Description (ROBOTDEV) for the 3494” on page 170, for more information. Note: On the restore command, you need to specify the media library device and the volume identifier to have the tapes mounted automatically. On CISC systems, you must wait until after step 7 when you have restored the configuration data before you can go to random mode. If you prefer to wait until after step 7 for RISC systems, you should DEALLOCATE the library resource and vary on the tape device. When step 7 is completed, vary off the tape device and vary on the library resource as ALLOCATED (UNPROTECTED). For token-ring attached libraries, you need to vary on the token-ring line. STEP 3: Recover the BRMS/400 product and associated libraries. The BRMS/400 product and associated libraries must be recovered before you can use the product to perform other recovery operations. Use WRKCFGSTS *DEV *TAP to see which tape devices are configured. Then run RSTLIB for each of the following libraries specifying SEQNBR and using media as shown below. 
Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) QUSRBRM *FULL 01 6/04/00 20:49:29 143 13 RHBRMS ABC017 QBRM *FULL 01 6/04/00 20:50:09 823 14 RHBRMS ABC017 QUSRMLD *FULL 01 6/04/00 20:50:54 8 15 RHBRMS ABC017 QMLD *FULL 01 6/04/00 20:50:56 375 16 RHBRMS ABC017 QMSE *FULL 01 6/04/00 20:51:10 5 17 RHBRMS ABC017 Use the sequence number for the following restore so you are sure to restore the correct objects in case there is more than one item of QUSRBRM on that tape. Using the sequence number also improves performance if you are using a 3590 tape device. STEP 4: Recover BRMS/400 related media information. You must recover this information for the BRMS/400 product to accurately guide you through remaining recovery operations. To do so, run RSTOBJ OBJ(*ALL) SAVLIB(QUSRBRM) MBROPT(*ALL) specifying library name, SEQNBR, and using media as shown below. Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) QUSRBRM *QBRM 01 6/05/00 8:21:30 10 1 RHBKUGRP ABC592 Before you recover the user profiles, clear the BRMS/400 device and media library information, and initialize the files with the tape devices currently configured on the system. Use the command: INZBRM OPTION(*DEVICE) Verify your BRMS/400 device and media library information for the correct settings (for example, next volume message, densities, device location, shared devices, and so on). Some of the values are reset to the defaults when you use the INZBRM OPTION(*DEVICE) command. 200 Backup Recovery and Media Services for OS/400 Note: On the V3R6 recovery report, there is no mention of INZBRM OPTION(*DEVICE) in step 5 even though it is available. On V3R1 systems, you have to use the WRKDEVBRM command to verify the device information. STEP 5: Recover user profiles. Before recovering user profiles, use the INZBRM *DEVICE command to clear the BRMS/400 device and media library information and initialize the files with the tape devices currently configured on the system. You should restore a current version of your system's user profiles. To do so, run STRRCYBRM OPTION(*SYSTEM) ACTION(*RESTORE) OMITLIB(*DELETE) using media shown in this example. Press F9 (Recovery defaults) on the Select Recovery Items display. Ensure the tape device name that you are using is correct. If recovering to a different system, you must specify *ALL on the Allow object differences (ALWOBJDIF) parameter and *NONE on the System resource management (SRM) parameter. Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) *SAVSECDTA *FULL 01 6/04/00 20:35:20 43 11 RHSAVSYS ABC011 If you are recovering to a different system and your security level is 30 or greater, *ALLOBJ special authority has been removed from all user profiles, except certain IBM-supplied profiles. Use the CHGUSRPRF command to give *ALLOBJ authority to user profiles who need it. Note: You should use the CHGUSRPRF command to grant *ALLOBJ authority to user profiles after the recovery is complete as detailed in Backup and Recovery - Basic, SC41-4304. You should also review the implications of setting the Allow object differences parameter (ALWOBJDIF) to *ALL in Backup and Recovery - Basic, SC41-4304. You should only use *ALL when you perform a full system recovery and there is no data on the system. Specifying ALWOBJDIF(*ALL) when you recover to a different system allows the restored data to be automatically linked to the authorization lists associated with the object. 
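For reference, the individual RSTLIB commands called for in step 3 might look like the following sketch, using the sequence numbers and volume from the sample report above. The device name TAPMLB01 is only an assumption; specify the tape or media library device that is configured on your system:
RSTLIB SAVLIB(QUSRBRM) DEV(TAPMLB01) VOL(ABC017) SEQNBR(13)  /* BRMS/400 user data library          */
RSTLIB SAVLIB(QBRM) DEV(TAPMLB01) VOL(ABC017) SEQNBR(14)     /* BRMS/400 product library            */
RSTLIB SAVLIB(QMSE) DEV(TAPMLB01) VOL(ABC017) SEQNBR(17)     /* Media and Storage Extensions        */
The same pattern applies to QUSRMLD and QMLD if your recovery report lists them.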
You must restore specific system libraries before you can use BRMS/400 to perform other recovery operations and tape automation. These libraries are QGPL, QUSRSYS, and QSYS2. QUSRSYS contains the tape exit registration information, and QSYS2 contains the LAN code for the 3494 media library. The QGPL library must be restored prior to the QUSRSYS library because there are dependencies in QGPL that QUSRSYS needs.
Step 6 is a new step beginning with V3R6 and V3R2. In V3R1, this step is equivalent to step 5.
STEP 6: Recover BRMS/400 required system libraries. You must restore specific system libraries before you can use BRMS/400 to perform other recovery operations. To do so, run STRRCYBRM OPTION(*SYSTEM) ACTION(*RESTORE) OMITLIB(*DELETE) using media shown below.
Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) QGPL *FULL 01 6/04/00 20:51:11 120 18 RHBRMS ABC017 QUSRSYS *FULL 01 6/04/00 20:51:55 1,045 19 RHBRMS ABC017 QSYS2 *FULL 01 6/04/00 20:53:12 104 20 RHBRMS ABC017
You are now ready to restore your configuration data. When you restore the configuration data, you should use F9 to see the restore command defaults from the Select Recovery Items display. If you are restoring configuration data on the same system that you had saved from, you should leave the System resource management (SRM) parameter set to *ALL. However, if you are restoring on a different system, you should change the parameter to *NONE. If you have restored the SRM database and the hardware configuration does not match, refer to the "Correcting Problems with the System Resource Management Database" chapter in Backup and Recovery - Basic, SC41-4304, to correct the errors. For RISC AS/400 systems, only token-ring descriptions are found in the SRM database. The SRM database has been incorporated into Hardware Resource Manager (HRM). You should not see the same problems with a corrupted SRM database as with the CISC AS/400 systems.
STEP 7: Recover configuration data. You should restore a current version of your system configuration. To do so, run STRRCYBRM OPTION(*SYSTEM) ACTION(*RESTORE) OMITLIB(*DELETE) using media shown below. Use the INZBRM *DEVICE command to clear the BRMS/400 device and media library information and initialize the files with the tape devices currently configured on the system after you have restored the configuration.
Item Type ASP Date Time Objects Seq Group Volume(s) *SAVCFG *FULL 01 6/04/00 20:35:20 59 12 RHSAVSYS ABC011
For a LAN-attached 3494 Automated Tape Library Data Server, you must vary on the LAN line description. To vary on the LAN description, use the command:
WRKCFGSTS *LIN
If your 3494 is attached through an RS232 connection, you do not need to vary on the RS232 line description.
Use the following command to clear the BRMS/400 device and media library information and initialize the files with the tape devices currently configured on the system:
INZBRM OPTION(*DEVICE)
Verify your BRMS/400 device and media library information for the correct settings (for example, next volume message, densities, device location, shared devices, and so on). Some of the information is reset to the default values by using the INZBRM OPTION(*DEVICE) command after you restore the configuration.
10.2.5 Completing the recovery
After the previous step, for CISC machines, the restricted state portion of the recovery is complete, so you can return to random/library mode.
If you chose not to go to random mode for RISC systems at step 3, you may do so now. If you used the 3494 in stand-alone mode and there is another cartridge required, the cartridge that is still inside the drive from the stand-alone mode is demounted, and the required cartridge is mounted. On the Library Manager, you see the demount complete display shown as shown in Figure 116 on page 162. You do not have to reset stand-alone mode on the Library Manager; this is done automatically for you. STEP 8: Recover IBM product libraries. You should restore the current version of IBM product libraries on your system. To do so, run STRRCYBRM OPTION(*IBM) ACTION(*RESTORE) OMITLIB(*DELETE) using the media shown here. Press F9 (Recovery defaults) on the Select Recovery Items display. Ensure the tape device name that you are using is correct. Saved Save Save File Control 202 Backup Recovery and Media Services for OS/400 Item Type ASP Date Time Objects Seq Group Volume(s) #CGULIB *FULL 01 6/04/00 19:45:02 4 3 RHBKUGRP ABC454 #COBLIB *FULL 01 6/04/00 19:45:02 86 4 RHBKUGRP ABC454 #DFULIB *FULL 01 6/04/00 19:45:02 6 5 RHBKUGRP ABC454 #DSULIB *FULL 01 6/04/00 19:45:02 7 6 RHBKUGRP ABC454 #RPGLIB *FULL 01 6/04/00 19:45:02 58 7 RHBKUGRP ABC454 #SDALIB *FULL 01 6/04/00 19:45:02 4 8 RHBKUGRP ABC454 #SEULIB *FULL 01 6/04/00 19:45:02 6 9 RHBKUGRP ABC454 QADM *FULL 01 6/04/00 19:45:02 178 10 RHBKUGRP ABC454 QADMDISTP *FULL 01 6/04/00 19:45:02 15 11 RHBKUGRP ABC454 QAFP *FULL 01 6/04/00 19:45:02 132 12 RHBKUGRP ABC454 QAFPLIB *FULL 01 6/04/00 19:45:02 2 13 RHBKUGRP ABC454 QBBCSRCH *FULL 01 6/04/00 19:45:02 247 14 RHBKUGRP ABC454 QBGU *FULL 01 6/04/00 19:45:02 84 15 RHBKUGRP ABC454 QCBL *FULL 01 6/04/00 19:45:02 75 17 RHBKUGRP ABC454 In the preceding step, we showed you only a few libraries to restore to give you an idea. In reality, the list is longer than shown in the preceding report. Your BRMS/400 System Recovery Report lists all of the IBM libraries that are required to be restored. Before the recovery, the display in Figure 139 shows where you can select which libraries to recover, or you can press F16 on the Select Recovery Items display to select all of the libraries. Unless you are absolutely sure of the IBM product libraries that you want to omit, and if you are concerned about the recovery time window, we recommend that you select all of the IBM product libraries. Figure 139. Selecting Recovery Items You can use the F9 function key to change the recovery defaults as shown in Figure 140. You can set the recovery defaults to specify: • The tape drive you are using on the device parameter. • *ALL for the allow object difference parameter if recovering on a different system. • *ALL for the system resource management parameter if you are recovering on your system. You should use *NONE if you are recovering on a different system. Select Recovery Items SYSTEMA Type options, press Enter. Press F16 to select all. 1=Select 4=Remove 5=Display 7=Specify object Saved Save Volume File Expiration Objects Opt Item Date Time Type Serial Seq Date Saved #CGULIB 6/04/00 19:45:02 *FULL ABC454 3 2/23/01 4 #COBLIB 6/04/00 19:45:02 *FULL ABC454 4 2/23/01 86 #DFULIB 6/04/00 19:45:02 *FULL ABC454 5 7/09/00 6 #DSULIB 6/04/00 19:45:02 *FULL ABC454 6 7/09/00 7 #RPGLIB 6/04/00 19:45:02 *FULL ABC454 7 7/09/00 58 #SDALIB 6/04/00 19:45:02 *FULL ABC454 8 7/09/00 4 #SEULIB 6/04/00 19:45:02 *FULL ABC454 9 7/09/00 6 QADM 6/04/00 19:45:02 *FULL ABC454 10 7/09/00 178 QADMDISTP 6/04/00 19:45:02 *FULL ABC454 11 7/09/00 15 Chapter 10. 
Recovery using BRMS/400 203 Figure 140. Selecting Recovery Items If the BRMS/400 recovery ends in error or is cancelled using F12, you can restart the procedure using the STRRCYBRM *RESUME command. The Select Recovery Items display is set at the library that has to be restored next. Note: In V3R1, any changes to recovery defaults do not remain in force if the recovery ends in error or has been cancelled using F12. Beginning with V3R6 and V3R2, the changes in recovery defaults remain until the user signs off the system. The next step is to recover user libraries. Depending on how you saved the libraries, you can choose the STRRCYBRM OPTION (*ALLUSR) or STRRCYBRM OPTION(*CTLGRP) commands. The latter gives you more control and allows you to start concurrent restores. In a full restore, BRMS/400 restores full and incremental saves. You may want to avoid unnecessarily restoring multiple times by using control groups. You should also give consideration to unnecessarily rebuilding access paths when restoring your data. The following step only shows you a few libraries for reference. The Recovering Your Entire System report lists all of the user libraries that need to be restored. STEP 9: Recover user libraries. You should restore the current version of your libraries. To do so, run STRRCYBRM OPTION(*ALLUSR) ACTION(*RESTORE) OMITLIB(*DELETE) using the media shown here. Depending on your recovery strategy, you may choose to use the STRRCYBRM OPTION(*CTLGRP) ACTION(*RESTORE) OMITLIB(*DELETE) command to restore individual control groups. ATTENTION - If you have logical files whose based-on file is in a different library, you must restore all based-on files before you can restore the logical file. If you use journaling, the libraries containing the journals must be Select Recovery Items SYSTEMA Type options, press Enter. Press F16 to select all. 1=Select 4=Remove 5=Display 7=Specify object .............................................................................. : : : Restore Command Defaults : : : : Type information, press Enter. : : Device . . . . . . . . . . . . . . TAP02 Name, *MEDCLS : : End of tape option . . . . . . . . *UNLOAD *REWIND, *LEAVE, *UNLOAD : : Option . . . . . . . . . . . . . . *ALL *ALL, *NEW, *OLD, *FREE : : Data base member option . . . . . . *ALL *MATCH, *ALL, *NEW, *OLD : : Allow object differences . . . . . *ALL *NONE, *ALL : : Document name generation . . . . . *SAME *SAME, *NEW : : Restore to library . . . . . . . . *SAVLIB Name, *SAVLIB : : Auxiliary storage pool ID . . . . . *SAVASP 1-16, *SAVASP : : System resource management . . . . *NONE *ALL, *NONE, *HDW, *TRA : If you used option 21 from the SAVE menu to save your entire system, you should use option 21 from the RESTORE menu even if you have BRMS/400 installed. You should not mix native save and restore menu options with BRMS/400 save and resotore process. Important 204 Backup Recovery and Media Services for OS/400 restored before restoring the journaled files. 
Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) #LIBRARY *FULL 01 6/04/00 20:17:47 5 113 RHBKUGRP ABC454 ASN *FULL 01 6/04/00 20:17:47 8 114 RHBKUGRP ABC454 BRMSTST1 *FULL 01 6/04/00 20:17:47 119 115 RHBKUGRP ABC454 BRMSTST2 *FULL 01 6/04/00 20:17:47 1 116 RHBKUGRP ABC454 BRMSTST3 *FULL 01 6/04/00 20:17:47 11 117 RHBKUGRP ABC454 QDSNX *FULL 01 6/04/00 20:17:47 3 118 RHBKUGRP ABC454 QPFRDATA *FULL 01 6/04/00 20:17:47 3 119 RHBKUGRP ABC454 QS36F *FULL 01 6/04/00 20:17:47 1 120 RHBKUGRP ABC454 QUSRIJS *FULL 01 6/04/00 20:17:47 62 121 RHBKUGRP ABC454 QUSRINFSKR *FULL 01 6/04/00 20:17:47 1 122 RHBKUGRP ABC454 RHAHN *FULL 01 6/04/00 20:17:47 7 123 RHBKUGRP ABC454 SPLMCHRM *FULL 01 6/04/00 20:17:47 116 124 RHBKUGRP ABC454 STEP 10: Recover document library. You should restore the current version of your documents, folders, and mail. To do so, run STRRCYBRM OPTION(*ALLDLO) ACTION(*RESTORE) using the media shown here. Before you begin, use the Backup and Recovery - Basic book to determine if Document Library Objects need to be reclaimed. To do so, run RCLDLO DLO(*ALL). Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) *ALLDLO *FULL 01 6/04/00 20:54:16 3,569 21 RHSAVSYS ABC011 Step 11 is new beginning with V3R2 and V3R7. You can run the STRRCYBRM command with the *LNKLIST value to restore the integrated file system (IFS) objects. For V3R6, the IFS objects are restored when you restore the LINKLIST item listed under your user libraries in the backup control group. When you restore this control group, your IFS objects are restored. For V3R1, you have to remember to restore IFS objects manually. In V3R1, there is no special *LINK value or *LNK type, and you cannot use LINKLIST in the STRRCYBRM command. You must use an exit (*EXIT) within the control group that performs a save operation (SAV) of the integrated file system to a save file. Once you recover the library containing the save file, you can use the Restore Object (RST) command outside of BRMS/400 to restore the integrated file system objects. See 6.6, “Saving and restoring V3R1 IFS data with BRMS/400” on page 146, for additional information. STEP 11: Recover objects in directories. Run STRRCYBRM OPTION(*LNKLIST) ACTION(*RESTORE) using the media shown here. Saved Save Save File Control Item Type ASP Date Time Objects Seq Group Volume(s) LINKLIST *FULL 01 6/05/00 8:26:51 4,072 1 RHBKUGRP ABC594 Step 12 is also new beginning with V3R2 and V3R7. Although the save spooled file support within BRMS/400 is supported since V3R1, the recovery report for V3R1 and V3R6 does not guide you through the actual recovery of the spooled files. With V3R1 and V3R6, you have to remember to use the WRKSPLFBRM command to recover your spooled files. STEP 12: Recover Spooled files. If spooled files were saved, restore your spooled files using the WRKSPLFBRM command. STEP 13: Apply journal changes. To determine if you need to apply journal changes, refer to Task 2 - Determining Whether You Need to Apply Journaled Changes under the chapter of Restoring Changed Objects and Applying Journaled Changes as detailed in the Backup and Recovery - Basic book. STEP 14: Recover authorizations. Chapter 10. Recovery using BRMS/400 205 You should recover private authorizations if user profiles were recovered in an earlier step. To do so, end all subsystems using ENDSBS SBS(*ALL) OPTION(*IMMED) and then run RSTAUT USRPRF(*ALL). This operation requires a dedicated system and can be long running. 
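For quick reference, the recovery commands used in STEP 9 through STEP 14 are recapped below. This is only a condensed restatement of the steps above; the option you use for user libraries (*ALLUSR or *CTLGRP), the need for RCLDLO, and the availability of *LNKLIST depend on your release, your recovery strategy, and your own System Recovery Report:

   STRRCYBRM OPTION(*ALLUSR) ACTION(*RESTORE) OMITLIB(*DELETE)    (STEP 9, or OPTION(*CTLGRP) per control group)
   RCLDLO DLO(*ALL)                                               (STEP 10, only if a reclaim is required)
   STRRCYBRM OPTION(*ALLDLO) ACTION(*RESTORE)                     (STEP 10)
   STRRCYBRM OPTION(*LNKLIST) ACTION(*RESTORE)                    (STEP 11, V3R2 and V3R7 or later)
   WRKSPLFBRM                                                     (STEP 12, if spooled files were saved)
   ENDSBS SBS(*ALL) OPTION(*IMMED)                                (STEP 14)
   RSTAUT USRPRF(*ALL)                                            (STEP 14)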
After the recovery has completed, you should check the job log to ensure all objects were restored and that all authorities were correctly recovered. The job log contains information about the restore operation. Print the job log and any other remaining spooled output. To print the job log, use the SIGNOFF *LIST command or the DSPJOBLOG * *PRINT command. The CPC3703 message is sent to the job log for each library that was successfully restored. The CPF3773 message is sent to tell you the number of objects that were restored. Sometimes objects may not be restored for various reasons. You need to identify these objects and take appropriate action to recover these objects. You must check for all error messages, correct the errors, and restore any missing objects from the media. STEP 15: IPL Return system to normal mode and IPL using PWRDWNSYS OPTION(*IMMED) RESTART(*YES). 10.3 Recovering specific objects If the system is within a network and you want to restore a single library from another system of the network, you have to find the tape that contains the object. You can use BRMS/400 to search for the object in several ways: • Use the WRKMEDBRM command to list the volumes and select option 13 to list the contents of the volume. If you saved the object details, you can use option 9 to list the objects. • Search for the library using the WRKMEDIBRM command and use option 9 to list the objects. • Use the WRKOBJBRM command to list objects directly. Figure 141 on page 206 shows the parameters for the WRKMEDIBRM command. 206 Backup Recovery and Media Services for OS/400 Figure 141. Work with Media Information (WRKMEDIBRM) You have to press Enter to see these last two entries. The default is *LCL to search the local system. However, you may enter the location and network identification of another system in the network to work with that system. If the entry in the Receive media info parameter of the system group is set to *LIB, all library entries from the history of the remote system are on your system. Otherwise, a DDM link is activated and you receive the information from the remote system. The values in the FROMSYS parameter are ignored if you specify a volume identifier in the VOL parameter. In this case, the values associated with the volume are used. When you press Enter, the Work with Media Information display is shown (Figure 142). Select option 7 to select the library to restore. Figure 142. Work with Media Information display Since there may be more than one entry, make sure you choose the entry with the date and time that correspond to the save from which you want to recover. When you press Enter, a confirmation display is shown (Figure 143). Work with Media Information (WRKMEDIBRM) Type choices, press Enter. Library . . . . . . . . . . . . > RHAHN* Name, generic*, *ALL... Volume . . . . . . . . . . . . . *ALL Character value, *ALL Auxiliary storage pool ID . . . *ALL 1-16, *ALL Control group . . . . . . . . . *ALL Name, *ALL, *BKUGRP... Save type . . . . . . . . . . . *ALL *ALL, *FULL, *CUML, *INCR... + for more values Select dates: From date . . . . . . . . . . *BEGIN Date, *CURRENT, *BEGIN, nnnnn To date . . . . . . . . . . . *END Date, *CURRENT, *END, nnnnn Save status . . . . . . . . . . *ALL *ALL, *NOERROR, *ERROR Sequence option . . . . . . . . *DATE *DATE, *LIB, *VOL Entries to be displayed first . *LAST *LAST, *FIRST From system . . . . . . . . . . SYSTEM02 Output . . . . . . . . . . . . . * *, *PRINT Work with Media Information SYSTEM02 Position to Date . . . . . 
Type options, press Enter. 2=Change 4=Remove 5=Display 6=Work with media 7=Restore 9=Work with saved objects Saved Save Volume File Expiration Opt Item Date Time Type Serial Seq Date RHAHN 6/10/00 17:51:58 *FULL RH0002 1 *VER 002 7 RHAHN2 6/10/00 17:52:17 *FULL RH0002 2 *VER 002 Chapter 10. Recovery using BRMS/400 207 Figure 143. Select Recovery Items display Use F9 to change the recovery default options and F14 to submit to batch if required. This procedure can be used between systems with different releases and also between systems with CISC and RISC OS/400 releases. If you restore to a previous release, you have to select your save with the Target release parameter, specifying the release to which you are going. This can be selected in the backup control group or in the save commands such as SAVLIBBRM or SAVOBJBRM. At the completion of the recovery, you receive a message that tells you how many objects have been restored. Use the following command to look at the recovery activity: DSPLOGBRM *RCY If you have not saved object detail, you cannot use BRMS/400 to search for the object. However, you can still restore it if you know its name. At this point, if you replace option 1 with option 7 (Specify object), you are prompted with the Restore Object Display (Figure 144). Figure 144. Recovering individual objects 10.3.1 Recovering individual user profiles Beginning with V3R7, you can recover individual user profiles in a similar manner to recovering objects. Select Recovery Items SYSTEM02 Type options, press Enter. Press F16 to select all. 1=Select 4=Remove 5=Display 7=Specify object Saved Save Volume File Expiration Objects Opt Item Date Time Type Serial Seq Date Saved 1 RHAHN2 6/10/00 17:52:17 *FULL RH0002 2 *VER 002 8 Restore Object (RSTOBJ) Type choices, press Enter. Objects . . . . . . . . . . . . Name, generic*, *ALL + for more values Saved library . . . . . . . . . > QUSRBRM Name Device . . . . . . . . . . . . . > TAP01 Name, *SAVF + for more values Object types . . . . . . . . . . > *ALL *ALL, *ALRTBL, *BNDDIR... + for more values Volume identifier . . . . . . . > RH0002 Character value, *MOUNTED.. Sequence number . . . . . . . . > 0003 1-9999, *SEARCH Label . . . . . . . . . . . . . *SAVLIB End of tape option . . . . . . . > *REWIND *REWIND, *LEAVE, *UNLOAD Option . . . . . . . . . . . . . > *ALL *ALL, *NEW, *OLD, *FREE 208 Backup Recovery and Media Services for OS/400 In earlier versions, when you save user profiles using *SAVSECDTA in a backup control group, you can select Retain Object Detail. If you run the WRKMEDIBRM command and select option 9 on the *SAVSECDTA line, a display is shown with all of the user profiles listed. However, if you select option 7 (Restore on a single user profile), you receive the error message BRM1659 - Restoring security data not valid. The only way to restore a single user profile is to use the native OS/400 RSTUSRPRF command. 10.4 Restoring the integrated file system Beginning with V3R6 and V3R2, BRMS/400 has been enhanced to support save and restore of the integrated file system. The integrated file system information is saved from BRMS/400 by using a control group to perform the save. There are no new BRMS/400 commands to save and restore the integrated file system information. The new BRMS/400 function is implemented through a backup item called LINKLIST and a list type of *LNK. Beginning with V3R7, there is a new special value called *LINK. 
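Expressed as a single command, the prompted restore in Figure 144 corresponds to something like the following sketch. The object name is whatever you entered after selecting option 7 (Specify object); the saved library, device, volume, sequence number, and remaining values are the ones BRMS/400 filled in from its media information in this example:

   RSTOBJ OBJ(object-name) SAVLIB(QUSRBRM) DEV(TAP01) OBJTYPE(*ALL)
          VOL(RH0002) SEQNBR(3) ENDOPT(*REWIND) OPTION(*ALL)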
See 6.5, “Restoring IFS directories with BRMS/400” on page 142, for more details on the integrated file system. For V3R1, you can only save the integrated file system using the SAV command in a *EXIT in a control group. BRMS/400 recovers the library where you processed the SAV command. However, you must use RST outside of BRMS/400 to recover the integrated file system. See 6.6, “Saving and restoring V3R1 IFS data with BRMS/400” on page 146, for more information. © Copyright IBM Corp. 1997, 2001 209 Chapter 11. Planning for upgrades to PowerPC AS This chapter discusses the issues and considerations that you need to investigate when you have the BRMS/400 licensed program installed and you are planning to upgrade your IMPI processor to a PowerPC AS processor (CISC to RISC). At all times, you must follow the instructions documented in the AS/400 Road Map for Changing to PowerPC Technology, SA41-4150, for all your upgrade methods. The latest edition contains additional updates on the steps that are related to BRMS/400. We provide this list these for planning purposes only: • Ensure that you have the following books available: – AS/400 Road Map for Changing to PowerPC Technology, SA41-4150 – Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) – Automated Tape Library Planning and Management, SC41-5309 • BRMS/400 stores information that depends on your system name (SYSNAME network attribute). If your target system has a different system name than your source system, order and read Informational APAR II09475. • Informational APAR II09772 is an index to all informational APARs about BRMS/400. You should check this index for new information periodically during your upgrade process. • PowerPC AS releases handle media libraries differently than IMPI releases of OS/400. If you have media libraries, you should order Automated Tape Library Planning and Management, SC41-5309, for your target release before you begin the upgrade process. This book describes the differences in how the system handles media libraries. You can use it to help you plan the changes you might need to make after you upgrade. If you have access to the Internet, use the following procedure to find current information about upgrading BRMS/400 and other IBM products: 1. Access the AS/400 service home page: http://as400service.rochester.ibm.com 2. From the Service page, select AS/400 Authorized Program Analysis Reports (APAR). 3. From the APAR page, select all Informational APARs. 4. Use the search function to find Informational APARs about BRMS/400 (or other licensed programs that you have installed). 11.1 Preparing BRMS/400 on your source system Complete the following steps before you begin the upgrade procedure: 1. To create a printed record of your BRMS/400 device information, follow these steps: a. Type the WRKDEVBRM command and press the Enter key. You see the Work with Device Information display with a list of the devices that BRMS/400 uses on your system. b. Press the Print key. 210 Backup Recovery and Media Services for OS/400 c. In the Opt column next to the first device, type 5 (Display). You see the Display Device Information display. d. Press the Print key. e. Page down to display additional information about the device. f. Press the Print key again. g. Press F12 (Cancel). You return to the Work with Device Information display. h. If you have another BRMS/400 device, repeat steps c through g for the next device. i. Retrieve your printout from the printer. 
Save it for use at the end of your upgrade process. 2. If you are using the side-by-side upgrade method, continue with step 4. 3. If your source system is part of a BRMS/400 network (sharing a media device with other systems, for example), you need to remove your system from the BRMS/400 network before you start the upgrade process. Complete the following steps: a. Locate your copy of Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). b. To ensure that the other systems in the BRMS/400 network have current information from your system, type the following command: DSPPFM QUSRBRM/QA1ANET Press the Enter key. You see the Display Physical File Member display. c. If you see the Selected member contains no record message, continue with step e. If other systems in your BRMS/400 network are listed in the file member, you need to establish communications with those systems. d. Wait until communications is established. Then return to step a. e. Follow the instructions in Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) to remove your source system from your BRMS/400 network. Specify *NO for the Remove media records parameter. 4. To ensure that your source system has a critical PTF installed, use the Display PTF (DSPPTF) command. If your source system is running V2R3M0 or V3R0M5, type the command: DSPPTF 5798RYT If your source system is running V3R1 or a later release, type: DSPPTF 57nnBR1 Replace nn with the appropriate number for your release. 5. On the Display PTF Status display, look for the appropriate PTF for your release: V2R3M0 - SF26727 V3R0M5 - SF26727 V3R1M0 - SF35187 V3R2M0 - SF35188 Chapter 11. Planning for upgrades to PowerPC AS 211 6. If you do not have the appropriate PTF applied, order it and apply it. You must apply the PTF for your release before you begin the upgrade process. 11.2 BRMS considerations for saving user information As part of the upgrade process, we strongly recommend that you use the Enhanced Upgrade Assistant tool to perform your save, prior to upgrading to your AS/400 system using PowerPC technology. After the upgrade is successful, you can continue using BRMS/400 for your normal operations. Before you begin your save using the Enhanced Upgrade Assistant tool, you must complete the following tasks: 1. Sign on the system console or a workstation that is assigned to the controlling subsystem. Sign on with a user profile that has all the authorities that Enhanced Upgrade Assistant requires, such as QSECOFR. This ensures that you have the authority that you need to place the system in the necessary state and to save everything. 2. Make sure that Client Access is not active at your workstation. 3. If you plan to run the save procedure immediately, make sure that no jobs are running on the system. Use the WRKACTJOB command. If you plan to schedule the save procedure, send a message to all the users that informs them when the system will be unavailable. 4. If you use the LAN server, QNetWare, or Lotus Notes licensed programs, you must vary off the network server descriptions before you begin the save procedure. 5. If you are using MQSeries (5763-MQ1 or 5763-MQ2), you need to quiesce MQSeries for OS/400 before you save the system. 6. Mount the first tape. • Do not use BRMS/400 to perform this save operation. • Perform the following steps to disable BRMS/400 from this system: 1. Type: WRKPCYBRM *SYS 2. Select option 1 (Change System Policy). 3. Change the Media Monitor parameter to *NO. 
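For example, on a V3R1M0 source system, where the BRMS/400 product ID follows the 57nnBR1 pattern as 5763BR1, the check for the required PTF might look like the following sketch (adjust the product ID and PTF number for your own release):

   DSPPTF LICPGM(5763BR1) SELECT(SF35187)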
• Do not use tapes that are enrolled in BRMS, contain active data, or are in a media library device. Ensure the tape volumes you use are scratch volumes, and label your tapes correctly. Note: If you are using a shared media inventory environment with BRMS/400, the tapes you use to perform the save for the upgrade are not protected from being overwritten while the BRMS/400 media monitor is turned off. • This save operation does not affect the information that BRMS/400 stores for managing the process of saving changed objects. BRMS considerations 212 Backup Recovery and Media Services for OS/400 7. If you are using a 3494, 9427, 3570, or 3590 media library device for the save, the media library device cannot be in random or library mode. The media library device must be in stand-alone, automatic, sequential, or manual mode. Refer to the Operator's Guide for your media library device for instructions on setting the correct mode. 8. After the save has completed successfully, you need to return to the original configuration for BRMS/400 and media library devices. a. Perform the following tasks to enable BRMS/400 after the save: i. Type: WRKPCYBRM *SYS ii. Select option 1 (Change System Policy). iii. Change the Media monitor parameter to *YES. b. Return the media library device back to random or library mode if you had changed it prior to the save operation. 11.3 Preparing BRMS/400 on your target system To prepare BRMS/400 to run on your target system, complete the following steps: 1. Check the PSP document for your target release to determine whether you need to order and install any critical (HIPER) PTFs that affect BRMS/400 or automated tape libraries. 2. If you have not already updated license information for the BRMS/400 licensed program, complete the following steps: a. Type WRKLICINF and press the Enter key. b. On the Work with License Information display, locate product 5716BR1. c. In the option column next to 5716BR1, type 2 (Change) and press the Enter key. You see the prompt display for the Change License Information (CHGLICINF) command. d. For the Usage limit parameter, specify the value from your BRMS/400 license agreement. Press the Enter key. You see the CPA9E1B message Usage limit increase must be authorized. e. To respond to the message, type G and press the Enter key. 3. If you have media library devices, complete the following steps: a. Locate your printout of your BRMS/400 device information, which you printed in 11.1, “Preparing BRMS/400 on your source system” on page 209. b. To re-initialize your media library devices on your system (because of the differences in how they are handled on PowerPC AS compared to IMPI), type the following command: INZBRM *DEVICE Note: If your system has many devices, this command might run for a long time. c. To display the locations for media library devices, type the following command and press the Enter key: WRKMLBBRM Chapter 11. Planning for upgrades to PowerPC AS 213 d. On the Work with Media Libraries display, check the values in the location column. Compare the information on this display to your printout from your source system and make changes if necessary. The location here needs to match the location on the WRKDEVBRM command and the WRKMEDBRM command. If necessary, use option 2 (Change) to make changes. e. Press F12 (Cancel). f. To update device information, type the following command and press the Enter key: WRKDEVBRM g. On the Work with Device Information display, type 2 (Change) in the Opt column next to the first device. 
You see the Change Device Information display. Review the online information and Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for help with new parameters. The following list contains some guidelines: • For the Auto enroll media parameter, specify *YES if you need to mount tapes manually and you want BRMS/400 to use the device. • For the Shared device parameter, specify *NO if your system is in a BRMS/400 network. Note: This is opposite of what you specified on your IMPI system because the system handles tape libraries different on PowerPC AS releases. • For the Shared device wait parameter, specify a value that is appropriate for your network environment. If you encounter wait problems, increase the value by 30 seconds. h. Review the information in the PowerPC AS version of Automated Tape Library Planning and Management, SC41-5309. It describes the differences in how the IMPI and PowerPC AS releases handle media devices. If you want to arrange your tape libraries as a single location, you can order the following PTFs for your target release: • V3R6M0: SF33834, MF13080, MF12538 • V3R7M0: SF35915, MF13404, MF13405, MF13406, MF13407 On your target system, you need to understand the relationship between media library devices, tape resources, and tape devices. A tape resource represents a physical tape unit on your system. Only one device description (either the media library device or the tape device) can allocate a specific tape resource at any given time. Therefore, to use a tape device in stand-alone mode, you must vary off the media library device that has the tape device allocated. Similarly, you must vary off a tape device before you can allocate the tape resource to a media library device. On PowerPC AS releases, when you vary on a media library device, the system does not automatically vary on all of the associated resources. You must allocate the tape resources from the Work with Media Library Device Status (WRKMLBSTS) display. Notice also that the MLDD subsystem and the associated commands no longer exist on your RISC system. Media library functions are integrated in the Licensed Internal Code and OS/400. If you have CL programs or written procedures that use the IMPI commands, you need to update those programs and procedures to use the new media library commands. 214 Backup Recovery and Media Services for OS/400 i. To review the assignment of media, type the WRKMEDBRM command and press the Enter key. j. Review the location information on the Work with Media display. If media is not assigned correctly, use option 8 (Move) to make corrections. k. Press F12 (Cancel). l. Type WRKCTLGBRM and press the Enter key. m. On the Work with Backup Control Groups display, type 8 (Change attributes) in the Opt column next to the first control group. n. On the Change Backup Control Group Attributes display, check the Backup devices field. If it has a tape device name, correct it to match your new library name. If it is set to *MEDCLS, you do not need to make changes. o. After you make the changes, press Enter. You return to the Work with Backup Control Groups display. p. On the Work with Backup Control Groups display, you can use option 2 (Edit entries) to set up your control groups to use new options. Consult the online information or the BRMS/400 book for more information. q. Repeat steps m through p for any additional backup control groups. r. Repeat steps m through p for archive control groups. Correct the tape library names if necessary. s. 
To review your system policy, type WRKPCYBRM *SYS and press the Enter key. You see the System Policy menu. t. Select option 1 (Change system policy). u. On the Change System Policy display, change the devices to match the new library names (if necessary). Press the Enter key. v. Use the following command to review and update your move policy (if needed): WRKPCYBRM *MOV w. Use the following command to review and update your media policy if needed (the storage location, in particular): WRKPCYBRM *MED x. If you have any CL programs that use the SAVxxxBRM commands, ensure that the programs specify the media library rather than the tape device. y. If you have a 3494 tape library connected to a LAN, make sure that your PC library manager software is at 511.05 or a later level. 4. To update the maintenance information for BRMS/400, complete the following steps: a. Type STRMNTBRM and press the Enter key. Wait for the system to complete processing. Then continue with the next step. Note: When you run BRMS/400 maintenance, the system writes multiple files to your output queue. b. Type INZBRM *REGMED and press the Enter key. Wait for the system to complete processing. Then continue with the next step. Chapter 11. Planning for upgrades to PowerPC AS 215 c. Type STRMNTBRM and press the Enter key. The system registers your BRMS/400 media on your upgraded system and rebuilds the BRMS/400 media tables. 5. If your target system is part of a BRMS/400 network, follow the instructions in Chapter 5, “BRMS/400 networking” on page 97, to connect your system to the network. Note: If your source system was previously part of the network, make sure that you follow the instructions for copying current media information into a new file. Also, you should create a backup copy of the QUSRBRM library on each system in the network before you begin. 6. If your target system is part of a BRMS/400 network that has both PowerPC AS and IMPI systems, do not use the networking feature that provides sharing of library-level information. If your network has all PowerPC AS systems, you can complete the following steps to share library-level information: a. Type WRKPCYBRM *SYS and press the Enter key. b. On the Work with Policy menu, select option 4 (Change network group). c. On the Change Network Group display, specify *LIB for the Set the receive media information parameter. 11.4 Re-synchronizing BRMS/400 after an upgrade If you are performing your upgrades using the side-by-side upgrade method or the staged upgrade offering where both your source system and the target system are running in parallel, you need to re-synchronize the target system. This ensures that the data that has changed on the source system is duplicated on the target system. Because of the complexities involved during object conversion on PowerPC AS systems, and due to the changes that happen during the installation of the licensed programs, you must carefully plan how to perform this re-synchronization. Your first step is to follow the instructions documented in Chapter 29 in AS/400 Road Map for Changing to PowerPC Technology, SA41-4150. These instructions provide an overall view of the steps that need to be carried out. To re-synchronize the BRMS/400 licensed program, complete the following steps: 1. On your production system, stop all activity that might place locks on objects in the BRMS/400 libraries. If you have scheduled jobs that use BRMS/400, you need to hold them. 2. Mount a tape that is compatible with the tape unit on your test system. 3. 
Type the following command: SAVLIB LIB(QBRM QUSRBRM) DEV(tape-device) Note: If you want, you can use save files and transfer the libraries electronically. 4. On the test system, complete the following steps: a. Stop all activity that might place locks on objects in the BRMS/400 libraries. If you have scheduled jobs that use BRMS/400, you need to hold them. b. Save BRMS/400 licensed program, so when you reinstall the licensed program, you do not have to go through applying PTFs again: 216 Backup Recovery and Media Services for OS/400 SAVLICPGM LICPGM(5716BR1) DEV(tape-device) c. Delete the version of BRMS/400 that is on your test system. Type the following command: DLTLICPGM LICPGM(5716BR1) d. Mount the tape that you created in step 3. e. To restore the BRMS/400 libraries, type the following command: RSTLIB SAVLIB(QBRM QUSRBRM) DEV(tape-device) f. Mount the tape to restore BRMS/400 licensed program saved in step b. If a save was not done, go to step g. Otherwise, type the following command: RSTLICPGM LICPGM(5716BR1) DEV(tape-device) Go to step p. g. Load the IBM-supplied CD-ROM that contains your licensed programs. h. Type GO LICPGM and press the Enter key. i. From the Work with Licensed Programs menu, select option 11 (Install licensed programs). j. On the Install Licensed Programs display, page down to locate BRMS/400. k. Type 1 (Install) in the Opt column in front of BRMS/400 and press the Enter key. l. Verify the information on the Confirm Install of Licensed Programs display. Then press the Enter key. m. On the Install Options display, type the name of your optical (CD-ROM) device. Then press the Enter key. n. Respond to any messages. o. When the installation process is complete, re-apply any critical BRMS/400 PTFs. p. To set up BRMS/400 again, repeat the procedures in 11.3, “Preparing BRMS/400 on your target system” on page 212. 11.5 Deleting the libraries for the media library device driver If you had the Media Device Driver program (5798-RZH) on your source system, you would need the program to perform your save operations successfully. Therefore, you cannot delete it before your final save. However, the functions that 5798-RZH provided on your source system are included in the operating system on your target system. Therefore, you should delete the 5798-RZH libraries. Type the following two commands: DLTLIB LIB(QMLD) DLTLIB LIB(QUSRMLD) © Copyright IBM Corp. 1997, 2001 217 Chapter 12. Planning for the hierarchical storage management archiving solution This chapter and Chapter 13, “Practical implementation of hierarchical storage management archiving capabilities” on page 261, are taken from the redbook Complementing AS/400 Storage Management using Hierarchical Storage Management APIs, SG24-4450. Some updates have been made to address any BRMS/400 functional enhancements since the publication of the original redbook. The authors of this BRMS/400 redbook acknowledge the ITSO project leaders and the ITSO residents who were responsible for documenting this information. We strongly recommend that you obtain a copy of Complementing AS/400 Storage Management using Hierarchical Storage Management APIs, SG24-4450. This redbook provides additional information regarding hierarchical storage management and contains sample code on how you can modify your applications to dynamically retrieve data that is archived using BRMS/400. This chapter provides a description of how hierarchical storage management is implemented with BRMS/400 archiving (using save with storage freed) and Dynamic Retrieval. 
It then discusses various application design considerations that you should be aware of to aid in the design and implementation of your hierarchical storage management solution. For information on the type of objects that you may consider for archiving and how to set up BRMS/400 to produce an operational Dynamic Retrieval solution, see Chapter 13, “Practical implementation of hierarchical storage management archiving capabilities” on page 261. 12.1 Archiving considerations This section shows details of the points that you may need to take into account when planning your archive strategy. The discussion is limited to technical considerations that may affect the way in which archiving is performed. For details about the types of data to archive and setting up retention periods, see Chapter 13, “Practical implementation of hierarchical storage management archiving capabilities” on page 261. 12.1.1 How archiving is done by BRMS/400 BRMS/400 uses standard OS/400 save and restore commands for its backup, archive, restore, and retrieve activity. To this end, the actual archiving of an object is achieved using a standard OS/400 save of the object with the Storage parameter set to *FREE. Within this publication, this is known as save with storage freed. Objects selected to be archived are identified by the entire auxiliary storage pool, library, or as lists of objects (known as archive lists). The ASPs, libraries, or lists are then included in a BRMS/400 archive control group. Each control group has parameters that control such things as the amount of time the object must have been inactive to be selected for archive, whether save with storage freed is used, and so on. These parameters are known as the control group attributes. These details may also be set in the Archive Policy, which sets the defaults to use 218 Backup Recovery and Media Services for OS/400 unless they are specifically overridden at the control group level. More details of the BRMS/400 setup are found in 13.4, “Setting up BRMS/400 for archive with Dynamic Retrieval” on page 268. When setting up archive policies for Dynamic Retrieval in BRMS/400, it is important to remember to set the control group entries to allow the objects to be saved with storage freed. This particular parameter can be set as a default value in the Archive Policy. It can also be set in each of the individual control group's attributes. This parameter is explained in more depth later. The BRMS/400 archive control groups used for save with storage freed also defaults to saving the access paths of the file members. This is a performance consideration, and it may be changed by the user. It means that the object takes longer to save (archive) but eliminates the need for a potentially lengthy access path rebuild on restore (retrieve). 12.1.1.1 Why use save with storage freed? The important characteristic of save with storage freed is that the object description is left on the system. This object description consumes very little storage space and acts as a place-holder for the object in the system while indicating that the data portion is on tape. Figure 145 shows the makeup of an AS/400 object and how that object may look after being saved with storage freed. The object description contains only a small amount of data that describes the object, including object name, object type, library name, security information, and so on. This sort of information is found when you process such commands as DSPOBJD, DSPFD, and DSPFFD. 
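For example, a command such as the following, where the library and file names are purely illustrative, returns this descriptive information without touching the data portion of the object:

   DSPOBJD OBJ(APPLIB/ORDHIST) OBJTYPE(*FILE) DETAIL(*FULL)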
It is the data portion that contains all of the real data to be processed (for example, all of the records in a physical file). This data constitutes the majority of the object size. When objects are saved with storage freed, the data portion is deleted from the system after successfully completing the save. The object description is retained, and the details are updated to record the save date, time, and the media where the data is stored. Figure 145. AS/400 objects before and after save with storage freed Figure 145 shows the structure of an AS/400 object (note that the object description is typically much smaller than the data part) and how save with Normal Object After save with storage freed library: LIB1 object: OBJ1 Object description data library: LIB1 Object description object: OBJ1 Chapter 12. Planning for the hierarchical storage management archiving solution 219 storage freed affects an object. Even after save with storage freed, the object description remains in the original library for reference. When you access the object, the system searches the job's library list or the library name referenced by the object. If the system finds that the data portion of the object is missing (object was saved with storage freed), BRMS/400 proceeds to check its inventory of archived objects to see whether it has indeed been archived by BRMS/400. If the object is found in the BRMS/400 archive inventory, the retrieve of that object can be invoked by BRMS/400. If the object that has been saved with storage freed is not one of the supported object types for the Dynamic Retrieval function, BRMS/400 is aware that the object has been archived and can assist an operator in locating the correct volume. However, the automatic initiation of a retrieve operation is not possible. The user or job that was attempting to access the unsupported object receives a standard OS/400 error condition (CPF4102) indicating that the object has been saved with storage free. The user or operator must consult the BRMS/400 archive inventory to locate the object and manually initiate its retrieval. This can also be done from BRMS/400 using the Work with Saved Objects (WRKOBJBRM) display or by using the Restore Object using BRM (RSTOBJBRM) command. You can modify an existing application to support certain objects that are not supported for Dynamic Retrieval by BRMS/400. The additional application code needs to manage the types of OS/400 error messages returned for the objects required and to interrogate the BRMS/400 inventory and initiate a BRMS/400 restore operation. 12.1.1.2 What happens when archiving without storage freed The save with storage freed solution is simple in its design and execution. The only alternative available to save with storage freed is to delete the object entirely. However, you should consider the following scenarios. Library list search Suppose you are running two similar environments for the same application. One is a development environment for testing new code releases, and the other is the live production environment with which you are running your business. For ease of migration and copying live data to your test environment, you may well decide to keep exactly the same file names for each environment but store them in different libraries. This way, a simple change of the library name order in your library lists causes a transition. If an object has been saved with storage freed by any means other than BRMS/400, there is no inventory of archived objects to consult. 
In this case, BRMS/400 cannot locate and restore the object without manual intervention. The user receives the standard OS/400 CPF4102 message File in library with member not found. An example of this situation may be issuing a native OS/400 SAVOBJ command using the STG(*FREE) parameter. Dynamic Retrieval is not possible for this object through BRMS/400. Note 220 Backup Recovery and Media Services for OS/400 Suppose you have been running at a particular release for a while and suddenly some of your production environment file members begin to be archived (with an entire delete, no save with storage freed). Now when you come to access a particular production environment file member that has been archived, you cannot find the file member at all in the production library. The search through your library list continues until you now find a file member of the same name in the test library. The file member you are now opening to access is not the one you need. It contains test data, which may be vastly different from your production data. The point of most concern is that you are not aware that this has happened. Private authorities When an object is deleted from the system and restored at a later date, the private authorities that are assigned to it are lost until a restore authority (RSTAUT) is run. Consider also that RSTAUT can only be run in a restricted state, and even then, it can only be run after a restore of user profiles has been executed. Add this to the fact that you have to run RSTAUT for all user profiles on the system because you do not know which users had private authorities to that object. It quickly becomes clear that an ad hoc restore of an archived object that was deleted entirely from the system is not as simple as the storage freed implementation. Restore performance When restoring an object that has been saved with storage freed, OS/400 has less work to do because it does not need to completely build a new object for the restore of an object that has been deleted. As such, Dynamic Retrieval performs better in this case. This is more beneficial for smaller files because the “create” part of the restore is a larger percentage of the entire process. For these reasons, and there are possibly others, it is considered impractical to use a solution that deletes the object entirely. The save with storage freed solution appears far more integral, secure, and simple. When the object is saved with the STG(FREE) option, the object headers cannot be saved again using the standard OS/400-supplied save commands as the system overlays new tape volume information. You receive the CPF3243 message Member xxxxx already saved with storage freed in your job log. This is expected since you do not want to overwrite the volume information in the object header, which provides the important link to where your data is. One of the advantages of using BRMS/400 is that it uses the Media Storage Extension (MSE) to save the object headers. You can, therefore, migrate your data to another system. Without the MSE feature, you cannot transfer the object header information to another system. You have to first restore all your archived data, save the objects on SYSTEMA, restore on SYSTEMB, and then re-archive the objects. Important Chapter 12. Planning for the hierarchical storage management archiving solution 221 12.1.2 The BRMS/400 double save for archiving BRMS/400 implements its archiving of objects using a double save technique, which is explained here: 1. 
Save the objects to tape (no save with storage freed). 2. Update the BRMS/400 inventory to indicate that the objects have been archived. 3. Save the objects to a temporary save file with storage freed. 4. Delete the temporary save file. Why BRMS/400 uses the double save The principle reason for the double save approach is that of data integrity. You must ensure that an object is saved successfully before you update your BRMS/400 inventory. And yet you must ensure that the BRMS/400 inventory update has completed successfully before you delete the object's data portion. You must ensure that if the process fails at any point between the transaction boundaries, it can be recovered. The main limitation here is the difficulty of recovering a save with storage freed operation without significant manual intervention, for example, re-mounting the tape. Nor can you perform the updates to the BRMS/400 inventory before the save operation because BRMS/400 relies on the output file information from the OS/400 save operation to update its inventory. To illustrate the point, consider a single save solution, which may work as shown in the following process (without a double save): 1. Save objects to tape with storage freed: a. Save object 1 to tape. b. Delete object 1 data portion. c. Save object 2 to tape. d. Delete object 2 data portion. e. Save object 3 to tape. f. Delete object 3 data portion. g. ... and so on ... h. Send the completion details to BRMS/400. 2. Update the BRMS/400 inventory: a. Update object 1 archive details. b. Update object 2 archive details. c. Update object 3 archive details. d. ... and so on ... These steps are used for archiving objects while retaining the object description. For archiving where no object description is required to be retained, steps 3 and 4 change to delete the object. We concentrate on the save with storage freed implementation in this book because this is required for Dynamic Retrieval. See 12.1.1.2, “What happens when archiving without storage freed” on page 219, for reasons why BRMS/400 uses save with storage freed. Note 222 Backup Recovery and Media Services for OS/400 3. Commit the archive transaction. Note: Step 1 can be a result of a multiple save operation such as: SAVOBJ OBJ(FILEA FILEB FILEC) LIB(LIB1) DEV(TAP01) OBJTYPE(*FILE) FILEMBR((FILE1 (MBR1A MBR1B)) (FILE2 (MBR2C)) (FILEC)) STG(*FREE) Now consider the various failure points in this cycle: • Any failure during the save with storage free operation (step 1) leaves the system with some objects saved to tape and their data portions deleted, but BRMS/400 does not know anything about this. The implication here is that the object data portion has effectively been “lost” by BRMS/400 for all of the objects processed so far. • Any failure during the BRMS/400 update operation (Step 2) implies that BRMS/400 has “lost” all of the object data portions that have not yet been updated. In both cases, you can only recover the operation if you can identify which tapes the storage freed objects were saved to, and then do a manual restore of these objects. When the double save is implemented, you do not delete the data portion of the objects until you have completed the BRMS/400 inventory update. While this is a much more satisfactory solution, there are some considerations that must be taken into account when using archive with the save with storage freed option. 
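To make the order of events concrete, the double save is conceptually similar to the following command flow. This is only an illustration of the sequence described above (the object, library, device, and save file names are invented), not the exact internal processing that BRMS/400 performs:

   SAVOBJ OBJ(ORDHIST) LIB(APPLIB) DEV(TAP01) OBJTYPE(*FILE)     /* 1. First save to tape; data portion kept */
   /* 2. BRMS/400 updates its inventory from the save output information */
   CRTSAVF FILE(QTEMP/ARCTMP)                                    /* Temporary save file in QTEMP */
   SAVOBJ OBJ(ORDHIST) LIB(APPLIB) DEV(*SAVF) SAVF(QTEMP/ARCTMP) OBJTYPE(*FILE) STG(*FREE)   /* 3. Second save frees the storage */
   DLTF FILE(QTEMP/ARCTMP)                                       /* 4. Temporary save file is deleted */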
Consider these points due to the double save: • Performance: If archiving becomes an extensive part of the system management processes, the double save impacts the performance that you should expect to see from archiving. Effectively, the times involved can be doubled. Typically though, the objects being archived are not in use, so this does not impact the availability of an application or the system. • Disk space: It is possible that certain archive operations (for example, a group of large file members) can create a large temporary save file. The spare capacity of your DASD should be adjusted to compensate for this. The save file is created in the QTEMP library and is deleted should the job end abnormally. However, it still requires some space on disk when the job is active. • Journal entries: If a file is being journaled when it is archived with save with storage freed, there are two save entries in its journal receiver. The second save (to the temporary save file) is actually sent with “update history” *NO. This should not affect your applying journal changes in a recovery situation, but the attentive user may notice these extra save entries in the receiver. If your recovery strategy includes removing journal changes, there are some considerations that apply, which are discussed in 12.11, “Applying journal changes to archived data files” on page 235. • Object locking between saves: To provide data integrity between the two versions of the saves, each object that has undergone the first save is allocated (locked) to prevent updates occurring until the completion of the second save. The implication here is that objects being archived with save with storage freed are locked out for the entire three-stage process (save, update Chapter 12. Planning for the hierarchical storage management archiving solution 223 BRMS/400, save). Under normal conditions, this should not affect an object's accessibility because the objects being archived by definition are not in use since that is why they are being archived. 12.2 Normal-aged file member archiving It is intended that the typical use of the archive and retrieve function revolve around file members that have not been used for a significant amount of time. We refer to these file members as “dormant” file members. You can find more details about the types of files (and the applications that use them) that may apply for Dynamic Retrieval in 13.1.1, “Types of objects to archive for Dynamic Retrieval” on page 261. 12.2.1 Database file members When planning the archiving of data files, you must remember that the support for Dynamic Retrieval is based at the file member level. The main points to consider include: • How critical is the file member data to the successful operation of the business? • What are the legal requirements for retention of the data in the file member? • What is the size of the file member? • How frequently is it accessed? • What is the nature of the application (or applications) that uses this file member? • What is the impact to the application of having to retrieve a file member from tape? • What is the restore time for this file member? • What type of access is required: read, update, or add? • How long is a period of inactivity regarded as sufficient to mark this file member as dormant? • How long should the file member be kept at all (that is, how long to keep the tape copy)? • What security is required for the tape copy of the file member? • What backup (duplication) of the file member tape copy is necessary? 
Tabulating the answers to these questions is the first step in establishing the optimal system setup and BRMS/400 configuration. The file members may be grouped according to common archive and retrieve characteristics and entered into archive lists within BRMS/400. These archive lists may, themselves, be grouped according to common parameters concerning the method of archiving them and entered into control groups within BRMS/400. The control groups' attributes are tailored, and the groups are scheduled to run on a regular basis. Archiving can be run to produce only an Archive Candidate report or to actually perform the archive itself. Whether you run the report first depends on the items in your archive lists and control groups. More details about BRMS/400 configuration is available in 13.3, “Using BRMS/400 for hierarchical storage management” on page 267. 224 Backup Recovery and Media Services for OS/400 Typical characteristics of database files depend on the application that uses them. A transaction-based application (for example, telephone order processing) can amass records as time passes. These records tend to become dormant at a certain point in time (for example, when the order has been fulfilled and payment received) and, therefore, become historical data for auditing purposes. There may even be a second level of dormancy once an auditing period is over (for example, at the close of the financial year). At each stage, the status of the data changes. A business decision based on knowledge of the volatility of the data at the various stages and access requirements must be made as to when the data becomes dormant. This may be easier to perform at a record level. Further complications arise when records at different stages in their application life share the same file member. Archiving must be performed at the file member level. Further considerations connected with record level versus member level archiving are available in 12.16, “Application design considerations” on page 245. An application with more random access characteristics (for example, an expert system to diagnose medical conditions) demonstrates a much different file access profile than that of a transaction based one. Typically, much of the data may already be collected and arranged (for example, symptom data) before the application was started. There may be a core of data frequently accessed (for example, the symptoms of the more common ailments) and a much larger set of infrequently used data. A large amount of the data may never be updated (that is, many ailments have a stable set of symptoms). There is a great opportunity here to amass an extremely large “dictionary” of information. Many applications have a mixed environment of file usage characteristics. A warehousing application may have a fairly low volatile parts list entity since the parts stored may not change frequently. However, the stock level entity is highly volatile as new stock arrives and items are sent for delivery every day. Section 13.1.1, “Types of objects to archive for Dynamic Retrieval” on page 261, lists and classifies the various file access characteristics in tabular form. It also suggests suitable configurations for archive and retrieval. 12.2.2 Source file members Source file members are archived in exactly the same way as database file members. The differences occur in the questions you may need to ask when establishing the best method of implementation. 
The key differences are: • Normal business applications do not directly use (open, read, update) source file members. Application development tools (which are applications in their own right) may be using source files for editing or compilation, but the usage patterns differ greatly from database files. • It is less likely that queries are run over source file members. • Access is typically interactive for update and batch for read only. 12.3 Application swapping One use for archiving with Dynamic Retrieval is to move one application off of a system to make space for another. This can be done on a regular basis, swapping Chapter 12. Planning for the hierarchical storage management archiving solution 225 back and forth every month, week, or even day. Typical scenarios for using such a function may include: • Outsourcing: The outsourcing supplier may be able to deliver results to a different customer each week with a monthly cycle while only using one processor for all four customers. • Period end processing: Other applications may be temporarily removed from the system at a period end to allow space for dedicated processing. • Overnight batch: Working day applications can be suspended to allow for the comprehensive overnight batch runs that are needed. Moving applications and their data is a complex operation. Some points to consider before you attempt anything of this scale are: • The movement of data to and from tape consumes a significant amount of time and processor resource. • As the amount of data copied to tape is increased, so is the risk of exposure to data loss through media errors and tape or disk hardware failures. • Capacity planning for a system with constantly changing application portfolios is difficult. • Controlling unwanted access to applications that should be off-line at the time may be difficult. • Although Dynamic Retrieval ensures that no unnecessary data is restored to the system at the application switch-over point, it gives rise to extensive start-up times while waiting for restore operations. • The concept of Dynamic Retrieval is not designed with such heavy use in mind. The queuing of disparate individual ad hoc restore requests results in a much slower restore time compared to a complete uninterrupted sequential restore of the equivalent volume of data. For these reasons, we do not recommend that you attempt to implement such a scenario. 12.4 Logical files It is hoped that most logical files are significantly smaller in size than the physical files over which they are built. In cases where this assumption is true, it is satisfactory to avoid archiving logical files even if the physical files on which they are based have been archived to tape. There are times when the size of a logical file may grow to a significant level compared to that of the physical. In this case, it is desirable to archive the logical file if it becomes dormant. 226 Backup Recovery and Media Services for OS/400 There are some scenarios where it seems of little point to keep a logical file on the system: • The logical file has not been used for so long that it may be regarded as disused (for example, test files that were never deleted or files that supported previous releases). In this case, the file is simply archived by deleting the object description. It is assumed that Dynamic Retrieval is not needed for this file. • The physical file on which the logical file is based has been archived. In this case, it might be appropriate to archive all of the logical files connected with this file. 
However, consider the following points: – Some of the logical files may be so small that there is little to be gained from archiving them. – Because the physical file may be restored to the system with the Dynamic Retrieval function (and the logical may not), the archive parameters need to be set differently. That is, you have to be very sure that the logical file is not needed for a longer time than for the physical file. – For multiple format logical files (over multiple physical files), it is possible that a logical file may not access all of the physical files to which it is attached for any one request operation. If this is sustained over a length of time, one of the physical files may be archived, but not the others. In this case, you need to keep the logical view online. Note that this does not apply to join logical files. In extreme cases where the migration of logical files is needed, it may be appropriate to archive them by deleting with BRMS/400, but assign a much longer inactivity period to the group of logical files. This ensures that the logical files are indeed dormant for an extended period of time and perhaps even disused, therefore, minimizing the impact of their ineligibility for Dynamic Retrieval support. Saving a logical file with storage freed does not free any storage space. Doing so performs a save of the object. None of the access path information is deleted from the logical file since this is part of the object description; there is no actual data component to be freed. When we refer to archiving logical files in these sections, we mean archiving with total object deletion. The retrieval of the logical file is not an automatic one, although BRMS/400 can be used to assist with the location of the tape and restore operation. Note Chapter 12. Planning for the hierarchical storage management archiving solution 227 12.5 Duplicating your archive tapes The implication of archiving an object is that it is saved and deleted or storage freed in one operation. Therefore, the ability to check whether the save to tape is successful before you delete it is reduced. Despite the extensive error checking and correction routines of modern tape device technology, the only true test of a successful save is to check whether the object can be read successfully in its entirety. Also, it is possible that eventually all other copies of an object that have been saved in the normal backup procedure will expire, leaving only one copy (on tape) of the archived object. This is the most up-to-date copy. The data loss exposure created by data archiving is two-fold: • There is limited verification of a successful save before deletion. • There is eventually only one copy of the object in existence. For these reasons, we recommend that you duplicate immediately tape copies of archived data and move the duplicate copy to an off-site storage location. The following recommended procedure may be followed to address these issues. 12.5.1 Archive tape duplication process Follow these steps for the tape duplication process: 1. Execute the normal regular backup procedure. This may be the daily, weekly, or monthly save. 2. Immediately after the backup has completed, initiate the archive process. Ensure that the archive coincides with the backup of the same frequency. That is, if archiving is done on a weekly basis, start the archiving procedure after the weekly backup. 
This means that there are at least two copies of the objects on tape, one from the backup and another from the archive, in case of a media error.
It is also important to ensure that, for every item on the list of candidates for the archive that is being run, the inactivity period for qualification is longer than the period since the last guaranteed save of that object. That is, if you know that object A has definitely been saved within a month of today, and it cannot qualify for archiving unless it has been inactive for at least a month, you can be sure that the object has not changed since the last save. This ensures a back-out path in case the object was archived to a tape that subsequently has a media error.
It is acceptable to perform the archive while the system is active since objects that happen to be locked at the time of archival are, by definition, not eligible for archive because they are in use. The exceptions to this rule revolve around the definition of “in use”. A file may be allocated but not actually opened, although this has no common practical application. If the inactivity period set within BRMS/400 is zero days, there is a chance that the object can be in use. In this case, these archive jobs should be run at a time when the object is not likely to be in use.
Performance of multi-format and join logical files: Access through a multiple format or join logical file may cause the retrieval of several physical file members. Each physical file member retrieve operation is performed separately and independently of any other operations. You notice a logical file access path rebuild for each physical file member that is retrieved. Therefore, you may create a situation where a string of access path rebuilds is performed when one final rebuild would have sufficed. You cannot override the access path rebuild to *DELAY because you cannot predict which physical file members (if any) are retrieved unless you are retrieving in *DELAY mode and you are using the Resume Retrieve using BRM (RSMRTVBRM) confirm display. See 12.8, “Retrieval methods” on page 231, for more information. You should be aware of the performance implications of the multiple access path rebuilds that are caused by archiving physical files under either multiple format or join logical files.
3. Duplicate all of the archive tapes that were just produced. This is normally performed with BRMS/400. You may use option 14 on the Work with Media (WRKMEDBRM) display or use the Duplicate Media using BRM (DUPMEDBRM) command. This procedure also verifies the original archive tape copy since it must read the files successfully to duplicate them.
4. If the duplication fails, you can restore the affected objects from the last backup. You can use BRMS/400 to list the objects on the affected tape with the WRKMEDIBRM VOL(volid) command, where volid is the volume ID of the damaged tape. Option 9 for each library shows the object detail. Write down the object names. You can locate the current backup copy tapes for each object using the WRKOBJBRM OBJ(objname) command and select option 7 to restore the “lost” object. This assumes that you have saved object-level details with your backups. Without this, you need to use the WRKMEDIBRM LIB(libname) command, and type option 7 next to each library with a second option 7 to enable you to type in the object name.
When you have recovered the objects lost on the damaged tape, go back to step 2, and run an archive for the objects that were lost from the original “bad” archive tape.
5. Move the duplicate tapes off-site.
Important: If an archive tape fails during duplication, it may be quicker to restore a complete library from a backup tape rather than several objects individually from the same library. Be careful in this case to ensure that no object within that library has been changed since the backup. You may lose important data if this is not the case.

12.6 Re-archiving retrieved objects
In some cases, a collection of data that has been archived and retrieved may subsequently need to be re-archived differently from data that has not been archived at all. BRMS/400 supports archiving at multiple levels based on the last used date, the last change date, or both (whichever is later). If you want to enable such a function, you have to create your own program that interfaces with the BRMS/400 retrieve exit point to generate a list of retrieved objects. You may have your own special archive control group that has a different inactivity level specified from the regular control group. The list of retrieved objects is matched against a list of candidate objects for this special treatment to generate the list of objects to be included in your special control group.
You may also attempt to create a method of differentiating between objects that have been retrieved for read-only purposes and those retrieved for update. Once again, BRMS/400 does not currently differentiate between these conditions. If you are sure that the file member has not been updated, you may consider freeing the storage of the file member after the necessary alternative dormancy period has elapsed. This requires a save to a temporary save file with the STG(*FREE) option and then deleting the save file (a sketch follows at the end of this section). To be sure that no update has occurred, you need to hold a lock on the file member that allows only read transactions, or set authority so that no one can update it. This is clearly not a simple task. The last updated date cannot be used because it is changed by the restore operation when you retrieved the file member. Also, because it is only a date (and not a time), there is no way of judging whether an update took place on the same day as the restore.
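The following is a minimal CL sketch of the save-to-a-temporary-save-file technique described above. The library, file, and save file names (APPLIB, HISTFILE, and QTEMP/TMPSAVF) are hypothetical, and the sketch assumes that you have already confirmed that the members have not been updated since they were retrieved. It operates on the file as a whole rather than on an individual member.

   PGM
   /* Free the storage of a retrieved file that has not been updated, */
   /* by saving it to a temporary save file that is deleted again.    */
   CRTSAVF    FILE(QTEMP/TMPSAVF)
   /* STG(*FREE) removes the data portion after a successful save     */
   SAVOBJ     OBJ(HISTFILE) LIB(APPLIB) DEV(*SAVF) +
                SAVF(QTEMP/TMPSAVF) OBJTYPE(*FILE) STG(*FREE)
   /* The save file copy itself is not kept                           */
   DLTF       FILE(QTEMP/TMPSAVF)
   ENDPGM

Because this save with storage freed is performed outside of BRMS/400, check how it interacts with the BRMS/400 archive inventory and Dynamic Retrieval (see the note in 12.7.1 about saves with storage freed performed outside of BRMS/400) before relying on this approach.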
12.7 Retrieval considerations
This section contains details on the technicalities involved in performing an on-demand retrieval and lists some of the considerations that you need to take into account when planning your Dynamic Retrieval solution.
Example: You have a parts inventory system and archived the parts list file from last year because your catalog of parts has been refreshed for this year. The old catalog is rarely used again. However, while you are running down your stocks from the old catalog, you may want to keep it online. After a continuous 90 days of inactivity, you can be sure that it is not used and it can be automatically archived. A customer calls and asks for a discontinued part. Your system tells you that it is discontinued. The customer is desperate for this part, and you decide to check last year's catalog to see if any of these really old parts are still lying around. At this point, the old catalog file is retrieved. Having performed the search using the old catalog, you are confident that you probably do not need this catalog unless some really exceptional circumstances occur again. Why wait another 90 days for it to archive? Why not allow this retrieved file to be re-archived after five days of inactivity?
For details on setting up BRMS/400 to optimize your retrieve operations, see 13.7, “Using BRMS/400 for Dynamic Retrieval” on page 277.
12.7.1 How BRMS/400 does Dynamic Retrieval
The Dynamic Retrieval process is explained in the following sequence:
1. An operation on an object is requested: This is any operation that requires the interrogation of an object (see 12.9, “Operations that invoke retrieval” on page 233).
2. A search for an object is performed: The library list for the current job is searched to locate the object description, or the file is located because the request for the file qualifies with the library name. If the object description cannot be found, the process ends here with an escape message.
3. The object description is found. However, the requestor requires the data portion to be present, and it is not.
4. The retrieve function is invoked: See 12.9, “Operations that invoke retrieval” on page 233, for the types of operations that invoke the retrieve function. This is performed using the optional Media and Storage Extensions (MSE) feature of OS/400.
5. The object type is checked for validity: Only *FILE objects are currently supported for the Dynamic Retrieval function at V3R1 or later. This is also performed using the MSE feature of OS/400.
6. The BRMS/400 archive inventory is checked: MSE passes control to BRMS/400, and a search of the list of archived objects is performed. If the object is not found, the original OS/400 message is sent to the requestor.
7. The requestor is notified: Depending on the retrieve method (see 12.9, “Operations that invoke retrieval” on page 233, for more details) and the type of job running, the user or the system operator message queue is notified of the intention to restore the object.
8. The tape archive copy of the object is located: The BRMS/400 inventory is searched to locate the tape to which the object has been archived.
9. Mounts are issued: If a tape library is present and operational, the tape may be loaded automatically. Otherwise, an operator must respond to a mount message from BRMS/400.
10. Restore takes place: The file member is restored in the normal way under BRMS/400 control.
11. The requestor operation continues as though the object had always been on the system: If the retrieve operation is completed immediately, the requestor may continue business as usual (with only a slight delay). Opening the file in the program is automatically retried by OS/400. This is done with the aid of the MSE feature. This time, it works as normal. If the retrieve is delayed or submitted in batch, the requestor is notified (effectively failing the operation) and must retry the operation at a later time.
Note: If the operation fails because the object type is not supported by BRMS/400 (for example, it is a *PGM type object that has been archived with storage freed), or the object is not in the BRMS/400 archive inventory (if perhaps the user performed a save with storage freed outside of BRMS/400), the user is sent the standard OS/400 message for this condition (the message ID varies depending on object type). It is not apparent that BRMS/400 has been consulted at all. This situation requires standard application “object has had its storage freed” error processing. There are also some circumstances where the retrieve of a *FILE object that has been archived by BRMS/400 may fail. This can be a restore failure (a media error, or the operator takes the cancel reply to the tape mount message), or BRMS/400 submits the retrieve to batch, or the retrieve is delayed to occur later. In these cases, the CPF4102 message is relayed to the application. Your application needs error handling to hide this from the user so that it can recover gracefully from these conditions and allow a retry of the operation later, when the completion message is sent.
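As a minimal illustration of the error handling described in the note above, the following hedged CL sketch assumes a hypothetical application file APPLIB/CUSTHIST and relies on the CPF4102 message that, per the note, is relayed to the application when the retrieve is delayed, submitted to batch, or fails:

   PGM
   /* Open the file; if the member has been archived with storage   */
   /* freed, this open triggers Dynamic Retrieval.                   */
   OPNDBF     FILE(APPLIB/CUSTHIST) OPTION(*INP)
   /* If the retrieve was delayed, submitted to batch, or failed,    */
   /* CPF4102 is relayed to the application. Hide it from the user   */
   /* and end gracefully so that the request can be retried later.   */
   MONMSG     MSGID(CPF4102) EXEC(DO)
      SNDPGMMSG  MSG('File is being retrieved from tape - try later')
      RETURN
   ENDDO
   /* ... normal processing while the member is online ...           */
   CLOF       OPNID(CUSTHIST)
   ENDPGM

An RPG or COBOL application would do the equivalent in its file error handling; the key point, as the note says, is to treat this condition as “retry later” rather than as a hard failure.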
12.8 Retrieval methods
There are several different modes of operation for performing a retrieval with BRMS/400. This section describes these modes and their application. The retrieval policy and the Set Retrieve using BRM (SETRTVBRM) command support options for separate batch and interactive controls:
• *VERIFY: By default, this value causes a program message to be sent for confirmation before continuing with the restore operation. Responses to the message allow the user to cancel, delay, proceed with the request, or submit the request to batch. If the retrieval operation occurs in an interactive environment, a message is sent to the user. If in batch, a message is sent to the message queue used for notification, as determined by notification controls in the BRMS/400 system policy (by default, QSYSOPR). The message that is sent identifies the file member that is being retrieved, its size, the number of the ASP to which it is restored, and the ASP utilization before and after the restore. This option is best used for interactive applications where the users want to retain some sort of control over system resources. Typically, the user needs some knowledge of the system on which their applications are running to make informed decisions. Batch applications that may retrieve large files can benefit from this option. In this case, the system operator needs to understand the implications of confirming a retrieve operation. A user exit program can be written to customize the message display shown and the processing that occurs if the *VERIFY option is used. For additional information on this user exit, see Complementing AS/400 Storage Management using Hierarchical Storage Management APIs, SG24-4450.
• *NOTIFY: Using this value causes the restore request to be executed with minimum operator involvement. For example, if a batch job attempts to open an archived file member, the file member is automatically retrieved (restored) with no delay or operator involvement, except as needed to inform the user (or the system operator for a batch operation) that the retrieval is occurring, and to notify as necessary for mounting media or indicating failure. This option allows maximum seamlessness when implementing Dynamic Retrieval. The user or operator is simply made aware of the fact that a restore is taking place by a message on the last line of the display for interactive users or on the BRMS/400 notification message queue for batch jobs. No decisions need to be made by an application user. This option is best used when you are sure that there will not be a significant number of retrieves performed that may impact total system performance. The *NOTIFY option is ideal when a tape library is available so operators do not need to be involved in tape mount operations.
Additionally, it is good for less knowledgeable users who are not comfortable with making a decision when they are presented with the *VERIFY message display. • *DELAY: In the retrieval policy or the SETRTVBRM command, *DELAY can be specified so that when an archived file is encountered, the file is “marked” to be restored at a later time. In addition, when using any of the other retrieve modes, if a restore exceeds the ASP threshold, it is disabled and the retrieve is processed as an implicit *DELAY. In either case, the BRM1823 message is sent to the user indicating that the restore has been delayed and that the file cannot be used until it is restored. The application needs to handle the CPF4102 condition generated when opening the file. The unretrieved file is tracked by BRMS/400. The Resume Retrieve using BRM (RSMRTVBRM) command and the Resume Retrieve display can be used to easily identify files whose retrieve mode is *DELAY, and the user can request that the retrieval operation for one or more of them should be performed or cancelled. Once the files are restored, a message is sent to inform the user. The delay mode can be used for users of applications who have lower priority or who perhaps submit a lot of batch transaction processing (for example, large queries that are not business critical). • *SBMJOB: This is for interactive users only. This mode indicates that the retrieval operation is to be submitted as a batch job. The BRM1823 message is sent to the user indicating that the restore has been requested and that the file cannot be used until it is restored. The application needs to handle the CPF4102 condition generated when opening the file. Once the file restored, a message is sent to inform the user. This option is particularly useful for users of applications that deal with large files and have multiple functions. If a user requests an operation and the file to be retrieved is large, the batch submission of this retrieve job allows the user to temporarily abandon the request and move on to process a similar request with different data or a totally different function. This option may help improve the general productivity of application users (and, therefore, the overall performance) over *NOTIFY or *VERIFY modes. • *NONE: This allows you to bypass retrieve processing. Chapter 12. Planning for the hierarchical storage management archiving solution 233 Beginning with V3R2 and V3R6, BRMS/400 allows you to specify an object retention value for the number of days you want to keep the retrieved object on the system. By default, the object is kept on the system indefinitely. You can specify the number of days that retrieved objects should remain available before their storage is freed by the STRMNTBRM (BRMS/400 maintenance) command. The number of days can range from 1 to 9999. 12.9 Operations that invoke retrieval The basic rule of thumb for understanding what type of operation initiates the retrieve function revolves around what is known as a “database open”. The function must be operating on a database or source file (object type *FILE) and must be attempting to access (or prepare for access of) the data portion of the object. Typically, this includes: • Database Open: Any type of database open, whether explicit (for example, the execution of the CL command OPNDBF) or implicit (for example, starting an RPG program). If the operation includes activity that sets up a database file for read or update by setting up an open data path in the job's process access group, this qualifies. 
OPNDBF is the easiest method to force a file to be retrieved. Many of these can be set up in a simple CL program with corresponding CLOF commands if you want to retrieve a number of files together before an application starts. A database open that uses a DDM file qualifies for Dynamic Retrieval on the remote (target) system. However, it initiates a Dynamic Retrieval operation based on the Retrieve confirmation for a batch operation. *VERIFY sends a message to the BRMS/400 notification message queue on the remote system. *NOTIFY causes the retrieve to happen immediately. *DELAY works as described, which means that the DDM file open request ends in error the first time it is run. *SBMJOB is not valid for the batch option; therefore, it is not suitable for a DDM file request. • Query/400: Either the “Specify File Selections” part of defining a query or the actual running of that query over a file initiates a retrieval. • Open Query File: Processing the OPNQRYF command causes a database open and, therefore, initiates a retrieval. • SQL/400: File selection during an interactive SQL query set up or executing SQL statements on a file initiates a retrieval. • DSPPFM: The display physical file member command attempts to display the file record data and, therefore, initiates a retrieval. • DFU: The data file utility initiates a retrieval when performing file selection during a temporary or permanent DFU program build or while starting a DFU program. • Journal changes: Applying and removing journal changes to a file causes a file open and, therefore, initiates a retrieval. • CRTDUPOBJ: The create duplicate object command attempts to access member data even when the “duplicate data” parameter is set to *NO. • Client Access/400 file transfer: Client Access/400 (formerly known as PC Support) file transfer invokes a normal database open prior to the transfer of records down to the PC. However, it initiates a Dynamic Retrieval operation based on the Retrieve confirmation for batch operation. *VERIFY sends a 234 Backup Recovery and Media Services for OS/400 message to the BRMS/400 notification message queue. *NOTIFY causes the retrieve to happen immediately. *DELAY works as described, which means that the file transfer request ends in error the first time it is run. *SBMJOB is not valid for the batch option; therefore, it is not suitable for a file transfer request. • CPYF: Because the Copy File command is most often used to copy the records in the file, this causes the member to be opened and initiates a retrieval. • Network file transfer: SNDNETF sends the object description as well as the data records so this initiates a retrieval. • SAVxxx ACCPTH(*YES): When saving using SAVOBJ or SAVLIB, or with any BRMS/400-managed save where the access path parameter (ACCPTH) is set to *YES, this invokes a retrieve operation. If you are using ACCPTH(*NO), the Dynamic Retrieval function is not invoked. • CRTxxxPGM: Program source code in a source file member requires access to the source statements and is retrieved if a compile is performed on it. 12.10 Operations that do not invoke retrieval Operations that you might think invoke retrieve but, in fact, do not include: • DSPOBJD: Displaying the object description only touches the description part of the object and, therefore, does not attempt to access the data part that is freed. • CHGOBJD/OWN/AUD: Changing of the object description, owner, or audit level only affects the object description. 
• CHGPFM: Changing the member information only affects the file description. • DSPFD and DSPFFD: Displaying the file description or file field description only fetches file description data. • RNMOBJ/M: Rename does not invoke a retrieval because it does not touch the object data portion. However, renaming may prevent a retrieval from ever happening again because BRMS/400 only uses the object and member names to reference for retrieve operations. Renaming breaks the link between the object description on the system and the object data on the tape. See 12.15, “Renaming and moving objects” on page 243, for more information. • CHKOBJ: Check object only checks the object's existence and verifies the user's authority to the object before trying to access it. This does not involve reading any data records. • MOVOBJ: Object movement commands do not cause a database retrieval, although similar restrictions apply when using MOVOBJ with archived objects as with RNMOBJ/M. • ADDPFM and RMVM: Adding or removing members only affects the member attributes of a file that are stored in the file description. A file member that is removed no longer initiates a Dynamic Retrieval operation if an open is attempted for it. • Start/end journaling: Starting or ending journaling of a file does not touch the data at all. Only the receiver, journal, and object description are updated. • CHGPF: Changing physical file attributes only affects the object description. Chapter 12. Planning for the hierarchical storage management archiving solution 235 • DLTF: When deleting a file, all members are removed and no retrieval is necessary. Of course, this means that if access to this file is attempted later, it will fail. • RCLSTG: The reclaim storage operation requires access to the file member but bypasses the retrieve operation. • DSPLOG: Even though it seems logical that the system history log files are needed when using the DSPLOG command, any of these files that have been archived with storage freed are simply bypassed as if they did not even exist. This may change in a future release to support Dynamic Retrieval. • Options from PDM: When using the Programming Development Manager (PDM) product, options from its Work with displays that perform actions on file members have additional checking that prevents a Dynamic Retrieval from occurring. An error message is displayed on the last line of the display. This may change in a future release to support Dynamic Retrieval. • CRTxxxPGM: Any program compile that references an archived database file does not actually access the data. The field descriptions (the only part needed by the compiler) are held in the object description, and therefore, no Dynamic Retrieval is done. 12.11 Applying journal changes to archived data files When you apply or remove journal changes to or from an archived file, this file is automatically retrieved at the first attempt to apply or remove. It is not necessary to open the file in preparation. The only consideration is when the journal entry for the storage free operation is included in the block of sequence numbers to be processed in the apply or remove. Typically, this occurs during a RMVJRNCHG operation where the Starting sequence number parameter contains *LAST and the Ending sequence number parameter contains a number that is related to the point in time to which the file changes must be rolled back. The journal entries in this range include the BRMS/400 archive (and storage free) operation. 
A RMVJRNCHG command cannot roll back this type of operation and will fail. You need to perform a DSPJRN (Display Journal) command to select a range of journal entries that does not include the storage free operation. For example, the following output is created from the DSPJRN command for a file that is updated and then archived. The code for a storage free operation is MF.

Seq  Code  Type  Object    Library  Job      Time      Comments
  9  F     OP    PAYROLL1  PAYROLL  PAYROLL  15:26:07  Open
 10  R     UB    PAYROLL1  PAYROLL  PAYROLL  15:26:12  Update 1
 11  R     UP    PAYROLL1  PAYROLL  PAYROLL  15:26:12
 12  R     UB    PAYROLL1  PAYROLL  PAYROLL  15:26:19  Update 2
 13  R     UP    PAYROLL1  PAYROLL  PAYROLL  15:26:19
 14  R     UB    PAYROLL1  PAYROLL  PAYROLL  15:26:24  Update 3
 15  R     UP    PAYROLL1  PAYROLL  PAYROLL  15:26:24
 16  R     UB    PAYROLL1  PAYROLL  PAYROLL  15:26:26  Update 4
 17  R     UP    PAYROLL1  PAYROLL  PAYROLL  15:26:26
 18  R     UB    PAYROLL1  PAYROLL  PAYROLL  15:26:29  Update 5
 19  R     UP    PAYROLL1  PAYROLL  PAYROLL  15:26:29
 20  R     UB    PAYROLL1  PAYROLL  PAYROLL  15:26:32  Update 6
 21  R     UP    PAYROLL1  PAYROLL  PAYROLL  15:26:32
 22  F     CL    PAYROLL1  PAYROLL  PAYROLL  15:26:36  Close
 23  F     CL    PAYROLL1  PAYROLL  PAYROLL  15:26:36
 24  F     MS    PAYROLL1  PAYROLL           15:27:54  Save
 25  F     MS    PAYROLL1  PAYROLL           15:28:32  Save with
 26  F     MF    PAYROLL1  PAYROLL  PAYROLL  15:28:34  storage free

OP = member opened
UB = update - before image
UP = update - after image
CL = member closed
MS = member saved
MF = storage for member freed

The RMVJRNCHG command with default parameters is coded as:
RMVJRNCHG JRN(PAYROLL/PAYROLL) FILE((PAYROLL/PAYROLL1)) FROMENT(*LAST) TOENT(*FIRST)
This causes the following messages to be issued:
CPI7001 Remove failed. 0 entries removed from member PAYROLL1.
CPF7049 Cannot perform operation beyond journal entry 26.
To roll back changes to a storage freed file member, you need to issue the RMVJRNCHG command as follows:
RMVJRNCHG JRN(PAYROLL/PAYROLL) FILE((PAYROLL/PAYROLL1)) FROMENT(23) TOENT(17)
The following messages are issued:
CPC3727 1 objects restored. 0 objects excluded.
CPC7050 2 entries removed from member PAYROLL1.
Note: The receiver entry range from 17 to 23 did not include the MF (free storage) entry.

12.12 Member level changes to files
Performing member level changes to a file with archived members is straightforward. When members are retrieved, they are retrieved individually and independently of the other members within the file. The file structure is not affected, and no data is compromised.
• ADDPFM: When adding a new member, the new member's description joins the others within the file because they have not been deleted by the archive with save with storage freed. A retrieval of other members does not affect any new ones.
• RMVM: When removing a member, the member description is deleted, and BRMS/400 can no longer retrieve it because the reference (by name) in BRMS/400 to the archived tape copy no longer exists.
• RNMM: Renaming a member creates problems when trying to retrieve the member that has been renamed because BRMS/400 references its archive inventory by name. See 12.15, “Renaming and moving objects” on page 243, for more details.

12.13 Retrieval performance
The performance of the retrieve function varies according to what you retrieve and how you retrieve it.
12.13.1 Saving access paths when archiving
When using BRMS/400 to archive data (using archive with save with storage freed), the option to save the access paths of the file defaults to *YES. This is to avoid lengthy access path rebuilds at the time of retrieval and to optimize the performance of the retrieve function. Having this parameter set to *YES is particularly important when you use the *NOTIFY and *VERIFY retrieve modes because, if it were set to *NO, the access path rebuild time would add to the user wait time. While the save access path parameter may still be overridden, we suggest that you leave it at *YES to save the access paths and improve the retrieval performance.
12.13.2 File size
The retrieval of large files naturally takes longer than that of smaller files. It may be appropriate to break your large files into several smaller ones. Choosing how to divide the file may not be easy. You must address the following points:
• Where are the logical boundaries? You may be able to break down the file into groups of records with a common theme. But how do you re-introduce a group of records to the main file?
• How transaction-based is the application? If the records tend to expire in groups, you may have an opportunity to establish break points.
• Can the application stand a file name change? Should you group common records in different files?
• Can the application stand a member name change? Should you group common records in different members within the same file?
• How normalized are the data entities? Can you reduce the size of the file by further normalizing its structure, that is, by splitting the record fields into different files?
Be careful to split files appropriately. This solution may be a problem if you have to retrieve all of the files that were split to satisfy a single data request. You have to perform multiple restores, possibly from multiple volumes that may be stored in different locations. All the pre-processing, post-processing, and tape mounting tasks add to the overall time and complexity of retrieving an object. Additional considerations for application customizing are discussed in 12.16, “Application design considerations” on page 245.
12.13.3 Multiple physical files behind a logical file
You should take note of the following points where a join logical file causes the retrieval of multiple physical files for a single database request:
• Fragmentation: If the various file members under the logical file have been archived at different times, the archived tape copies may be spread across many different volumes. The requested operation may need separate tape mounts, not to mention all the pre-restore and post-restore processing for each file member restore. This impacts performance.
• Predicting retrieval size: The nature of the retrieve function is to handle one file member restore operation at a time. Therefore, as one retrieval is being handled, BRMS/400 cannot look forward to predict the next retrievals that are processed, not even if they are obvious to the application designer. BRMS/400 cannot draw on any application knowledge to predict the incoming retrieve operations. This discussion continues in 12.14.1, “Predicting which objects are retrieved” on page 241. The result is that for a given complex (multiple file) operation, you cannot predict either the total size of all of the members to be restored or the time it takes to complete. Therefore, the performance is not predictable.
• Access path rebuild times: Access through a multiple format or join logical file may cause the retrieval of several physical file members.
As each physical file member retrieve operation is performed separately and independently of any other operations, an access path rebuild for each physical file member retrieved is experienced. Therefore, a situation can occur where a string of access path rebuilds is performed when one final rebuild would suffice. BRMS/400 cannot override the access path rebuild to *DELAY because it cannot predict which physical file members, if any, are retrieved unless BRMS/400 is retrieving in *DELAY mode. BRMS/400 uses the RSTRTVBRM confirm display. See 12.8, “Retrieval methods” on page 231, for retrieve mode details. You should be aware of the potential performance implications of the multiple access path rebuilds that are caused by archiving physical files under a multiple format logical file. 12.13.4 Which retrieve mode to use for interactive applications Certain interactive applications may be critical to your business. They may also be performance critical to your business. It is logical to assume that the best performance for your interactive application can be achieved by using the *NOTIFY mode for retrieving objects. This way, a retrieval is performed immediately at interactive priorities without waiting for a reply to a message. This is certainly true for an application with a fixed logical flow of activities that must be performed to complete a unit of work. If any of the sub-units of this piece of work are temporarily stalled, the user has no option but to wait for completion of that sub-unit. For example, the unit of work may be the processing for a customer placing an order. The first sub-unit may be to retrieve the customer's details, the second to retrieve the stock details of the item required, and the third to create an order. The following assumptions may apply to this simplified scenario: • The order cannot be created (third sub-unit) until stock data can be retrieved for the part (second sub-unit) required because there must be some in stock to honor that order. • The order cannot be created until the customer data can be retrieved for the customer (first sub-unit) because you need their details to fill in certain parts of the order. • Sub-unit one and sub-unit two are totally independent of one another. • You cannot actually place an order until the order file member is online. • You do not know which order file member to open until you have the customer details, and you do not know whether to bother opening an order file until you have the part details. Chapter 12. Planning for the hierarchical storage management archiving solution 239 The logical flow of the unit of work implies that you cannot start sub-unit three until both one and two have completed. It also implies that there is nothing else that can be done while waiting for the order file to be retrieved. Therefore, it seems sensible to use *NOTIFY for creating the order. However, if sub-unit one causes a retrieve operation, it makes sense to move on with something else while the retrieve is being performed; that is, use the *SBMJOB mode to submit the retrieve to batch and then attempt an execution of sub-unit two. Similarly, a retrieval from sub-unit two can be submitted to batch in the *SBMJOB mode while you move on with sub-unit one. Of course, it is not that simple. The logic of the application may have to be changed to allow backing out of sub-units to return to them later. 
The performance (productivity) penalties for *SBMJOB include: • The user may not return to a sub-unit as soon as the retrieve completion message is sent. • Retrieve jobs may run in lower priority. It is possible to create a special batch environment for retrieve jobs to speed their performance. BRMS/400 uses the job queue named in your job description. You can create a special job queue and reference it in your job description to change it from the default associated with the user profile. The retrieve mode used is typically set at the job level. It is possible to alter the retrieve mode for the entire job by issuing the SETRTVBRM command before each file open operation. This may have unexpected results on other activities within your job, such as group jobs. It also requires changes to your application. You may, however, decide that you are not impacting performance at all by allowing sub-unit three to be submitted to batch. In this case, you may set the mode for this entire job to *SBMJOB. In summary, you can conclude that it is not always best for general user productivity to use *NOTIFY. In some cases, *SBMJOB may be more appropriate. 12.13.5 Using the *VERIFY retrieve mode for batch jobs While the *VERIFY mode offers the best control of your system resources by forcing a decision to be made for every possible retrieve operation, there are a couple of points you need to consider: • In general, you must be sure that the people who have to respond to the *VERIFY messages are informed enough to make the correct decisions. For example, they need to have a good idea about the system size, ASP maps, the size of your object, which types of files are important, what applications are doing, and which applications are important. • For *VERIFY in batch mode, you must be sure that the system operator message queues are monitored frequently. A batch job waiting for an operator message reply is a frequent cause of batch throughput problems. 240 Backup Recovery and Media Services for OS/400 12.14 Managing your disk space An important issue with any automated procedure for adding and removing data from your disk storage is the setting of precautionary checkpoints to avoid storage overflow. The key to managing your storage lies in the balance between the in-flow (retrieve) and out-flow (archive) of data. Some key factors that influence in-flow (retrieve) include: • System activity: The more files that are accessed in a job, the more likely it is that a required file is offline. That is, increasing file access activity statistically decreases the chance of the required file being on disk. This correspondingly increases retrieve activity. • ASP threshold limits: All retrieve operations use BRMS/400 inventory data to predict the size of the retrieved data item when residing on disk. If the result of the retrieve exceeds the ASP threshold, the retrieve is not performed. Setting the threshold high allows more data to be brought online. Setting it low reduces the chances of a retrieve being completed. This can be changed using the Start system Service Tools (STRSST) command. • Retrieve mode: The differing methods of retrieve allow greater control: – *NOTIFY increases the in-flow because users do not have the option to cancel or defer the retrieve. – *VERIFY allows independent user decisions to moderate the in-flow. – *DELAY allows a more informed decision to be made by using the RSMRTVBRM display to obtain a system-wide picture of the retrieval activity. This may help reduce in-flow activity. 
Some key factors that influence out-flow (archive) include: • Dormancy criteria on archive: The number of days of inactivity that a file undergoes is one of the qualifying parameters for an object to be archived. Increasing the required dormancy period reduces the outflow of data. Decreasing the dormancy period increases the outflow of data. This parameter must be balanced against application performance in the case of a retrieval being requested. If you decrease the dormancy period too much, the out-flow increases to a point that the natural activity of the system causes a corresponding increase in in-flow. As soon as the system becomes clogged up with a constant stream of archive and retrieve operations, performance can be degraded. • Volume of archive lists: Increasing the number of archive object lists and the number of objects on each list increases the chances of an object being archived. • Frequency of archive runs: Shortening the time gaps between each archive run also increases the chances of finding a qualifying archive candidate. If the qualifying dormancy periods are short, we advise that you have shorter time gaps between archive runs. 12.14.1 Predicting which objects are retrieved For any given data operation or request, any number of file members may be needed to be retrieved to complete that request. The BRMS/400 retrieve function Chapter 12. Planning for the hierarchical storage management archiving solution 241 operates independently on each of the required members that have been archived. Each file member is processed separately. Retrieval is performed one file member at a time. This makes it difficult to predict which file members are retrieved to enable successful completion of your data request without some sort of application knowledge. 12.14.1.1 Using *NOTIFY or *VERIFY The retrieve function handles one file member restore operation at a time as dictated by the file opens in a program. The retrieve operation cannot look ahead to predict the next retrieves that need processing. BRMS/400 cannot display any application knowledge to predict the incoming retrieve operations. The result is that for a multiple file retrieve operation, you cannot predict either the total size of all of the members to be retrieved or the time it takes to complete. As a user responding to a *VERIFY mode message, you do not know whether you can wait for the retrieve operation or even whether the retrieve operation is forced to terminate because of an ASP overflow. This is despite the fact that for each individual member, you receive a message informing you of the total restore size. It is the number of other members that are needed for this request that you do not know until they occur. 12.14.1.2 Using *DELAY With the delayed option, you can improve the predictive ability by using the Confirm Retrieve display with the RSMRTVBRM command. This display lists the sizes of all of the objects awaiting retrieval and the ASPs into which they are loaded. You may perform the necessary calculations before actually submitting any of the retrieve jobs. You may select the appropriate objects to be retrieved, cancel inappropriate ones, and leave the low priority ones for later. Not every application can operate in delayed mode. There is definitely a need to balance performance requirements against the storage capacity control requirements. 
12.14.2 Predicting the size of objects to retrieve As indicated in 12.14.1, “Predicting which objects are retrieved” on page 241, in the *NOTIFY or *VERIFY mode, a message is sent for each retrieve operation as it is initiated indicating the size and ASP of the object to be retrieved. This is done sequentially. There is no predictive information about future objects that may be retrieved. If you use the *DELAY mode and the RSMRTVBRM command with the confirm parameter set to *YES, you can choose which objects are to be retrieved. The information on the Confirm Retrieve display may be used to total the sizes of the objects for each ASP. 12.14.3 Predicting the time to retrieve objects Predicting the size of the current object to be retrieved is relatively straight forward because BRMS/400 records this information at the time the file member is archived. Predicting the time it takes to retrieve the object, however, is not possible. 242 Backup Recovery and Media Services for OS/400 A number of factors influence the restore time, including: • The object type (each object type has different restore characteristics) • AS/400 system model performance • Other jobs running on the AS/400 system that affect the performance • Tape drive speed • The speed of the IOP to which the tape drive is attached • The use of compression (or not) during the save and the type of compression • Waiting time for other jobs already using the available drives • Time for mount requests to be honored • Where on tape the actual object is located, that is, how much searching must be done to find the start of the object While BRMS/400 gathers data on some of these factors, there is no simple formula that can be used to calculate the restore time. 12.14.4 Can an ASP overflow occur? The retrieve function always checks whether the threshold of the recipient ASP is exceeded as a result of the restore operation. If so, the retrieve does not take place. It fails with an error status of *STORAGE and enters into a delayed status. The failed retrieve operation may be restarted at a later time by using the RSMRTVBRM command. The retrieve function uses data stored in the BRMS/400 archive history to determine the size of an object when it is retrieved. It is based on the size of the object when it last resided on disk. The ASP threshold is derived from the standard system ASP threshold that is set by the STRSST command. On a well-managed system, it is unlikely that any retrieve operation will overflow an ASP. An ASP overflow may occur if: • The ASP threshold is set high (90% or above). • Other independent create or restore operations (for example, CRTDUPOBJ or RSTLIB) begin after the retrieve has started but before it has finished. This may lead to an overflow as the retrieve continues its restore operation. • The object size information is incorrect: – The BRMS/400 data could have been corrupted. – The object may be restored to a different operating system release that may effectively change the required space needed for the object. See Backup and Recovery - Advanced, SC41-4305, for information on managing ASP overflows. 12.15 Renaming and moving objects The BRMS/400 archive history is a name-oriented inventory. If a file's description was retained when the file was archived, and the storage freed file is later renamed or moved to a different library, or the library containing the file is renamed, BRMS/400 cannot automatically retrieve its data. Chapter 12. 
A rename operation does not invoke a retrieval prior to effecting the name change because it does not touch the object data portion. However, renaming may prevent a retrieve from happening again since BRMS/400 uses the library, object, and member names to reference retrieve operations. Renaming breaks the link that BRMS/400 uses between the object description (on the system) and the object data (on tape).
12.15.1 Renaming file members
When renaming a file member, a solution to this problem may involve manually retrieving the object before it is renamed. The required steps are listed here (a sketch of the technique follows this procedure):
1. Manually retrieve the object. Open the file member with the OPNDBF command or display the physical file member with the DSPPFM command. This invokes the retrieve operation.
2. Perform the file member rename with the RNMM command.
3. Update the entry for the file member in the BRMS/400 archive lists. If the file member was included under a generic entry, you may or may not need to change this entry. If you change a generic entry, such as MEMB* to MEM*, check that other members are not affected. If the file member was included as an *ALL entry, you do not need to alter the BRMS/400 list unless you also changed the file name or library name. Use the WRKLBRM command if the object was in an archive list. If you are adding a new list, use the WRKCTLGBRM *ARC command to add the new list to your archive control group.
4. Leave the member to be archived in the normal way. If you want to re-archive this member instantly, you need to create a temporary archive list and control group with inactivity set to zero days. You can use the STRARCBRM command to archive the objects defined in the temporary control group.
5. Optionally, delete the archive history for the old member name. Use the WRKOBJBRM command with the appropriate parameters to select the file in which the renamed member existed and remove the history data. It is possible that you no longer need the data, so delete the BRMS/400 archive history to save space. If not, it is deleted when the expiration date is reached.
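The following minimal CL sketch illustrates steps 1 and 2 of this procedure. The names APPLIB, ORDHIST, M1997, and Y1997 are hypothetical; the override, open, and close simply force the archived member back onto disk before RNMM is run:

   PGM
   /* Step 1: open the archived member so that Dynamic Retrieval    */
   /* restores its data portion from tape.                          */
   OVRDBF     FILE(ORDHIST) TOFILE(APPLIB/ORDHIST) MBR(M1997)
   OPNDBF     FILE(APPLIB/ORDHIST) OPTION(*INP)
   CLOF       OPNID(ORDHIST)
   DLTOVR     FILE(ORDHIST)
   /* Step 2: rename the member now that its data is back on disk.  */
   RNMM       FILE(APPLIB/ORDHIST) MBR(M1997) NEWMBR(Y1997)
   ENDPGM

Steps 3 through 5 (updating the archive lists and the archive history) remain manual tasks through the WRKLBRM, WRKCTLGBRM, and WRKOBJBRM displays.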
12.15.2 Renaming files
If you rename a file, you must retrieve all of the members within that file, rename the file, and alter the BRMS/400 archive lists to reflect the change. The required steps are outlined here:
1. Manually retrieve all of the members in the file. Open all the file members within the file by using the OPNDBF or DSPPFM commands. This invokes the retrieve operation for each member. You need to know all of the member names to do this. You have to open each member, one at a time.
2. Perform the file rename with the RNMOBJ command.
3. Update the entry for the file in the BRMS/400 archive lists. If the file was included under a generic entry, you may or may not need to change this entry. If you change a generic entry, for example FILE* to FIL*, check that other files are not affected. You do not want any unwanted includes or unanticipated excludes. If the file member was included as an *ALL entry, you do not need to alter the BRMS/400 list unless you have also changed the library name. Use the WRKLBRM command if the object was in an archive list. If you are adding a new list, use the WRKCTLGBRM *ARC command to add the new list to your archive control group.
4. Leave the file members to be archived in the normal way. If you want to re-archive these members instantly, you need to create a temporary archive list and control group with inactivity set to zero days. You can use the STRARCBRM command to archive the objects defined in the temporary control group.
5. Optionally, delete the archive history for the old file name. Use the WRKOBJBRM command with the appropriate parameters to select the file in which the renamed member existed and remove the history data. It is possible that you no longer need the data, so delete the BRMS/400 archive history to save space. If not, it is deleted when the expiration date is reached.
12.15.3 Renaming libraries
Renaming a library can be done, but it is time consuming. If you rename a library, you have to follow these steps:
1. Manually retrieve all of the members in all the files in the library. Open all the file members within every file in the library with the OPNDBF or DSPPFM commands. This invokes the retrieve operation for each member. You need to know the names of all of the members of every file in the library to do this and open each member one at a time. It may be worth considering writing a special program that creates a list of all members in all files in a given library and proceeds to open them all one by one (the open and close technique in the sketch after 12.15.1 can be used for each member). There are some file opening limitations within OS/400 and CL that you may run into if there are too many.
2. Perform the library rename with the RNMOBJ command.
3. Update the entries for all of the members in all of the files in the BRMS/400 archive lists and archive control groups. If the file was included under a generic entry, you may or may not need to change this entry. If you change a generic entry, for example LIBR* to LIB*, check that other libraries are not affected. Use the WRKLBRM command to check whether the library name appears in any archive lists. Use the WRKCTLGBRM *ARC command to change all occurrences of the library name in your archive control groups.
4. Leave all of the file members within the library to be archived in the normal way. If you want to re-archive these members instantly, you need to create a temporary archive control group with inactivity set to zero days. Use the STRARCBRM command to archive the objects defined in the control group.
5. Optionally, delete the archive history for the old library name. Use the WRKOBJBRM command with the appropriate parameters to select the file in which the renamed member existed and remove the history data. It is possible that you no longer need the data, so delete the BRMS/400 archive history to save space. If not, it is deleted when the expiration date is reached.
12.15.4 Moving a file
Moving a file to a different library (with MOVOBJ) has the same effect as renaming that file. Follow the procedure for renaming a file by replacing the RNMOBJ command with the MOVOBJ command.
12.15.5 Creating a duplicate file
The creation of a duplicate file with the CRTDUPOBJ command automatically retrieves the file members before copying begins.
12.16 Application design considerations
In 12.13, “Retrieval performance” on page 237, and in 12.14, “Managing your disk space” on page 240, we hinted at the types of changes that you may need to make to your applications for more effective use of Dynamic Retrieval. We talked about splitting files up for performance reasons and changing retrieve modes to improve productivity. This section concentrates on the methods of handling the data that you want to archive.
It discusses the methods in which our data structures may be changed to accommodate Dynamic Retrieval or improve it. It also discusses the type of customizing that an application may need to adapt to the altered file structures. 12.16.1 Member-level archiving The BRMS/400 Dynamic Retrieval function operates at file member level. Archiving involves the save with storage freed of a file member. Retrieve involves the “on-demand” location and restoring a file member to disk. This section is about achieving the best from your file member-level archive and the application considerations that go with it. 12.16.1.1 Applications suitable for member level archiving It may be that your application is already suited for the use of archiving at file member level. Typical features of such an application include: • Using multiple files: The mass of data used by each functional part of the application is split into many small files. Each file may be related to another file or files, but each application transaction may not necessarily require access to all files. Typically, a file is related to some sub-function of the application's main function. For example, an order processing application may have a file for customer details, a file for part details, a file for stock levels, a file for part prices, and so on. 246 Backup Recovery and Media Services for OS/400 • Using multiple members within those files: Each file has a set of different members. Each member consists of a group of related records. The completion of the function (or sub-function) related to that file involves the selection of the particular member required within that file based on a particular item of information. For example, the part detail file may have a different member for each catalog of parts, each catalog may change each month, and there may be a different catalog for each supplier of parts. The member selection is based on supplier name and current month. • Databases fully normalized: Full normalization of data entities implies that there is no redundancy of data, and that the database is broken down into many files. This allows you to go directly to the actual piece of data you need without touching other data at the same time and, therefore, falsely instructing the system that this other data is also active. For example, as a result of a database design using full normalization techniques, you have a part order file that does not contain direct values of each part's description or price. These are referenced with a part identification number and derived from a separate part details file and part price file. At the time you print the order, you access all three files (orders, part descriptions, and part prices). However, months later, you may perform a manufacturing output analysis where all you need is the part's description (part detail file) and the number of parts sold (order file). This way, you do not touch the price data (part price file) and may leave it in its archived state if already archived, or at least not disturb its dormancy rating. • Effective design of logical files: Similar to the argument for normalizing databases, having done all of the hard work in normalizing using multiple members and multiple files, you must ensure that the design of logical files does not lead to the unnecessary inclusion of unwanted or unneeded data. 
For example, in the previous order file example, you may design a month-end manufacturing analysis report (to report on numbers of parts manufactured) that uses a logical file already created for a sales analysis report. Therefore, you are saving on design cost, complexity, and access path maintenance by re-using an existing logical file. However, the existing sales analysis logical file also accesses the parts price file to obtain sales figures, but the manufacturing analysis report does not need this information. You are, therefore, accessing the parts price file unnecessarily and restricting its chances of being archived. • Groups of records with commonality spread across multiple members: If multiple members within each file contain records with no common theme, it is difficult to control access to these members. Data requests may result in sequential searches through each member instead of directly opening the correct one. This is an indexing problem. There is little to be gained from setting up multiple members if the access to these members is to be of an entirely random nature. For example, the customer details file can consist of many members, each with 200 customer records. As new customer information is entered, a member fills up. When a member is full, a new one is created. There is no order to this arrangement and every time the customer detail file is queried, all of the members must be searched. However, if the file was split into 26 members and grouped by the first letter in the customer name, a search on customer name only accesses one member. Chapter 12. Planning for the hierarchical storage management archiving solution 247 • Transaction based records: If a record entered into a database is equivalent to a transaction, and that transaction eventually expires through the simple passing of time, you may archive that record with confidence that it does not need to be retrieved. Furthermore, if you can collect these expired records as time passes and a collection of records expire in the same way and at approximately the same time, you can group these records in a file member and archive them at file member level. For example, collect the order records in the order file using a member for each financial year. When the financial year end reports are all completed, you may be reasonably confident that the file member for the past year is not needed again and it may be archived. If these characteristics are found in your application, it is possible that you may not need to make any changes to derive full benefit from the implementation of archiving with Dynamic Retrieval. If these characteristics are not found in your application, it may be worth considering the sort of changes that may need to be applied to achieve this. The main points to consider are: • To break your large files down into multiple files and make good use of normalization, you need to: 1. Re-design your database, including indexing and cross-referencing. 2. Change the naming of your files. 3. Alter the name references in your application source code. 4. Change the application logic that accesses your databases. 5. Change the inquiry logic to reflect the preceding steps. 6. Rebuild and recompile your files and programs. • To make good use of multiple members with common record level characteristics within your files, you may need to: – Design an indexing structure around the members in each file. This involves all of the re-designing, renaming, logic updating, and re-compiling that was previously described. 
– Use a member list structure that lists the names of the members present in a file and the order in which they should be searched. This requires an exit program to be used in place of the member open statements within your programs that finds the right member and opens it. It may also require the repositioning of the open statements to a point in the program logic where the relevant search data is available.
• To make better use of logical files, you may need to create additional logical files and change the names of the files opened in some programs.
• To split members off at time intervals to enable the migration of blocks of historical data, you need to establish a naming convention for the members and change the search logic to access the most recent members first.

12.16.2 Work-around for less suitable applications
When an application is not suitable for member level archiving, you may need to make some changes. The options are:
• Leave it as it is. You can only make limited use of archiving with Dynamic Retrieval.
• Change the application design to reflect the characteristics described in 12.16.1.1, “Applications suitable for member level archiving” on page 245.
• Design a work-around solution that minimizes changes to the application, but uses special exit programs to intercept database requests, and to manage files and their members.
• Design a work-around at record level that minimizes application change. See 12.17, “Pseudo record-level archiving” on page 254, for a discussion on this.
The remainder of this section concentrates on the design of a member level work-around for database access and file member management and the considerations that go with this approach.

12.16.2.1 Splitting data in your files
The grouping of data records is the chief method by which you may break down your previously unmigratable files into blocks of possibly migratable data. You can split the data both vertically and horizontally.

12.16.2.2 Vertical splitting
This involves breaking up the record formats into several smaller formats. You have to use the fields of a large record to create several smaller records consisting of a few fields each from the single large record. This is often seen as the result of database normalization. Figure 146 shows a typical result of vertically splitting data by breaking a record into several smaller records.
Figure 146. Vertical data splitting (the single large record, RECORD1, is split into several smaller records, RECORD2, RECORD3, and RECORD4, each containing a few of the original fields)

12.16.2.3 Using vertical splitting
Vertical data splitting should be used if possible. If records contain redundant data, the traditional normalization techniques may help here. You may also want to attempt to split records into groups of fields according to their statistical usage. If there are clearly certain fields that are hardly ever used (assuming that they should be there at all), they may be split off into a separate record format, therefore, creating a brand new file. Once again, pay attention to the statistical variation of the use of these fields with time. Only if it can be shown that certain fields are being used less as time goes on, is it appropriate to remove these fields from the file. An example is a change in the use of an application over time.
Suppose that the record format of the database in your payroll application included several fields of information for the clerk that processed each pay check. These fields are there because the clerks used to manually process part of the paperwork before the entire process became computerized. They are still used infrequently when a manual correction is needed. It is possible, therefore, that the usage statistics for these fields are decreasing as time goes on and the accuracy and competency of the payroll application increases. These fields may be vertically split off and placed in a separate file, and that file is archived to tape. Grouping your fields The grouping of fields is a process of balancing three aspects of the application design: 1. Migrate low volatility fields. 2. Reduce data redundancy (by normalization). 3. Increase data access performance. All three may conflict. While it is possible that normalization and vertical splitting for migration may have similar targets, it is also possible that the choice of which fields to split off may differ significantly. Sometimes vertical splitting may be an extension to the normalization process. The full normalization process may also involve the introduction of additional identification fields in each new record format to aid indexing and cross-referencing between records, such as customer number, part number, and so on. The key result of vertical data splitting is the creation of new files within your application. The original file, with its single large record format, is converted into several smaller files. Each small file has its own smaller record format. Logical views may still be constructed to simulate the old record format using multiple formats or joined record formats. The logical views help reduce the impact on code changes. However, if all of the new records are always accessed for every transaction, you may have not gained any enhanced ability to archive some of the less active fields. Note 250 Backup Recovery and Media Services for OS/400 Performance of database access can sometimes be reduced by opening multiple files, using join logical files, and other match and join techniques built into the application code. This is in direct conflict with normalization and vertical splitting. Application changes The application changes needed when creating new record formats and files include: • Redesigning the data description specification (DDS) source for all included data files. • Redesigning the data description specification (DDS) source for all included screen files. • Designing new logical files over new physical file record formats. • Redesigning application display handling code. • Changing file names and record formats and record handling in application database access code. • Re-compiling all involved files, displays, and program modules. 12.16.2.4 Horizontal splitting This involves the grouping of records within a file into separate members. Normally there is some implicit (or possibly explicit) indexing of the records that allows us to split the records into different members based on breaks in the primary (or perhaps even secondary) key. Figure 147, Figure 148, and Figure 149 on page 252 show examples of this. In Figure 147, you see an entire, unsplit file that is keyed on two fields: a primary key and a secondary key. Figure 147. Horizontal data splitting When using DB2/400 to access records in a file, it only allows you to open one member at a time. 
Other members can be accessed with DB2/400, but only after issuing the OVRDBF command to select the member required. This is important to understand if you are considering using horizontal splitting.

Figure 148 shows an example of horizontal data splitting by primary key. The single member file is split into multiple members using the change in the primary key as the boundary for each member. In this case, the primary key becomes redundant.
Figure 148. Horizontal data splitting by primary key (the file is split into MEMBER1, MEMBER2, and MEMBER3 at each change of the primary key)

Figure 149 shows horizontal data splitting by all keys. The single member file is split into multiple members using changes in any key as the boundary for each member. In this case, both the primary and secondary keys become redundant.
Figure 149. Horizontal data splitting by all keys (the file is split into MEMBER1 through MEMBER8 at each change of either key)

Horizontal data splitting tends to create new members within the same file because the record format for each group of records is identical.

12.16.2.5 Using horizontal splitting
Horizontal splitting is best used when you have a large number of records in your database and there is a clear and obvious key with which to mark the divide. The key must be such that, when the records are grouped by the breaks in the key, the resulting members have independent (and preferably different) activity characteristics.

For example, you may consider keying all entries in an accounts payable file by the purchase date and breaking the entries at the change of each month. As the records become older, they are used less frequently. This implies that entire members may become eligible for archive, because each member contains records from within the same month.

However, if you key your customer detail file by the customer name and break it down by the change in the first letter, the result is 26 members, all with approximately the same activity levels. While you may find differences in the activity of the “X” member compared with the “B” member, neither member changes significantly with respect to the time that has elapsed. That is, you probably do not find that the “X” member's relative activity has decreased after six months any more than the relative activity of the “B” member has. In short, the statistical activity of each and every member in this file does not vary with time.

Grouping your records
To create new members, you must group records into collections that share common statistical activity variations over time. To do this, you must:
1. Establish the key: You must try to balance the two (sometimes opposing) requirements for the key:
• Most common searches by the user: It makes sense that the key you use is aligned to the type of searches and access requests that the database needs to honor, either directly or indirectly, as a result of user requests.
• Index breaks that separate statistical groups of records with activity patterns that vary with time: Put simply, the breaks in the key must try to split active sets of records from dormant sets.
2.
Choose the break level: You need to choose at which level of the key you want to break (primary, secondary, and so on) and how you want to specify the break. This depends on: • Size of large file • Number of records in large file • Size of required members • Number of records required in each member • Predicted activity patterns for each member Application changes The key changes you need to make in your application to deal with horizontal data splitting include: 254 Backup Recovery and Media Services for OS/400 • Setting up a member search list, listing the names of members to search for a hit, and the order in which you search them. Include the most active ones first and the most dormant last. You may have different lists for different sub-functions on the same file. • Coding the changes needed to deal with the member search list to maintain and update it. • Coding the changes needed to use the list to have a successful search while minimizing the need to retrieve archived data. 12.17 Pseudo record-level archiving This section concentrates on methods for minimizing changes to our current applications while delivering the most effective possible Dynamic Retrieval solution. It talks principally about the use of special programs to perform the transfer of individual records to and from the application's main files. This section is primarily concerned with making records available for migration without changing the file structure of the main application and explaining how to retrieve those records when they are required. The main benefit from this approach is the successful archiving and Dynamic Retrieval of records for applications that do not normally lend themselves to this type of solution while also minimizing the impact of changes to that application. Some questions that you need to answer during this process include: • How are the records to be copied to another file for migration and what indexing data should we maintain for searches? • When are they to be copied? What triggers the process? • Which records should be selected for migration? • How are the archived records to be retrieved? • What triggers the process for retrieve? • What search criteria can we use for accessing records and when do we decide to search the archived data (instead of just the online data)? • How can we group the records (index them) into groups with common statistical activity? • What direct code changes must we make? • How much database “get” intercept code do we need? 12.17.1 Moving records to an archive file member The principle method for archiving records is by copying selected records to a different file or file member and allowing this file member to be archived. This You should be principally concerned with horizontal data splitting, but, in this case, you are performing the splitting actively (and in an ongoing manner) by constantly moving records to and from tape. The previous section concentrated on splitting the data in the design stages by redesigning the file structures, members, and the application itself. Note Chapter 12. Planning for the hierarchical storage management archiving solution 255 method effectively removes these records from the active scope of the application. Part of our discussion focuses on how to do this. You need to consider the following points when you design your record migration function: • What programs perform the migration? The user application may be modified to include a set of statements to select, copy, and delete certain records within the main files. 
These statements are best coded to be as flexible, generic, and re-usable as possible. This migration is performed by a background job. This job acts as a database manager that starts at predefined intervals, performs a transfer, and goes back to sleep. Alternatively, this job can constantly monitor the file activity and prepare certain records for transfer. • How are the migration programs triggered? Whichever way the migration is performed, you need to establish clear triggering criteria for migration activity. The triggering should depend on the type of file activity and granularity required. It can be based on: – When a file member reaches a certain size – Repeated at fixed times (for example, 2:00 a.m. every day) – At user request – When a certain volume of migratable records accumulates The last option is a more intelligent triggering mechanism, but relies on the ability to time-stamp each record individually, which is not part of the OS/400 database function. • Which records does the migration programs select? This is really a process of active horizontal splitting. The data splitting is performed real-time. You need to index the records and select certain records based on their key values. Once the first migration is performed, the indexes and selection criteria are fixed. The only way of changing them is to retrieve all archived records first. If each record is time stamped, you may be able to index on the time stamp and select the oldest records for migration. See 12.16.2.5, “Using horizontal splitting” on page 253, for more considerations about horizontal splitting. • How do we arrange the migration files? Do you need to create a new member every time you archive more records? If you create a new member each time, you must establish a naming convention for each member and a chain of members. This is much the same as the chain of receivers that you set up for journaling. Each member name contains a generic part and a sequence number part, such as ARCMEM0001, ARCMEM0002, and so on. You also need an index of these members to keep the chain intact and to provide an order in which to sequence them. This is similar to the journal object itself that tracks the receiver chain that it feeds. The archived member chain list can also be used as a type of member list with which a search is initiated for a record. This is the same principle with which OS/400 may search for an object using a library list. If you insist on using the same member for each archive operation, you must either delete all of the previously archived records each time a new group is archived, or retrieve the previously archived records and append the new 256 Backup Recovery and Media Services for OS/400 ones to this member and re-archive the member. If you choose to delete the previously archived records, you really mean delete them (that is, expire the tape on which they reside so that they really are gone forever, no copies on disk, tape, or anywhere). If you decide to append to the archive member, you should be aware that this forces a retrieve of all of the records ever archived every time you want to append. This almost defeats the purpose of archive and retrieve except for situations where this archive process is only performed infrequently and there is enough temporary disk space to accommodate the retrieved member. 
You may also need to temporarily clear out the really old records from this member, since you can never expire the tape (on which it resides as a whole) because this loses all of the archived records, including freshly archived ones. • In what form do the copied records remain in the main file? Once you have selected the records in the master file and copied them to the archive file, what do you do with them? You have three options: – Delete them entirely: In this case, you have no direct reference to indicate whether a record is archived or simply does not exist. Therefore, if a non-existent record was requested, the search may have to go through all of the archived records to find out that this record does not exist. This results in retrieving all records for every unsatisfied query to the main file. The effect of this is reduced by establishing a list (or chain) of archive members through which the search can interrogate. If the records are keyed by a time stamp, it is possible to place a time scope within which the search must remain. A less sophisticated version of this involves naming the last allowable member in the chain that is searchable. You may want to combine the idea of searching through a chain of archived members with the *VERIFY retrieve mode allowing the user to stop the search at any point (by member) by cancelling the retrieve operation. – Keep record stubs in the main file: You can delete all field information relating to a record except for its keys fields. This leaves a stub that uniquely identifies that record and can be used to check the record's existence without querying any archived files. This works similarly to a save with storage freed, and you call it “save record with storage free”. You may need to add a special flag field to each record to indicate whether it has been archived or establish a special condition to indicate migration status. You only save disk space if the record consists of multiple formats and parts of the joined format can be totally deleted from their own physical files. If the record sits in one large record format, all you can achieve is to blank out a number of fields that do not conserve storage space. Another opportunity is to use variable-length records. This way, only the key data takes up space, and the less active data can be removed. Therefore, the file makes more efficient use of storage. – Create an index file: You can create a separate physical file containing a record format that includes only the key information from the main application file, a migration Chapter 12. Planning for the hierarchical storage management archiving solution 257 status flag, and a name of the member that contains the record (if archived). This is, in effect, creating a home-made access path. For every record that is in or has been in the main file, there is an entry that either points back to the main file (online) or to the archived member that contains the record (and perhaps even the relative record number within that member). This has the advantages of definitely saving storage space, only accessing the archived members that are absolutely necessary, and never performing a retrieve for requests for non-existent records. However, there is a maintenance issue in that you must reflect in the index file, every change made to the main file by any job at any time. You may need to restrict access to the file through your formal record handling programs. 
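As an illustration of the member chain approach described in this section, the following is a minimal CL sketch of a migration step that adds the next member in the chain and copies dormant records into it before the member is archived. All of the names (APPLIB, ORDERS, ORDHST, ORDSTS, ORDDAT, and the ARCMEMnnnn convention) are hypothetical, and the selection test is only an example; your own selection, deletion, and index maintenance logic must surround it.

PGM        PARM(&ARCMBR)              /* Next member in the chain, such as ARCMEM0003  */
DCL        VAR(&ARCMBR) TYPE(*CHAR) LEN(10)
/* Create the new archive member in the archive file.                                   */
ADDPFM     FILE(APPLIB/ORDHST) MBR(&ARCMBR) TEXT('Dormant order records')
/* Copy the dormant records into the new member. This example assumes a character       */
/* status field and a character date field; substitute your own keys and cutoff.        */
CPYF       FROMFILE(APPLIB/ORDERS) TOFILE(APPLIB/ORDHST) +
             FROMMBR(*FIRST) TOMBR(&ARCMBR) MBROPT(*ADD) +
             INCREL((*IF ORDSTS *EQ 'C') (*AND ORDDAT *LE '19961231'))
/* The copied records must then be removed from the active member by your own           */
/* application logic, and ORDHST must appear in an archive list so that the new member  */
/* is saved with storage freed on the next archive run.                                 */
ENDPGM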
12.17.2 Retrieving records and integrating into the main file
Depending on your migration approach, the retrieval and subsequent use of archived records can be difficult. It may be easier to tabulate the retrieval suggestions based on the migration methods. Considerations for Dynamic Retrieval and integration into the main file of database records from a list of archive members are shown in Table 5.

Table 5. Dynamic Retrieval of records into main file of database records
Note: The three methods refer to the way records in the main file that have been copied to an archived member are dealt with: deleted from the main file, saved as record stubs (storage freed), or tracked through a home-made access path (index file).
• Trigger: How is the search started? In all three methods, any record get request, such as a read or chain, is intercepted by a special record management program.
• Locate: How is the record found? Deleted records: a simple search through the members in the list. Record stubs: search the key fields in the main file. Home-made access path: search through the special index.
• Search path: Which members are searched? Deleted records: start at the online member and progress offline until the record is found, following the list. Record stubs: only the main file (online). Home-made access path: only the special index.
• Retrieve: How is the record brought back? Deleted records: all searched members are retrieved until the required record is online. Record stubs: if a record does not exist, all searched members are retrieved until the required record is online. Home-made access path: only the member containing the required record is retrieved.
• Read: How is the record read? In all three methods, it is read directly from the retrieved member.
• Update: How is the record updated? Deleted records: the record is added to the main file and updated. Record stubs: the record is copied to the main file (adding the extra data in the other record formats) and updated. Home-made access path: the record is added to the main file and updated, and the index is modified.
• Result: What happens to the record in the archived member after an update? In all three methods, it is deleted.
• Migration: What happens to the retrieved member in the main file? In all three methods, it is archived if it was updated, and deleted if it was read only.

Each entry in Table 5 represents an action item involved in a retrieval situation and shows the approach taken for each of the three methods of searching for records described in 12.17.1, “Moving records to an archive file member” on page 254. Other points to note are:
• The table assumes that the record required is eventually found to have been archived. The two options of read only and update for that record are then investigated.
• Record positioning operations, such as the RPG/400 SETLL operation, cannot be expected to work on a different key from the one that controls the archiving, unless the positioning is in some way restricted, such as to online records only.
• It is possible that a retrieved record is validly re-archived immediately. In this case, an update to a record should be reflected in the archived or retrieved member and not in the main file.
• If the retrieved member is not re-archived immediately, the special record management program can mark this member as online and automatically include it in the online part of the search until it becomes archived again.
• A further option for the retrieved member is to be integrated in full, back into the main file, and then deleted in its entirety.
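The trigger and search path entries in Table 5 can be made concrete with a minimal CL sketch. The object names (APPLIB, ORDHST, ARCMEM0002, ORDINQ) are hypothetical; the point is that the record management program directs the application to one member of the chain at a time, and the open of a member that was archived with storage freed is what starts the BRMS/400 Dynamic Retrieval under the retrieve mode (*VERIFY, *NOTIFY, *DELAY, or *SBMJOB) in effect for the job.

/* Point the application at the archive member that should hold the record.             */
OVRDBF     FILE(ORDHST) TOFILE(APPLIB/ORDHST) MBR(ARCMEM0002)
CALL       PGM(APPLIB/ORDINQ)         /* The file open in ORDINQ drives the retrieve    */
                                      /* if the member is not online.                   */
DLTOVR     FILE(ORDHST)

If the member is already online, the override simply directs the open to it. If it has been archived, the job waits, is notified, or has a retrieve submitted for it, depending on the retrieve policy, and the outstanding retrieve can be managed from the RSMRTVBRM display.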
12.17.3 Application changes
The following application changes are required to manage pseudo record-level archive requests for data:
• Divert all record fetch, update, and add requests to a special record management program that uses the member list to search for the record. Database triggers may be an efficient way to implement this.
• Possibly add logic to decide whether to access the archived records rather than letting a user decide.
And that's it! But remember, all of the work is in customizing the solution, involving designing the record handling program, creating archive members and indexes, and all of the other things previously mentioned.

12.17.4 Running queries over archived records
We touched several times on the idea of setting boundaries for record search or usage. This applies particularly to the running of query type applications over a file. You must specify up front which records are to be used for the query to gain sensible and satisfactory results. The types of options that you may consider are:
• *ONLINE: Search records on disk only.
• *ALL: Search every record (including all archived ones).
• Date range: Search for records in the specified range only.
• Member: Search up to this member in the chain.
• Member *ONLY: Search only this member.
• *ARCHIVED: Search only archived records.
Suitable processing to retrieve the correct records must be performed before running the query. You need to create your own programs to achieve this. If you do not know the status of your records (that is, which of them are archived), it may not be appropriate to use the *ONLINE or *ARCHIVED options. You may need access to all records that were changed or created since a certain date. This requires individual time stamping of every record in the database and indexing based on that time stamp.
The retrieval of archived records requires merging some records back into the main file. You may choose whether you merge complete members or select specific records to merge. Either way, you must avoid duplicating records and use the following routine for all of the records accessed:
1. Select a record.
2. Copy the record to the main file.
3. Delete the record from the archive file.
Needless to say, if you find that merging archived members is happening frequently (perhaps global queries are needed quite often), you must question the validity of archiving them in the first place.
Perhaps one unexplored method of performing queries over large amounts of archived data is that of direct tape input/output (I/O). Because of the read-only nature of queries, this may be an excellent approach. See 12.17.5.1, “Using direct tape I/O” on page 260, for more details. Of course, if you perform a query over archived records only (*ARCHIVED), there is no need to merge the records into the main file. You may run the query directly over the retrieved member and re-archive it.

12.17.5 Time stamping every record
It has been discussed that for effective record level archiving, record time stamping is essential. This involves the addition of a time/date field to the record format.
The time-stamp field may be updated every time the record is updated, read, or both, and used to establish a selection criteria for moving certain records into a file member ready for archiving. You need to make the following changes: 1. Update the record format for the file to include a date/time stamp field. 2. Add logic to the application code to stamp each record as it is used. 3. Set up security so that only the approved applications with time-stamping logic are allowed to access the file. 4. Recompile programs and files. You may be able to construct a database request intercept program (perhaps using triggers) that replaces your normal database record fetches for your application files. This program can handle the time stamping of all of the records. 260 Backup Recovery and Media Services for OS/400 12.17.5.1 Using direct tape I/O Direct tape input/output (I/O) is where a user program writes records directly to tape or reads them directly from tape. No save or restore CL commands are used. This is different from Dynamic Retrieval, which uses a basic save/restore interface. Direct tape I/O may be an alternative solution for read-intensive operations on archived data. You can use a tape file to write records directly to tape. The record key must be set before you write the block of data, and it is not possible to insert or add records to the initial block. However, with the use of the BRMS/400 fast search facility for tape drives, such as the 3490, 3590, and 3570, you may be able to locate the beginning of the tape file quickly and perform intensive read only sequential operations such as running a query. The BRMS/400 inventory stores the starting block ID on tape of a tape file and fast forwards the tape directly to that position instead of searching sequentially through the entire tape. Note: This does not mean Query/400. Tape file I/O means a program reading from or writing sequentially to tape. The records must be read and a *FILE object created from them before Query/400 is used over them. The fast search implementation is part of the BRMS/400 product. If you want to take advantage of this facility, you must perform all of your tape input and output under BRMS/400 control. To do this, you need to use the Set Media using BRM (SETMEDBRM) command. For further information, refer to Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). © Copyright IBM Corp. 1997, 2001 261 Chapter 13. Practical implementation of hierarchical storage management archiving capabilities This chapter deals with the what and how of Dynamic Retrieval with BRMS/400. The first part discusses the types of data that you may consider for archiving. The second part contains details on how to set up and use the BRMS/400 functions, groups, and policies provided to enable Dynamic Retrieval. 13.1 What to archive This section lists some of the types of data that you may find useful to archive with retrieval and tabulates suggested implementations for some of the discussed cases. 13.1.1 Types of objects to archive for Dynamic Retrieval We already stated that the support for Dynamic Retrieval covers file members only. We also discussed a handful of cases where certain types of file members (or objects that can be in some way copied or converted to file members) are suitable for archive and retrieval. We attempted to classify a type of file member by the function or purpose it serves, for example, the type of application that uses it and what the application does with it. 
This is not intended as an exhaustive list. Neither is it intended to be a list of the only officially supported cases. It is a starting point for discussion that may assist you in the design of a working implementation. 13.1.1.1 Output file data This is a simple isolated file member that holds a set of records and has no relationship with any other files. Typically, it is a file created by running an OS/400 command with output file support. The results are used once or twice, and the file is forgotten. These file members may sometimes be significant in size. 13.1.1.2 Temporary data files This is an isolated data file that has been created for a one-time task and is forgotten. The original use may have been anything from copying some records to a backup file during application testing to using it as a file transfer recipient for downloading to a PC. 13.1.1.3 Disused test data When a new application is implemented on a system or a new release is installed, we often witness extensive testing of the new or updated package. Part of this testing involves creating special test environments with non-critical test data residing in test libraries. When testing is complete, the application typically moves as a whole to the production environment, leaving the test data dormant. It may be useful to remove the test environment from the system with archiving, but save the effort of re-creating it by retrieving it. Certain fragments of that test environment may also be usefully retrieved if it is found necessary to re-test certain components of the package following the application of a fix, for example. 262 Backup Recovery and Media Services for OS/400 13.1.1.4 Data file with transaction-based members A transaction-based application (for example, telephone sales recording of orders) amasses records as time goes by. These records become dormant at a certain point in time (for example, when the order has been fulfilled and payment is received). The records, therefore, become historical data for auditing purposes. There may even be a second level of dormancy once some period of auditing is over (for example, at the close of the financial year). At each stage, the status of the data changes. A business decision based on knowledge of the volatility of the data at the various stages and access requirements must be made as to when the data becomes dormant. If the application uses different members using a time-based key to allocate records to a file, this is a particularly suitable package for this implementation of hierarchical storage management. 13.1.1.5 Data file with transaction-based records If an application is transaction based but does not use a time-based division of records among separate file members, a different approach must be taken. Archiving can only be performed at the file member level. Thus, when records at different stages in their application life share the same file member, you must develop a more sophisticated solution. Further considerations with record-level versus member-level archiving is found in 12.16, “Application design considerations” on page 245. If you implement pseudo record-level archiving, you must synchronize your record movement activity (run by a separate program) with your archiving activity (run by BRMS/400). See 12.17, “Pseudo record-level archiving” on page 254. 
13.1.1.6 Statistically random access data files An application with more random access characteristics, when referring to the age of the records, demonstrates much different file access profiles than that of a transaction-based one (for example, an expert system to diagnose medical conditions). Typically, much of the data may already be collected and arranged before the application was started. There may be a core of data frequently accessed such as the symptoms of the more common ailments and a much larger set of infrequently used data. A large amount of the data may never be updated. For example, many ailments have a stable set of symptoms. There is a great opportunity here to amass an extremely large “dictionary” of information. Before you begin the process of archiving, you may need to break the data files down into members (horizontal data splitting) according to typical usage characteristics. This involves a certain amount of statistical analysis of each record and its subsequent placing in a file member dependent on its anticipated usage. You have to place frequently accessed data into the frequently accessed members. You place the infrequently accessed data into the infrequently accessed members and archive the infrequently accessed members. If the application manages this, it is suitable for archive with Dynamic Retrieval. 13.1.1.7 Random access based record access For random access file activity when the file members are not arranged suitably or where the application does not manage the suitable use of file members, you need to implement pseudo record-level archiving and synchronize the record movement with the archiving. See 12.17, “Pseudo record-level archiving” on page 254, for more information. Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 263 13.1.1.8 Mixed characteristic data files Many applications have a mixed environment of file usage characteristics. A warehousing application may have a parts list with fairly low volatility since the parts stored may not change frequently. However, the stock level entity is highly volatile as new stock arrives and items are sent for delivery every day. Moreover, the parts inventory may have random access characteristics with respect to record age, but the parts order file may have lots of transaction-based activity. Archiving with Dynamic Retrieval should be approached on a file-by-file basis for such mixed characteristic applications. 13.1.1.9 Source file members Source file members are characterized by their contents. Typically, this is data that is updated by programmers and system designers and read for program or database compilations. They tend to be of fixed flat record format. They are different from data file members: • Normal business applications do not directly use (open, read, update) source file members. Application development tools (which are applications in their own right) may be using source files for editing or compilation, but the usage patterns differ greatly from data files. • It is less likely that queries are run over source file members. • Access is typically interactive for update (editing) and batch for read only (compile). Source physical file members offer great potential for archiving with Dynamic Retrieval because they are typically not used for long periods of time and are split into separate file members with each member having its own independent usage statistics and age. 
13.1.1.10 Digital libraries With the advent of hierarchical storage management on the AS/400 system, you begin to see the emergence of true library type applications. You can store large amounts of information with the understanding that any small portion of your library's data is required at any time, but never the entire data at once. Examples of such applications include patient health records, finger-print files, accounting history, cartographic databases, and so on. These applications should be designed from scratch to capitalize on the use of Dynamic Retrieval. Typically, you may expect to see information broken down into libraries, cases, shelves, books, chapters, topics, sub-topics, and paragraphs. Depending on the typical size of any one of these divisions, you may see chapters, topics, or even sub-topics stored in their own file members. 13.1.1.11 Historical data Some applications may need to keep certain data for extended periods of time. This is often a legal requirement (for example, in accounting applications, you may be required to keep all basic accounting data for at least seven years). It is not usually a business requirement to have all this historical data online. Moving the data to microfiche or some other means of mass storage may be inappropriate because of possible access requirements or even expense. 264 Backup Recovery and Media Services for OS/400 If the historical data is grouped in clearly-defined sets, and if these sets are correspondingly arranged in different file members, the Dynamic Retrieval function may be used to good effect. In this case, the application is responsible for “off-loading” the historical data into historical file members and the subsequent management of these file members. An example of such an application could be year-to-year accounts. 13.1.1.12 Active data sets There are cases where specific data sets are part of a regularly used application function and do not appear to require archiving. However, it may require a detailed understanding of the application functional structure to predict the activity levels of each individual file member. It is possible that a global inclusion of all of the data sets in a BRMS/400 archive list is feasible as the truly active file members never become archived. This allows for archiving some of the less active parts of the data structure. Use care when setting the dormancy levels for the archive group to avoid repetitive archive and retrieval cycles. The impact of retrieval delays should be studied in detail. If the application function is of a highly performance critical nature, the gains derived from archiving are outweighed by the performance hit of a single retrieval operation. The choice of retrieve mode is also critical. See 12.7, “Retrieval considerations” on page 229, for more information. 13.2 Suggested implementations of Dynamic Retrieval When you are planning to archive data files, you must remember that the support for Dynamic Retrieval is based at the file member level. The main points to consider include: • How critical is the file member data to the successful operation of the business? • What are the legal requirements for retention of the data in the file member? • What is the size of the file member? • How frequently is it accessed? • What is the nature of the application (or applications) that uses (or use) this file member? • What is the impact to the application of having to retrieve a file member from tape? • What is the restore time for this file member? 
• What type of access is required: read, update, add?
• How long a period of inactivity is regarded as sufficient to mark this file member as dormant?
• How long should the file member be kept (how long to keep the tape copy)?
• What security is required for the tape copy of the file member?
• What backup (duplication) of the file member tape copy is necessary?
Tabulating the answers to the preceding questions is the first step in establishing the optimal system setup and BRMS/400 configuration. Remember that BRMS/400 supports a hierarchical policy structure, which, therefore, allows you to set global control group attributes through the System Policy and the Archive Policy. You may want to establish a set of the most commonly used attributes, set the policies with these attributes, and override these values at the control group level where required. For example, you may decide that the most commonly used dormancy criterion is one year. Set this in the Archive Policy. Then, for every control group that requires a different value, override the value in the control group's attributes.
Note: A more prudent approach may be to use the safest value for the Archive Policy. In this example, you may have a dormancy period of five years set in the Archive Policy. That way, when you create your Archive Control Groups, if you forget to override the dormancy value, the impact is limited; you do not suddenly archive every object in the archive control group, as you would if the Archive Policy dormancy were set to one day.
The file members may be grouped according to common archive or retrieve characteristics and entered into archive lists within BRMS/400. These lists may themselves be grouped according to common parameters related to the method of archiving them and entered into control groups within BRMS/400. The control group parameters are tailored and the groups scheduled to run on a regular basis. You can find more details on how to set up the BRMS/400 configuration in 13.3, “Using BRMS/400 for hierarchical storage management” on page 267.
The preceding structure should be used at all times when planning your implementation of hierarchical storage management. You should examine each data set in detail and derive the most appropriate settings for your BRMS/400 setup. Take into account:
• Overall business objectives
• Individual “user class” requirements
• Agreed service levels
• Application design constraints
• System constraints
• BRMS/400 function
Table 6 on page 266 lists the data set types previously mentioned and suggestions for the necessary design points.
Note: This table is, by no means, exhaustive or conclusive. It is meant as a reasonable test and nothing more. You should always plan your particular implementation thoroughly, examining each data set individually. A keen understanding of your own business and the applications that you use is vital and should be exploited fully.

Table 6. BRMS/400 Dynamic Retrieval guidelines
Each entry gives the data type, its typical application or use, the business criticality (H/M/L), the dormancy level for archive qualification, the retention period for the tape copy, the suggested retrieve mode, and notes.
• Outfile data — Query the results of CL commands. Criticality: L. Dormancy: 3 months. Retention: 1 year. Retrieve mode: *DELAY. Notes: Some applications may use outfile support; do not include these files in your archive lists. The application should manage them independently. If archived, they should be part of the application archive.
• Temporary data files — Temporary backup, record copies, file transfer, and so on. Criticality: L. Dormancy: 1 month. Retention: 3 years. Retrieve mode: *DELAY. Notes: Retrieval of certain portions should be useful.
• Data file with transaction-based members — Typically order entry, accounting, sales analysis, and so on. Criticality: M. Dormancy: 1 to 3 years. Retention: 5 years and up, depending on business or legal requirements. Retrieve mode: *VERIFY (small) or *SBMJOB (large). Notes: *SBMJOB may actually improve performance in some cases.
• Data file with transaction-based records — Typically order entry, accounting, sales analysis, and so on. Criticality: H. Dormancy: immediate. Retention: 5 years and up, depending on business or legal requirements. Retrieve mode: *VERIFY. Notes: The archive should be performed immediately after the movement of records to the special archive member. Using the *SBMJOB retrieve mode may actually improve performance in some cases.
• Data file with random access members — Typically stock control, medical analysis, customer files, and so on. Criticality: H. Dormancy: 1 to 3 years. Retention: 5 years and up, depending on business or legal requirements. Retrieve mode: *VERIFY (large) or *NOTIFY (small). Notes: *SBMJOB may actually improve performance in some cases.
• Data file with random access records — Typically stock control, medical analysis, customer files, and so on. Criticality: H. Dormancy: immediate. Retention: 5 years and up, depending on business or legal requirements. Retrieve mode: *VERIFY. Notes: The archive should be performed immediately after the movement of records to the special archive member. Using the *SBMJOB retrieve mode may actually improve performance in some cases.
• Data file with mixed characteristics — Example: a customer order application where generating an order is transaction based and fetching customer data is random access. Criticality: H. Dormancy: 1 to 3 years. Retention: 5 years and up, depending on business or legal requirements. Retrieve mode: *NOTIFY (performance critical). Notes: You may even analyze each individual data set within the application and allocate separate archiving and retrieval conditions for each one.
• Source file members — Source editing applications and compilers. Criticality: M. Dormancy: 1 year. Retention: 5 to 10 years. Retrieve mode: *SBMJOB. Notes: The value of the intellectual property within the source files must not be lost by discarding the archive copy early.
• Digital libraries — Reference information applications. Criticality: H. Dormancy: 2 weeks. Retention: 10 years and up. Retrieve mode: *NOTIFY. Notes: Typically, small chunks of the massive library are retrieved, and therefore *NOTIFY is probably appropriate.
• Historical data — Transaction-based applications. Criticality: H. Dormancy: immediate (coincides with period end). Retention: 5 years and up, depending on business or legal requirements. Retrieve mode: *VERIFY. Notes: Archive should take place as soon as the historical data is placed into the archive members.
• Active data sets — Any business application. Criticality: H. Dormancy: 18 months. Retention: 10 years and up. Retrieve mode: *NOTIFY. Notes: 18 months of dormancy is a reasonable period to determine whether data needs to be archived since it was last used.

13.3 Using BRMS/400 for hierarchical storage management
This section provides an overview of the basic functions needed to set up an implementation of hierarchical storage management with BRMS/400. Where appropriate, we give detailed specific actions to take with the BRMS/400 product. For a full understanding of BRMS/400 and how this particular topic fits into the overall BRMS/400 structure, you must be familiar with the contents of Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171).

13.3.1 Review of the BRMS/400 structure
The parts of BRMS/400 that you need to use to set up a working Dynamic Retrieval environment are:
• Archive Lists: WRKLBRM *ARC
• Media Classes: WRKCLSBRM *MED
• Move Policies: WRKPCYBRM *MOV
• Media Policies: WRKPCYBRM *MED
• Control Groups: WRKCTLGBRM *ARC
• Archive Policies: WRKPCYBRM *ARC
• Retrieve Policies: WRKPCYBRM *RTV
• Job Scheduling: WRKJOBSCDE
• BRMS/400 Logs: DSPLOGBRM *RTV
• Resume Retrieve: RSMRTVBRM
• Set Retrieve: SETRTVBRM
• Set Media: SETMEDBRM
See Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) if you need additional information on the commands and the parameters. The process we adopted is outlined here:
1. Identify data sets suitable for archiving.
2. Establish suitable archive criteria for each data set.
3. Group the data sets by common characteristics.
4. Identify the jobs associated with the various data sets.
5. Build BRMS/400 archive lists (one for each group or sub-group of data sets).
6. Incorporate the archive lists into control groups.
7. Create any required special media classes for archive.
8. Create any required special media movement policies for archive.
9. Create media policies for each group of data sets.
10. Establish an archive policy.
11. Build an archive control group for each group of data sets.
12. Adjust each archive control group's attributes where they vary from the archive policy.
13. Establish a retrieve policy.
14. Build alternative retrieve policy settings for any jobs that require different retrieve modes. These can be implemented using the SETRTVBRM command in the user's initial program.
15. Add the archive jobs to the scheduler (if required).
Ideas and suggestions for the first four steps are included in the previous sections of this publication. This chapter deals with the practicalities of setting up BRMS/400, which includes the remaining steps. We also deal with:
• Checking the BRMS/400 logs for results from the archive run.
• Controlling retrieve operations from the RSMRTVBRM display.

13.4 Setting up BRMS/400 for archive with Dynamic Retrieval
This section deals with the practicalities of setting up the necessary archiving details in BRMS/400 to implement Dynamic Retrieval. The next section deals with the retrieval part.

13.4.1 Archive lists
The first part of archiving for Dynamic Retrieval is the identification of candidate file members that you want to archive. This is done at four levels, depending on how granular you want to be. The four levels are:
• ASP level
• Library (or generic library) level
• Object (*FILE) level
• Member level
The ASP and library levels give you the opportunity to specify a large number of candidate file members without much typing. You can specify an ASP number as a special value in the archive control group (for example, *ASP03) or a library name such as PAY*. When specifying these values, BRMS/400 checks all of the file members in ASP 3 or in all of the libraries beginning with the characters “PAY”. However, it is rare that you want to actually archive at these levels. They are useful in producing Archive Candidate reports to assist in estimating potential space savings.
Typically, an object and member level archive candidate selection is most often used. With these, you must group these candidate file members into lists of objects with something in common.
You may make good use of the generic selection facilities within BRMS/400 if, for example, the file members are all in the same file. Each list of objects may be entered into a BRMS/400 archive list. These lists are placed into a BRMS/400 archive control group. It is the control group that specifies the archiving parameters such as dormancy levels and media policies. Therefore, it is feasible to split groups of objects that currently have similar archive or retrieve characteristics into several individual archive lists to allow additional flexibility for future changes. To work with BRMS/400 lists, enter the WRKLBRM command. You must create your archive list with *ARC for the Use value and *OBJ for the Type value. You can also select *FLR to archive folders or *SPL to archive spooled files. The Add Object List display is shown where you may proceed to enter the objects required for the archive. Simply type in a sequence number, library name, object name and type, and whether this is an include or exclude selection as shown in Figure 150. The archive processing takes place in the order shown on the display. Figure 150. Adding objects to an archive list When you complete your archive list, press the Enter key once more to save the list. If you press F3 or F12, you lose all of the changes you just entered! Continue to create as many lists as you need. You use these lists later in the various archive control groups that you create. Be aware that neither folders or spooled files currently have Dynamic Retrieval support. Important Add Object List SYSTEM04 Use . . . . . . . . . : *ARC List name . . . . . . . ARCLIST Text . . . . . . . . . Main List for Archive with Retrieval Type choices, press Enter. Selection Seq Library Object Type *INC/*EXC 60 KARY SOURCE *FILE *EXC 10 QGPL *ALL *FILE *INC 20 RBOWEN MYFILE *FILE *INC 30 KARY *ALL *ALL *INC 40 RBOWEN BIGFILE *FILE *INC 50 RBOWEN SOURCE *FILE *INC 270 Backup Recovery and Media Services for OS/400 13.5 Media classes for archive In general, you do not need to create separate media classes for archive. You only need to create them if you want to use different tape drives for archiving, or use a different tape format, or not even share your archive tapes with other systems or backup jobs. Use the WRKCLSBRM *MED command to create a new media class, and change the necessary parameters in the Add Media Class display shown in Figure 151. Figure 151. Add Media Class display 13.5.1 Move policies for archive You undoubtedly want to create special move policies for your archive-with-retrieve tapes. It is unlikely that these tapes can cycle through the locations in the same way as regular save tapes or even non-retrieve type archive tapes. You may need to create several move policies. We recommend at least two move policies. The first move policy should be used for the “active” tapes that contain the “active” data that has been archived and can be required for retrieval at any moment. This set is the copy of the original archive set. Because the duplicate set was created after the original set of tapes, BRMS/400 regards these tapes as the If you want to include a complete library of objects as candidates, it is easier to enter that library name as an entry in an archive control group rather than including it in an archive list. 
Either way, for retrieval purposes, remember that if you include generic values, or even whole libraries or ASPs, you can easily archive object types such as programs, logical files, journal receivers, and so on that are not supported for Dynamic Retrieval. These objects may archive without problems. However, you do not realize what you have done until it becomes time to retrieve an unsupported object, at which time the system simply reportes that it is saved with storage freed and the user program ends in error. As mentioned earlier, the ASP and library-level values are good for creating Archive Candidate reports. Note Add Media Class Type choices, press Enter. Media class . . . . . . . . . . . . ARCMED Name Density . . . . . . . . . . . . . . *FMT3490E *FMT3480, F4 for list Media capacity . . . . . . . . . . *DENSITY *DENSITY, Number nnnnn.nn Unit of measure . . . . . . . . . _ 1=KB, 2=MB, 3=GB Mark for label print . . . . . . . *NONE *NONE, *MOVE, *WRITE Label size . . . . . . . . . . . . 1 1=6 LPI, 2=8 LPI, 3=9 LPI Label output queue . . . . . . . . *SYSPCY Name, *SYSPCY, *PRTF Library . . . . . . . . . . . . . __________ Name, *LIBL Shared media . . . . . . . . . . . *YES *YES, *NO Text . . . . . . . . . . . . . . . Special archive media Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 271 latest versions. You must use the duplicate set as the “active” set and retain them close to the drives, for example, within your tape library device or a tape rack. This move policy should keep the tapes close at hand, that is, near to the tape drives that are used for retrieve operations. If you have an automated tape library device, you may even set the move policy so that the tapes remain in the library at all times. You must try not to compromise the security of these tapes against accessibility. Remember that leaving them out in open racks exposes live business data to potential theft and to damage from accidents, fires, floods, and so on. The second move policy should be for the original tapes that we recommend you make a duplicate of after every archive operation. You should send these tapes immediately to a different site from the site in which the “active” archive copies are stored. You may repeat this pairing of move policies for each different physical location in which archive tapes are likely to be stored. To create a move policy, use the WRKPCYBRM *MOV command, and create a move policy such as ARCMOV. Define the sequence and duration of all of the locations to be visited on the Create Move Policy display shown in Figure 152. Figure 152. Create Move Policy display 13.5.2 Archive media policies You need to create a separate archive policy for every different retention period you want to use for your archived data. The retention period indicates how long a piece of data on a tape should stay active from the day that the data was written to that tape. You can find approximate reasonable-test figures for this value in Table 6 on page 266. When the retention period is exceeded, the data on the tape is expired; the tape becomes a scratch tape and is available for any new backup or archive operation. You may set retention by the number of elapsed days, the number of versions to keep, permanent (keep data for ever), or for a fixed date. We make the following recommendations for these options: Create Move Policy SYSTEM04 Move policy . . . . . . . . . . ARCMOV Home location . . . . . . . . . *SYSPCY Name, *SYSPCY, F4 for list Use container . . . . . 
. . . . *NO *YES, *NO Confirm moves . . . . . . . . . *YES *YES, *NO Calendar for working days . . . *ALLDAYS Name, *ALLDAYS, F4 for list Calendar for move days . . . . . *ALLDAYS Name, *ALLDAYS, F4 for list Text . . . . . . . . . . . . . . Archive tapes only Type choices, press Enter. Seq Location Duration 40 RACKS *EXP 10 ATL01 180 20 OFFSITE 180 30 VAULT 30 272 Backup Recovery and Media Services for OS/400 • Permanent: This is generally unsuitable because if a data item is retrieved, it may be changed and re-archived at a later date. Therefore, the original version is now redundant but continues to occupy its tape forever. However, since version control at the member level is not currently supported, the permanent option may be the only way to keep your archived data indefinitely. If you implement this, be aware that every time you archive a data item with permanent retention, you say good-bye to that tape forever. It is unlikely that you are in a position to manually expire the tape because you have no idea whether the items archived to it have since been retrieved. • Date: It is unlikely that you need to discard all of the archived data on a certain date, especially if you have no idea when it may be retrieved. Use this option with caution. If you specify an expiration date, you may need to modify existing policies or create new policies and control groups as time passes. An example may be a tax analysis application where the data must be kept for seven years or more, regardless of the number of times that this data was retrieved. Therefore, you may have a control group for 1994 (called “TAX94”) that listed the 1994 tax files and a 1994 media policy (called “TAX94”) that expires all of the data on January 1, 2002 (seven years after the end of 1994). Every time a 1994 tax file was retrieved and subsequently re-archived, the expiration is always in January 2002. The consequence of this system is that you must create new archive control groups (and possibly new archive lists) and new media policies each year (but you should only need seven of each). • Versions: Version control is currently unsuitable for archiving because the versioning algorithm works at a tape file level as opposed to a file member level. Consequently, you may have several file members archived in a single save with storage freed operation, producing a single tape file with a label based on the name of the data file. When a group of different file members within the same data file are subsequently archived, you create what appears to be a second version of the same file members, but it is not. Currently, BRMS/400 prevents us from using versioning with archive control groups. If version control for archive at a member level ever become available, it will offer an elegant way of permanently keeping your archive data on tape but expiring older versions of members that have since been retrieved and re-archived to another tape. • Days: The number of days elapsed is by far the simplest method for retaining our archived data tapes. The retention period is important. If you set it too short, you may completely lose important data far too soon. Expiration of a tape with archived data is effectively deleting that data. If you set the period too long, you experience a degree of data fragmentation on your tapes. Every time an object is retrieved, it is restored from tape and used on the system. At a later date, it is archived again and the original copy becomes redundant or “expired”. 
For a tape that contains several archived objects, as time passes, more and more of the archived objects “expire”. This is all wasted space on the tape and can only be reclaimed when the entire tape is expired by BRMS/400. In extreme cases, all of the archived data might have become redundant (and, therefore, the entire tape is redundant), and the tape is still not expired (and, therefore, re-usable) by BRMS/400.
In the media policy, you also establish the link to the media class that you want to use (possibly created in 13.5, “Media classes for archive” on page 270) and the move policy that is appropriate (created in 13.5.1, “Move policies for archive” on page 270). If you are using an automated tape library, you may also specify the name of the required library location in the Storage location parameter when you create a media policy. This helps BRMS/400 select the correct tape drives to use when performing archive or retrieve operations. Use the WRKPCYBRM *MED command to create a media policy. On the Add Media Policy display, set the parameters according to the results of your implementation planning work completed in 13.1, “What to archive” on page 261. After you create the necessary media policies and archive lists, you can create the required archive policy and base your control groups on all three.
13.5.3 Archive policy
The archive policy is the system-wide default set of controls that governs the behavior of all archive control groups when they are executed. Any of the parameters in the archive policy can be overridden within each individual control group (using the control group's attributes), but the archive policy serves as a system standard and a sort of “good practices” guide. You can access the archive policy directly by using the WRKPCYBRM *ARC command, and you can change the parameters to meet your requirements. A full functional description of all of the parameters shown is available in Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171). However, for the purpose of Dynamic Retrieval, we take particular note of the Include, Save access paths, and Default weekly activity parameters.
Note: The following recommendations are intended for all archive control groups that archive objects for use with Dynamic Retrieval. If you do not perform any other kind of archiving, it may be appropriate to follow these recommendations for the archive policy and allow all of your archive control groups to default to the archive policy. If non-retrieve archive control groups also exist, you must take care with this approach. It may be best to specifically set all of these parameters within each archive control group and leave the archive policy as general as possible.
13.5.3.1 Inactivity limit
This is the dormancy criterion that is used to qualify each object for inclusion in the archive. It is specified as the number of elapsed days since the object was last used or updated, whichever is the more recent. This is the column headed “Dormancy Level for Archive Qualification” in Table 6 on page 266.
13.5.3.2 Archive date for *FILE objects
For file objects, you may be more specific about the date that you want to use for dormancy qualification. This parameter allows you to use only the last used date or only the last changed date as the checkpoint for the dormancy duration.
BRMS/400 may still use both as described in the previous paragraph. This facility allows you to, at the control group override level, have two control groups with the same file objects listed, but different dormancy criteria for the last update and last used. Using this approach, you may want to set a longer period to wait for archive if the file has recently been updated, and a shorter wait until archive if the file has only been read. You can also use this approach to differentiate between active data files (recently changed) and files that have simply been retrieved for a small number of reads (recently used), although this is not a watertight assumption to make. 13.5.3.3 Objects able to be freed To use the BRMS/400 Dynamic Retrieval, function you must specify *YES to this parameter to enable saving the objects with storage freed. 13.5.3.4 Retain object description Again, you must specify *YES to this parameter if you want to use the Dynamic Retrieval as this is the parameter that initiates the save with storage freed operation. If *NO is specified, the object description is deleted after it has been saved to tape. 13.5.3.5 Objects not able to be freed We advise you specify *NO for this parameter because this ensures that it is less likely that you will archive an object that cannot be retrieved dynamically. But remember that there still may be a significant number of objects that can be saved with storage freed but are not supported by the BRMS/400 Dynamic Retrieval function. 13.5.3.6 Save access paths We recommend that you specify *YES to this parameter for all objects that may be retrieved. This is so that the performance of the retrieve operation, especially if it is in the *NOTIFY or *VERIFY mode, is not inhibited by a lengthy access path rebuild phase. 13.5.3.7 Default weekly activity The Default weekly activity parameter controls the days on which archiving may take place. Enter an asterisk (*), or whichever character you defined if you tailored the BRMS/400 presentation controls, in the days that you want an archive to run. You may choose to run the archives less frequently than your backups to increase availability of the system or to increase the amount of data that is likely to be archived in one operation. You do not necessarily need the system in a quiesced state to perform archiving. By definition, the objects to be archived are not in use, but there may be special conditions which apply. For example, immediate archive or an exclusive lock on the entire library prevents objects from archive. Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 275 Our main recommendation for your archive activity plan is that you start archive immediately after your backups have completed. This minimizes the impact of archive media errors because a backup copy of the data exists. You can also use the DUPMEDBRM command to create duplicates of your archived media for safety reasons. 13.6 Archive control groups Having created your archive lists, you are in a position to build archive control groups. You have set your archive policy to reflect the most desirable run-time options for all of your archive control groups. If the only archiving you are performing is archiving for Dynamic Retrieval, it is possible that you have set the archive policy to be most suitable for all of your archive control groups. Use the WRKCTLGBRM *ARC command to create an archive control group. 
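Before filling in the control group entries, one brief aside on the media duplication recommended above: once your archive volumes are marked for duplication, the duplicate "active" set can be produced with a single command. The following is only a sketch under assumptions: the FROMVOL(*SEARCH) special value (described in Appendix A) selects volumes that are marked for duplication, and any from/to device or volume parameters that your release requires are omitted here.

/* Duplicate all archive volumes that are currently marked for       */
/* duplication.  Add the from/to device parameters that your release */
/* of DUPMEDBRM requires.                                             */
DUPMEDBRM  FROMVOL(*SEARCH)

You could run this immediately after each archive run, before the media movement to the offsite location takes place.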
Specify the names of the archive lists that you created earlier, or enter the names of the libraries that you regard as suitable candidates for archiving. An example is shown in Figure 153.
Figure 153. Edit Archive Control Group Entries display
Edit Archive Control Group Entries SYSTEM04
Group . . . . . . . . . . : MAINARC
Default activity . . . . . *ARCPCY
Text . . . . . . . . . . . Archive Control Group
Type information, press Enter.
Weekly Save
Archive List Activity While
Seq Items Type SMTWTFS Active
60 *ASP03 ____
10 ARCLIST *OBJ *DFTACT *NO
20 SALESLIST *OBJ *DFTACT *NO
30 MYLIB ____ * * *NO
40 STOCKLIST *OBJ * * * * *NO
50 ARC2LIST *OBJ *DFTACT *NO
Note that the F19 key allows you to display a list of libraries to choose from. In each case, you are simply listing available candidates for archive. The selection of objects that actually are archived is performed at run time using the list of archive candidates and the Include Criteria specified in the control group attributes or the archive policy.
You may also want to specify a different archive weekly activity for each library or list at this point. If you leave this field blank, it defaults to *DFTACT, and the activity is taken from the default activity, which is a control group attribute and can also be entered at the top of the display. Remember that each of the control group's attributes may also default to the archive policy, and some of these may also default to the system policy.
Note: The more data that is archived in one run (and, therefore, on a single set of tapes), the more likely fragmentation is to occur. If you archive frequently, you only have a few objects on each tape. This, in turn, could lead to a great deal of tape space being wasted.
After creating the archive control group, you can change any of the attributes such as the tape device to use or the dormancy criteria. Refer to Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) if you need more information on individual parameters. There is also a discussion of some of the more relevant parameters in 13.5.3, “Archive policy” on page 273. You may also change the list of subsystems to end if there are some jobs that interfere with the archive processing. At the end of the archive run, the subsystems are automatically re-started. Similarly, you can also specify a list of job queues to be held during the processing of the archive control group. As a final check, you may use the Start Archive using BRM (STRARCBRM) command with the *REPORT option to print a report of the available candidates for archiving within the control group. The report may not be printed if none of the files in your lists are dormant yet. In this case, you receive a message saying that the report did not contain any data.
13.6.1 Scheduling the archive
The easiest way to schedule running an archive control group is to use option 6 from the Work with Archive Control Groups display to schedule the archive job as shown in Figure 154.
Figure 154. Add Job Schedule Entry display
By default, option 6 invokes the standard OS/400 job scheduler. You can set the archive job to run daily, several times a week, weekly, or monthly. To specify daily, set the Frequency parameter to *WEEKLY and the Schedule Day parameter to *ALL. The other combinations are self-explanatory. We advise you to set the time for the run to start immediately after the backup runs have completed (a small CL sketch of chaining the archive behind the backup in this way follows).
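The following is a minimal CL sketch of such a driver program, not a definitive implementation: MAINARC matches the archive control group shown in Figure 153, MAINBKU is a hypothetical backup control group name, and it assumes your nightly backup is started with the Start Backup using BRM (STRBKUBRM) command. The STRARCBRM parameters mirror the command string shown in Figure 154.

PGM
  /* Nightly driver: run the backup control group, then start the    */
  /* archive as soon as the backup has completed.  MAINBKU and       */
  /* MAINARC are example control group names - substitute your own.  */

  /* Optional preview: print the archive candidate report first.     */
  STRARCBRM  CTLGRP(MAINARC) OPTION(*REPORT) SBMJOB(*NO)
  MONMSG     MSGID(CPF0000 BRM0000)

  /* Run the backup in this job and stop if it fails.                */
  STRBKUBRM  CTLGRP(MAINBKU) SBMJOB(*NO)
  MONMSG     MSGID(CPF0000 BRM0000) EXEC(GOTO CMDLBL(ERROR))

  /* Backup is complete: start the archive control group.            */
  STRARCBRM  CTLGRP(MAINARC) OPTION(*ARCHIVE) SBMJOB(*NO)
  MONMSG     MSGID(CPF0000 BRM0000) EXEC(GOTO CMDLBL(ERROR))
  RETURN

ERROR:      SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) +
              MSGDTA('Backup/archive driver ended abnormally') +
              MSGTYPE(*ESCAPE)
ENDPGM

You would then schedule a single CALL to this program (for example, with ADDJOBSCDE) instead of scheduling the backup and archive jobs separately.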
You can achieve this by choosing a time when you are confident that the backup run has shut the system down to a restricted state; Add Job Schedule Entry (ADDJOBSCDE) Type choices, press Enter. Job name . . . . . . . . . . . . > QBRMARC Name, *JOBD Command to run . . . . . . . . . > STRARCBRM CTLGRP(*ARCGRP) OPTION(*ARCHIVE) S BMJOB(*NO) ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ Frequency . . . . . . . . . . . > *WEEKLY *ONCE, *WEEKLY, *MONTHLY Schedule date, or . . . . . . . > *NONE Date, *CURRENT, *MONTHSTR... Schedule day . . . . . . . . . . > *ALL *NONE, *ALL, *MON, *TUE... + for more values Schedule time . . . . . . . . . > '00:01' Time, *CURRENT Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 277 therefore, the archive is initiated as soon as the backup has completed and the subsystems are brought back up. You may choose to simply queue the archive jobs behind the backup jobs in the same job queue. Be careful with this approach if you run multiple backup or archive jobs concurrently. You may even write a simple CL program that initiates the backups, and on completion, initiates the archive. This program may be scheduled to run within the OS/400 job scheduler instead of submitting each individual backup or archive job separately. Use the Work with Job Schedule Entries (WRKJOBSCDE) command or the Add Job Schedule Entry (ADDJOBSCDE) command to achieve this. You may want to use an alternative job scheduler (for example, one that has a dependency function built into the scheduling algorithm, perhaps using parent and child relationships). You can instruct BRMS/400 to use the scheduler of your choice with the CHGSCDBRM command. See Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for more details. 13.7 Using BRMS/400 for Dynamic Retrieval This topic describes the day-to-day activities that can help you control the BRMS/400 Dynamic Retrieval function to suit your needs. You can find details on what to do with retrieve policies in 12.7, “Retrieval considerations” on page 229. This section concentrates on how to retrieve your data dynamically. 13.7.1 Setting retrieve policies The key to retrieving your objects is the BRMS/400 retrieve operation. This operation is initiated once for each file member that has been opened and found to be saved with storage freed. The retrieve operation is guided by the retrieve policy within the BRMS/400 policy structure. This retrieve policy governs many attributes that control the way in which a retrieve is performed and it takes effect across the entire system. Obviously, not all applications want to retrieve their data in the same manner (or mode, as we call it). For additional flexibility, you may use the SETRTVBRM command to override the system wide retrieve policy settings at the job level. When you issue the SETRTVBRM command, the new settings apply for all retrieve operations initiated after that point and until the job ends or another SETRTVBRM command is issued. The SETRTVBRM command is explained further in 13.7.1.2, “Setting the retrieve controls for a particular job” on page 279. 13.7.1.1 Setting the retrieve policy The retrieve policy can be changed by using the WRKPCYBRM *RTV command. 
The Policy Administration (BRMPCY) menu also contains an option for the retrieve policy. The following options are supported for the change in the retrieve policy: • Media device: The device (or group of devices) used for the retrieve operation. *MEDCLS has the usual meaning and uses any device compatible with the format of the media on which the required data resides. This is called device pooling. This is the recommended value. You may choose a device name from a list of the currently available ones by pressing F4 with the cursor in this field. A display similar to the example in Figure 155 on page 278 is shown. 278 Backup Recovery and Media Services for OS/400 Figure 155. Selecting a device for your retrieve policy When media containing the file is located in a media library device, BRMS/400 limits its choice of *MEDCLS devices to those that are at the media library device location. • Retrieve confirmation: You may specify the retrieve confirmation (or retrieve mode) for batch and interactive jobs separately and independently. A full description of each of these parameters is found in 12.8, “Retrieval methods” on page 231. Further discussion on the best uses of each mode is found in 12.7, “Retrieval considerations” on page 229. • Retrieve authorization: The authority option tells BRMS/400 what level of authorization to a file is necessary before the accessing user can retrieve the file. This authorization level is checked and, if met, the retrieve operation is performed. If it is not met, the BRM1823 message is sent indicating that the file was not restored and that it cannot be used until restored. The unretrieved file is tracked by BRMS/400 to indicate that an *AUTHORITY failure occurred. Through the RSMRTVBRM command and the Resume Retrieve display, the user can easily identify files that were unable to be retrieved due to authority failures and can request that the retrieve operation for one or more of them is performed or cancelled. For many enterprises, users only have use or update authority to files. If the authority level is set at *UPD at open time, BRMS/400 automatically retrieves the archived file member for those users that have at least update authority to the file. In doing so, BRMS/400 allows the file to be retrieved without having to grant users who access the file *OBJEXIST authority to enable Dynamic Retrieval. Allowable values are a subset of those allowed on the OS/400 CHKOBJ command for the AUT parameter such as *OBJEXIST, *OBJMGT, *OBJOPR, *ADD, *DLT, *READ, *UPD, and *ALL. The default value is *OBJEXIST. Change Retrieve Policy SYSTEM04 Type choices, press Enter. Retrieve device . . . . . . . . . . . . *MEDCLS Name, F4 for list Retrieve confirmation: ....................................... Interactive operation . . . : Select Device : DELAY... Batch operation . . . . . . : : OTIFY Retrieve authorization . . . . : Type options, press Enter. : *UPD... End of tape option . . . . . . : 1=Select : UNLOAD Option . . . . . . . . . . . . : Opt Device : *FREE Data base member option . . . : _ *MEDCLS : , *OLD Allow object differences . . . : _ TAP02 : Object Retention . . . . . . . : _ TAP03 : : _ TAP04 : : _ TAP05 : : More... : : F9=Work with devices : : F12=Cancel : F3=Exit F4=Prompt F5=Ref : : F12=Cancel :.....................................: Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 279 You may decide to downgrade the user’s required authority level for a restore, for example, to *OBJOPR. 
This effectively grants a user limited existence or creation rights to certain objects under certain conditions when they previously were only able to use the object. There is an inverse relationship here. The lower you specify the required authority, the more authority you are effectively granting. • End of tape option: This is identical to the OS/400 standard save/restore end of tape option parameters. The default is *REWIND. If you are using an automated tape library or even a drive with an automatic cartridge loader, it increases the level of automation if you specify *UNLOAD, because this removes the current cartridge from the drive leaving it available for the next operation. • Option: This is the restore option. This option controls how BRMS/400 invokes the RSTOBJ command. The values supported are exactly the same as those for the RSTOBJ command's OPTION parameter such as *ALL, *NEW, *OLD, and *FREE. • Allow object differences: This parameter is used to indicate if object differences are to be tolerated during a restore operation. The values of *NONE and *ALL are supported and have exactly the same meaning as they have for the RSTOBJ command's ALWOBJDIF parameter. The default value is *NONE. • Object retention: This parameter was new in V3R6 for RISC systems and in V3R2 for CISC systems. The default for this parameter is to keep the retrieved object on the system for an indefinite period (*NOMAX). You can change the default and specify the number of days you want to keep the retrieved object on the system before it is deleted, provided it has not changed. At the end of the retention period, BRMS/400 maintenance job performs a save with storage freed to a temporary file and deletes the temporary file afterwards. If the object has changed, you have to follow the normal archive procedures to re-archive the updated object. It should be noted that all retrieve operations are constrained by the Storage Threshold (a high water mark for Auxiliary Storage Pool (ASP) utilization) as expressed through the System Service Tools (STRSST) ASP threshold. See Backup and Recovery - Advanced, SC41-4305, for more details. BRMS/400 does not restore a file if doing so causes the ASP's storage threshold to be exceeded. If the storage threshold were to be exceeded, messages are sent indicating that the file was not restored and that it cannot be used until restored. The unretrieved file is tracked by BRMS/400 to indicate that a *STORAGE failure occurred. Through the Resume Retrieve using BRM (RSMRTVBRM) command and the Resume Retrieve display, the user can easily identify files that were unable to be retrieved due to DASD space constraints and can request that the retrieve operation for one or more of them be performed or cancelled. 13.7.1.2 Setting the retrieve controls for a particular job You may want to override the values set by the Retrieve Policy (for the entire system) for a particular job. To do this, you must issue the Set Retrieve Controls for BRM (SETRTVBRM) command within the job that you require the override to take effect. 280 Backup Recovery and Media Services for OS/400 The controls you specify with the SETRTVBRM command remain in effect for your job until they are reset, for example, when the job ends, or otherwise is changed with another SETRTVBRM command. To see control values that are currently in effect, use the SETRTVBRM command. A display appears similar to the example in Figure 156. Figure 156. 
Set Retrieve Controls for BRM display The parameters shown are exactly the same as those found in the retrieve policy. You may review the descriptions listed in 12.8, “Retrieval methods” on page 231, for more information. The SETRTVBRM command can be inserted into the initial program for certain users or into the controlling CL program for your batch jobs. You may even build it into your application if you need to change retrieve modes depending on the functions being performed. You must be careful about allowing users to change their retrieval controls themselves. See 13.9.3, “Securing the retrieve policy” on page 287, for further discussion on this matter. 13.7.2 Responding to a retrieve operation This section contains details of the messages that you may see while witnessing a retrieve operation. It also includes any responses that you may need to give to inquiry messages received as part of that retrieve operation. 13.7.2.1 *VERIFY By default, a program message is displayed to the user as shown in Figure 157 through Figure 159. The first display is shown, and the Additional Message Information display is only shown if the user presses the Help key. The more knowledgeable user becomes familiar with the options and most often makes a decision from only the Display Program Messages display. Important additional information is included in the second display including object size and ASP utilization, which may influence the user's decision to initiate the retrieve immediately. Set Retrieve Controls for BRM (SETRTVBRM) Type choices, press Enter. Retrieve Device . . . . . . . *MEDCLS *SAME, *MEDCLS, TAP01... Retrieve Confirmation: Interactive Operation . . . *VERIFY *SAME, *RTVPCY, *VERIFY... Batch Operation . . . . . . *NOTIFY *SAME, *RTVPCY, *VERIFY... Retrieve Authorization . . . . *OBJEXIST *SAME, *RTVPCY, *OBJEXIST... End of Tape Option . . . . . . *REWIND *SAME, *RTVPCY, *REWIND... Option . . . . . . . . . . . . *ALL *SAME, *RTVPCY, *ALL, *NEW... Allow Object Differences . . . *NONE *SAME, *RTVPCY, *NONE, *ALL Object Retention . . . . . . . *NOMAX *SAME, *RTVPCY, 1-9999, *NOMAX Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 281 Figure 157. Retrieve *VERIFY messages (Part 1 of 3) Figure 158. Retrieve *VERIFY messages (Part 2 of 3) Figure 159. Retrieve *VERIFY messages (Part 3 of 3) Display Program Messages Job 016022/KRIS/KRISLUJ started on 10/31/00 at 11:33:06 in subsystem QINT Retrieving PAYMASTFIL in library PAYROLL. (C G I S) Type reply, press Enter. Reply . . . ______________________________________________________________ _______________________________________________________________________________ F3=Exit F12=Cancel Additional Message Information Message ID . . . . . . : BRM1822 Severity . . . . . . . : 99 Message type . . . . . : Inquiry Date sent . . . . . . : 11/02/00 Time sent . . . . . . : 18:14:51 Message . . . . : Retrieving PAYMASTFIL in library PAYROLL. (C G I S) Cause . . . . . : Access to suspended object PAYMASTFIL member PAYDEC94 in library PAYROLL type *FILE is requesting that the object be restored to the system. The size of the object is 51.798 megabytes. The object will be restored to ASP 1 which is currently 84.53 percent utilized. When complete the approximate ASP utilization will be 89.14 percent. Recovery . . . : Type a valid reply for the restore of the object. Possible choices for replying to message . . . . . . . . . . . . . . . : G -- Continue the operation. C -- Cancel the operation. 
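The text above notes that the SETRTVBRM command can be placed in a user's initial program or in the controlling CL program for batch jobs. The following is a minimal sketch of such an initial program only: the parameter keywords shown are illustrative assumptions (prompt SETRTVBRM with F4 to confirm the exact keyword names on your release), and the values simply mirror the retrieve controls shown in Figure 156.

PGM
  /* Set job-level retrieve controls before the user starts work.    */
  /* The keyword names below are assumptions for illustration only;  */
  /* the values correspond to the display in Figure 156: *MEDCLS     */
  /* device pooling, *VERIFY for interactive and *NOTIFY for batch   */
  /* retrieves, *OBJEXIST authority, and so on.                      */
  SETRTVBRM  DEV(*MEDCLS) RTVCFM(*VERIFY *NOTIFY) AUT(*OBJEXIST) +
             ENDOPT(*REWIND) OPTION(*ALL) ALWOBJDIF(*NONE) +
             RETAIN(*NOMAX)

  /* Do not block sign-on if the override cannot be set.             */
  MONMSG     MSGID(CPF0000 BRM0000)

  /* ... the rest of the user's normal initial program goes here ... */
ENDPGM

If end users are not authorized to the SETRTVBRM command directly, this program can be compiled with the run authority set to *OWNER under a profile that is authorized, as described in 13.9.3, “Securing the retrieve policy”.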
I -- Ignore the request and delay the retrieve operation. To resume a delayed retrieve operation at some later time use the RSMRTVBRM command. More... Additional Message Information Message ID . . . . . . : BRM1822 Severity . . . . . . . : 99 Message type . . . . . : Inquiry S -- Submit the retrieve operation for batch processing. The current job will receive an indication that the object's data was not found. Technical description . . . . . . . . : Access to a suspended object has caused BRMS to attempt to retrieve the object from archives. If the object is a physical file then only the requested member for that file will be restored. 282 Backup Recovery and Media Services for OS/400 You many want to customize the messages shown during retrieve processing or automate the action to be taken under certain circumstances. For additional information and an example of an alternative interface that can be coded by the system administrator to take the Dynamic Retrieval function one step further, see Complementing AS/400 Storage Management Using Hierarchical Storage Management, SG24-4450. The valid responses to the program message display are: G Go: The retrieve begins immediately and the application is suspended waiting for it to complete. You should not use the End request (System Request, option 2) function during this time. C Cancel: This option returns two main messages. BRM1823 is added to the job log indicating that the object was archived and the BRMS/400 retrieve request was cancelled. Also, the standard OS/400 CPF4102 message is sent to enable the application to respond. S Submit Job: The retrieve request is submitted to the job queue specified in the user's job description. Again, the same two messages are sent for the application to handle. A message is later sent to inform the user that the retrieve operation is complete. I Ignore and Delay: The retrieve request is added to the list of file members to be retrieved at a later time. Again, the same two messages are sent for the application to handle. A message is later sent to inform the user that the retrieve operation is complete. 13.7.2.2 *NOTIFY A status message is shown on the last line of the display that is shown in Figure 160. The job waits until restore is complete. As for the immediate (Go) option with *VERIFY mode, the user should not use the End job (System Request, option 2) function during this time. For *NOTIFY and the immediate (Go) option of *VERIFY, such errors as media errors and tape mount messages are reported to the system operator or the BRMS/400 notification message queue. If an error is severe and the operation is cancelled, all messages are added to the user's job log including those responded to by the system operator. The original OS/400 CPF4102 message is sent to the application. Figure 160. Retrieve *NOTIFY message 13.7.2.3 *SBMJOB The BRM1824 message is added to the user's job log to inform the user that the retrieve job is submitted. The open request (the application) is sent the CPF4102 message to handle as an error and enable the user to retry the function at a later time. Bottom Parameters or command ===> dsppfm PAYROLL/PAYMASTFIL F3=Exit F4=Prompt F5=Refresh F6=Create F9=Retrieve F10=Command entry F23=More options F24=More keys Retrieve object PAYMASTFIL for library PAYROLL in progress. Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 283 When the batch job is complete, the user is informed with the status of the restore. 
If the restore fails, the application has already reacted, so the user knows simply not to try that function again until the cause of the error is fixed. 13.7.2.4 *DELAY The BRM1823 message is added to the user's job log to inform the user that the retrieve job is submitted. The open request (the application) is sent the CPF4102 message to handle as an error and enable the user to retry the function at a later time. When the retrieval is resumed later, the user is informed with the status of the restore. As for the *SBMJOB option, if the restore fails, the application has already reacted, so the user knows simply not to try that function again until the cause of error is fixed. 13.7.3 Failed retrieve operations In general, if a retrieve operation fails due to exceptions other than *STORAGE or *SECURITY, it is the submitter's responsibility to retry the retrieve. There is no implication that the failed retrieve is converted to a *DELAY type retrieve. Frequently checking the BRMS/400 log is recommended. Use the *RTV option to check on retrieve operations. 13.7.4 Using the BRMS/400 log The primary method of auditing all BRMS/400 activity is through the BRMS/400 log. This is accessed through the Display Log using BRM (DSPLOGBRM) command. The DSPLOGBRM command supports the display of log entries that record the occurrence, success, and failure of BRMS/400 operations. The same log concept that is used throughout BRMS/400 can also be used to track retrieve operations. Log entries are categorized by type to show which operation caused the log entry. The entry type *RTV is supported for retrieve type operations and is used to record whether these are successful or unsuccessful. A user can search all BRMS/400 log entries using type *RTV to audit retrieve operations. Issue the DSPLOGBRM *RTV command to list all of the available log entries for retrieve operations. 13.8 Controlling retrieve operations using the RSMRTVBRM command A retrieve operation may fail or not even be started because of authority problems or ASP overflow. Other retrieve operations may be initiated later in the *DELAY retrieve mode. In any of these cases, the retrieve operation enters into a deferred state. Further control of these deferred retrieves is needed. The Resume Retrieve using BRM (RSMRTVBRM) command facilitates the recovery of delayed or otherwise unsuccessful retrieve operations. This command allows the administrator to work with or print a list of files for which retrieve operations are pending. This may be due to the following reasons: • Retrieve policy, user, or system operator specified that the retrieve operation be delayed. 284 Backup Recovery and Media Services for OS/400 • ASP to contain the retrieved file has exceeded its storage utilization limit. • User accessing the archived file did not have appropriate authority to perform a retrieve operation. See Backup Recovery and Media Services for OS/400 (part of the IBM Online Library SK2T-2171) for additional information on the RSMRTVBRM command. If you specified *YES for the Confirm Retrieval parameter and are assuming that this is not a batch operation, the confirm display in Figure 161 is shown. Figure 161. Confirm Retrieve display The Confirm Retrieve display shows a list of files for which retrieve operations were delayed or unsuccessful. It allows the user to select and retry, ignore, or cancel the retrieve operation for one or more of the files listed. Use option 1 (Confirm) to select and retry the retrieve operation. 
Use option 4 (Remove) to cancel the retrieve operation. Leave the option column blank to ignore the retrieve operation and leave it for execution at a later time. 13.8.1 Using the RSMRTVBRM command There are many different ways in which you may approach the use of the RSMRTVBRM command. You may schedule the command to be run every night automatically in batch. Or you may have an operator use the confirm display every twelve hours to initiate valid retrieve operations. 13.8.1.1 Messages sent after retrieves The RSMRTVBRM function sends a completion message to the initiator of the delayed retrieve. Messages are also sent to the requestor for batch submitted retrieves (*SBMJOB). In this section, we refer to retrieves in *DELAY mode or retrieves that were suspended due to potential storage threshold overflows or security violations. 13.8.1.2 Multiple retrieve requests for the same file member When a retrieve operation is in delayed mode, a flag associated with the file member in question is set. The RSMRTVBRM command scans the BRMS/400 records to find all files that are “marked” for delayed retrieve. If a file member is set for delayed retrieve and a second request to retrieve it (in delayed mode) is sent, BRMS/400 checks that file member, establishes that it is already set for retrieve, and adds the user profile name of the second requester to its “users to notify” list. A second request for a delayed retrieve does not cause a second entry in the RSMRTVBRM display even if it is generated by an authority or storage Confirm Retrieve SYSTEM04 Retrieve select . . . : *ALL Type options, press Enter. Press F16 to confirm all. 1=Confirm 4=Remove 5=Display Opt Library Object Member Volume Asp Size (M) User __ QUSRSYS MYFILE MYFILE V00001 01 1.05 BILL __ GLLIB LEDGER LEDGER V00888 02 225.55 TONY __ PAYLIB PAYROLL PAYROLL01 V00999 01 15.05 JIM Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 285 exception. When the file member is eventually retrieved, all requesters are notified. If the second retrieve request is actually a successful *NOTIFY, *VERIFY, or *SBMJOB operation, BRMS/400 marks that file member as having been retrieved and removes the file member from the list of file members to be resumed at a later time. The names of the users on delay list are notified that the successful retrieve operation has occurred. 13.8.1.3 RSMRTVBRM submitted to batch To increase automation, you may want to schedule the submission of the RSMRTVBRM command to occur regularly. We recommend that you choose a time when there is little use of the tape drives and other system resources such as processor and memory. The automatic submission of this command to batch implies that you do not use the confirm display. You must decide which retrieve operations you want to select for initiation at the time that you submit the command to the scheduler. You may want to consider the following suggestions for inclusion criteria: • Library *ALL: Unless you know exactly which libraries contain archived file members, we recommend that you use the *ALL option. If you require extra peace of mind by specifying library names, you need to submit a separate RSMRTVBRM command for each library. This may involve writing a simple CL program. • Auxiliary Storage Pool ID: You may have an application that you know resides in a specific ASP. By specifying this ASP number, you are ensuring that you do not automatically start any unusual or unexpected retrieve operations. 
In general, however, we recommend that you specify *ALL.
• Retrieve Select *DELAY: In most cases, we recommend that you only automatically initiate retrieve operations that were purposefully delayed. Using the *ALL parameter includes retrievals that have been halted because of storage or security considerations. In both of these cases, further operator or administrator action may be required before these retrievals can take place.
When you submit the RSMRTVBRM command in batch, we advise you to have someone check for other file members that may be waiting to be retrieved. These include file members that you did not include in your selection criteria in the batch submission of the command and any “failed” retrievals due to authority or storage exceptions. We do not recommend that you include these in the batch-submitted command. For example, if you regularly run the RSMRTVBRM *DELAY *RETRIEVE *NO command, there may be other file members waiting to be retrieved that are not retrieved, such as those that have *STORAGE or *SECURITY conditions. An interactive “double-check” need not be performed as regularly as the automatic scheduling of the batch command. In addition, further action may be needed before retrying the failed retrievals, such as ASP cleanup or security adjustments. You can use the *REPORT option of the RSMRTVBRM command to print any pending retrieves after the *DELAY retrieve operation has been run in batch.
Scheduling the batch submission may be performed for your entire system. If you choose to set the system-wide Retrieve Policy to use a retrieve mode of *DELAY, you are effectively queueing up most of the system retrieve requests until a suitable time for significant tape activity. This approach is useful for good
In each of these cases, you must have an administrator who is qualified to make such decisions and take appropriate action to run the RSMRTVBRM command with the confirm display. 13.9 Administration considerations Throughout this chapter, we referred to the setup of the BRMS/400 configuration and how to start with Dynamic Retrieval. Once BRMS/400 is set up and running with the Dynamic Retrieval function, you may consider a few methods of preserving the configuration that you have and controlling the use of the retrieve function. 13.9.1 Retrieve authority We have seen that the retrieve policy allows us to set the authority level requirements for a user to initiate a retrieve. This parameter (Retrieve authorization) is also found in the SETRTVBRM command. Therefore, any user with authority to the SETRTVBRM command may choose to alter the value of this retrieve authority parameter for their job. For example, if the user chooses to alter the parameter to allow users with *USE authority to an object to retrieve that object (by setting the parameter to *USE), that user may “create” objects (through retrieve) that they normally are only able to read. Chapter 13. Practical implementation of hierarchical storage management archiving capabilities 287 If the administrator wants to attempt to restrict the use of the retrieve function, the SETRTVBRM command itself must be restricted. This can be done in the usual way using standard OS/400 security access controls for the command object (type *CMD). However, this also restricts a user's ability to change other retrieve controls such as the retrieve mode to use. When you restrict authority to the SETRTVBRM command, you should also restrict authorities to the WRKPCYBRM command to inhibit users from changing the retrieve policy. See 13.9.3, “Securing the retrieve policy” on page 287, for more details. 13.9.2 Restore options When restoring a file member as part of a retrieve operation, you want to protect your system from importing incorrect versions of a file member or overwriting a recreated file member. Remember that the BRMS/400 retrieve function works with a name-orientated inventory. Therefore, renaming file members or deleting and recreating file members may cause unpredictable results. One way to reduce the chances of retrieving inappropriate data is to use the restore options supplied in the retrieve policy and the SETRTVBRM command. The Allow object differences parameter helps prevent the restoration of deleted and subsequently recreated file members. If you set the parameter to *NONE, the create time stamp and owner information are cross-checked before allowing the restore. Therefore, if a delayed retrieve is finally submitted after a member has been deleted and re-created, the restore operation cannot succeed. 13.9.3 Securing the retrieve policy Effective change management is an important part of every Information System department's quality process. As part of the tight administration controls that you may need to enforce across your entire backup, recovery, archive, and media management system, you need to secure the retrieve function. This may help you manage and control your disk capacity and your tape activity. You must consider the following points: • Secure the WRKPCYBRM command: To prevent users from adjusting the system-wide Retrieve Policy, you must use the standard OS/400 security facilities to restrict authority to the WRKPCYBRM command (object type *CMD). 
This rejects all attempts at using any part of the command by any unauthorized user. You may consider implementing this as part of a global restriction to all BRMS/400 commands. Remember to identify the key personnel that require access to these commands before you revoke the authority.
• Secure the SETRTVBRM command: Revoke the authority of all non-BRMS-administrative personnel to the SETRTVBRM command. Remember that this also removes their ability to alter other parameters such as the retrieve mode.
• Set up users with an initial program: Where specific users need retrieve parameters set differently from the Retrieve Policy, you may consider the following points:
– Include SETRTVBRM in the initial program for the required users.
– When compiling this initial program, set the run authority of the program to *OWNER. This adopts the authority of the program object's owner. You may change this parameter after the compile with the CHGPGM command.
– Compile the initial program under a user profile that has authority to the SETRTVBRM command. You may change the owner after the compile with the CHGOBJOWN command.
– As an extra measure, you may restrict the authority to the program object itself.
Appendix A. Summary of changes
This appendix summarizes the enhancements and changes that you should be aware of when migrating from release to release. The releases that are covered here include:
• V3R6 to V3R7
• V3R1 to V3R6
• V3R1 to V3R2
A.1 Summary of changes for V3R6 to V3R7
This section highlights the changes that you should be aware of when migrating from V3R6 to V3R7.
A.1.1 Backup/recovery enhancements
Some of the backup/recovery enhancements include:
• Console monitoring: A display has been added that requires you to enter a password to end console monitoring.
• New special value *LINK for integrated file system backups: A new special value, *LINK, has been added. *LINK saves all objects not in /QSYS.LIB and /QDLS directories. *LINK is now one of the default entries in *BKUGRP (the default backup control group), replacing the LINKLIST entry.
• Change to the Select Recovery Items display processing for objects: In previous releases, when you selected option 7 (Specify object) on the Select Recovery Items display, you were taken to the Specify Object display. In this release (V3R7), the Specify Object display has been replaced with native OS/400 restore commands, depending on the type of object that you have selected to restore.
• Change to the Select Recovery Items display for folders: Option 7 (Specify document) has been added to the Select Recovery Items display. When you use this option, you are taken to the OS/400 Restore Document Library Object (RSTDLO) command.
• Restore into folder field added to recovery displays: A new field, Restore into folder, has been added to the Recovery Policy and to the Restore Command Defaults display. This field allows you to specify the name of the folder in which the restored folders and documents are placed.
• Enhancement to control group copy: When you copy a control group (backup or archive) to create a new control group, the job queues to process and subsystems to process are now copied to the new control group from the control group that you are copying.
290 Backup Recovery and Media Services for OS/400 A.1.2 Media management enhancements Some of the media management enhancements include: • Automatic duplication of media: The DUPMEDBRM command is enhanced to allow specification of the special value, *SET, in the FROMVOL parameter. This special value can be used when copying a media set interactively and is required when copying a media set in batch. • Enhanced support for third-party media libraries: In previous releases, you can specify up to seven commands for third-party (*USRDFN) media libraries that you add to BRMS/400. Four commands have been added to the list of commands that you can specify for a third-party media library. – Allocate Device command – Deallocate Device command – Start of Media Movement command – End of Media Movement command • New option on Work with Media display: A new option 20 (Expire set) has been added to the Work with Media display. This option allows you to expire all members of a set rather that expiring each volume individually. A.1.3 Command enhancements Some of the command enhancements include: • New parameter for the STRRCYBRM command: – A new special value (*LNKLIST) has been added to the OPTION parameter in the STRRCYBRM command to allow you to specify an integrated file system list for recovery. The new special value works in conjunction with a new parameter, LIST, where you can specify the name of the list that you want to restore or all integrated file system lists. – The default special value for the OMITLIB parameter has been changed from *NONE to *DELETE. This change allows the user to choose whether to restore deleted libraries rather than assuming that they want to restore deleted libraries. • New parameter for the CHGSCDBRM command: A new special value (*IJS) has been added to the TYPE parameter in the CHGSCDBRM command to allow you to use OS/400 job scheduler. By using this new special value, you do not have to specify the commands (for example, the Add Job command) used in OS/400 job scheduler in the CHGSCDBRM command. • New choice for the Restore Object using BRM (RSTOBJBRM) command: A choice has been added to the OBJ parameter in the RSTOBJBRM command. You can now specify generic object names that you want to restore. • New parameters for the Restore DLO using BRM (RSTDLOBRM) command: Two new parameters have been added to the RSTDLOBRM command: Appendix A. Summary of changes 291 – Restore into folder (RSTFLR): Specifies the name of the folder in which the restored folders and documents to be restored are placed. The folder must exist on the system or when *ALL is specified on the Document library object prompt (DLO parameter, the saved folder must exist on the media. – New object name (RENAME): Specifies the new user-assigned name for the restored document. • Tape unit choices displayed on the INZMEDBRM and ADDMEDBRM commands: The choices of tape units are now displayed on the INZMEDBRM and ADDMEDBRM commands. The tape units that are displayed are those that are set up in BRMS/400. • New Dump BRM (DMPBRM) command: The Dump BRM (DMPBRM) command dumps a copy of BRMS/400 to assist in problem determination. You can specify various levels of detail and one or more jobs to dump. This command produces a file that is used in problem determination by your technical representative. Processing this command should be done in conjunction with this representative. 
• New special value in the Start Recovery using BRM (STRRCYBRM) command: A new special value, *NONE, has been added to the CTLGRP parameter in the STRRCYBRM command. If you select *NONE for this parameter, this indicates that you want to restore data that is not associated with any control group. • Enhancements to the CHKEXPBRM command: The CHKEXPBRM command has been enhanced to allow you to specify multiple control groups (up to 50) or *ALL in the CTLGRP parameter. You can now evaluate the amount of expired media available for multiple media class and location combinations. A.1.4 Reports Report enhancements include: • New Reports menu (BRMRPT) added: A new Reports menu has been added to the BRMS/400 main menu. The Reports menu contains commonly used reports. • Enhancements to the Recovery Volume Summary report: The Recovery Volume Summary report now includes duplicate volumes where appropriate. A.1.5 General Other general enhancements include: • New user profile QBRMS: A user profile called QBRMS is now created during installation for you on your system if it does not already exist. This user profile is used for internal BRMS/400 purposes and should not be deleted. This change provides the 292 Backup Recovery and Media Services for OS/400 BRMS/400 database with more security in that changes can only be made to the database through BRMS/400 functions or APIs unless the user has a higher assigned authority such as QSECOFR. • Change in default in Archive Policy: In the archive policy, the default value for the “Retain object description” field has been changed from *NO to *YES. • New field in System Policy: A new field, Tape exit trace, has been added to the system policy. You can indicate whether you want to record tape exit information for problem diagnosis by IBM support personnel. The default for this field is *NO and should remain *NO unless instructed otherwise by IBM support personnel. • New choices in control groups and save commands: Two new choices for the OBJDTL prompt have been added to the SAVLIBBRM and SAVOBJLBRM commands. You can now specify *OBJ for object information with no member information or *MBR with object and member information. The choice *MBR is the same as *YES. These two new choices have also been added to the Backup Control Group display, Retain object detail field, and the associated F13 (Change Defaults for Items Added display) key. • Change to Archive Control Group: The save-while-active feature used in the archive control group has been eliminated. • Optimum block size field added to the device displays: A new field has been added to the Add Device, Change Device, and Display Device displays. The field is Use optimum block size. The default value for this field is *NO. You should review the online help information for restrictions when you specify *YES. The Optimum block size field can be a performance enhancement for various device types, for example, device type 3590. A.2 Summary of changes between V3R1 and V3R6 The following list contains enhancements and changes that you should be aware of in upgrading from V3R1 to V3R6. A.2.1 Backup enhancements Some of the backup enhancements include: • File systems support: You can now use BRMS/400 to save and restore integrated file system objects. A new type of list, *LNK, has been added to allow you to enter integrated file system directories and objects that you want to save. 
Integrated file system backup support allows you to specify directories that are not only in your AS/400 network, but also on attached PCs or other types of systems. • Forecasting media required in backup operations: A new command, Check Expired Media (CHKEXPBRM), has been added to calculate the amount of media available for a save operation. The media that it calculates is compared to a number of expired volumes required in the media Appendix A. Summary of changes 293 policy or in the command. If the number calculated equals or is greater than the value in the media policy or the command, the operation continues. • Console monitoring: Option 4 (Start console monitor) has been added to the Backup menu. This option allows you to start or suspend the console monitor. When the console monitor is started, the console is in a monitored state. By entering the proper password, you can suspend console monitoring and enter system commands. After you are through entering commands, you can return to console monitoring. • Improvement in subsystems to end: The subsystems to end function has been changed to the subsystems to process. This allows you to start or end subsystems. You can end a subsystem at the beginning of control group A and not restart it until the end of control group B. • Improvement in job queues to hold: The job queues to hold function has been changed to the job queues to process. This allows you to hold or release job queues. You can hold a job queue at the beginning of control group A and not release it until the end of control group B. • Enhanced support for Work with Libraries to Omit from backups: You can now specify *ALL in the Type field to omit either a library or group of libraries. The *ALL choice indicates to omit specified libraries when any special value (such as *IBM) or a generic value is used in a backup control group or the SAVLIBBRM command that includes the specified libraries. A.2.2 Media management enhancements Some of the media management enhancements include: • Enhanced BRMS/400 networking: – Selective synchronization of media content information at the library level. You can specify in the Change Network Group display whether you want the local system to receive media information or media content information (library). – Ability to rename the local system. – Add network ID to change media function. – Add the selection “Shared inventory delay”: The system policy has a new field added that allows the customer to set the time to wait for journal entries to be sent over the network to update media files. The longer the time is, the fewer synchronization jobs are submitted. Similarly, the shorter the delay is, the more synchronization jobs are submitted. Use caution in shortening the delay, since depending on the amount of data that you are synchronizing, the performance of the network may be affected. – Network time synchronization: You can synchronize network times for subgroups within the network group (for example, AS/400 systems in Seattle and New York are synchronized to different times even though they are in the same network group). 294 Backup Recovery and Media Services for OS/400 • Automatic duplication of media: A new field has been added to the media policy field that indicates whether media is duplicated that is created under this policy. The DUPMEDBRM command is enhanced to specify *SEARCH that finds volumes that are marked for duplication. 
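Once volumes have been marked for duplication (option 18 on the Work with Media display, described below), the marked volumes can be duplicated in a single run. A minimal sketch, assuming *SEARCH is given on the same FROMVOL parameter that accepts *SET and leaving all other parameters at their prompted values:

  /* Duplicate every volume currently marked for duplication */
  DUPMEDBRM FROMVOL(*SEARCH)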
• Auto enroll media: You can now specify in the system policy whether to automatically enroll media used in BRMS/400 processing. For each device that you specify, you can determine whether to allow auto enroll of media. Note: Only non-library devices can auto enroll. • Logical end of volume: BRMS/400 now supports a concept called logical end of volume for devices that support it. The benefit that you can derive from this concept is that it allows you to maximize the use of your registered media, therefore, reducing media registration costs and media inventory requirements. The logical end of volume can be described as the last active file on the volume. Any time the special value *END is specified for the file sequence number for output to tape (for example, specifying *END in the SEQNBR parameter in the SAVLIBBRM command or specifying *YES in the Append to media field on a backup control group), for a BRMS/400 volume, BRMS/400 determines the logical end of the volume and redirects the output to start at that position. If all files on the volume are expired, the beginning of the volume is the starting position for the output operation. • Work with Media display: Two new options have been added to the Work with Media display. They are: – Option 18: Mark for duplication – Option 19: Remove mark for duplication • Pre-assignment of slot numbers: You can now pre-assign the slot number assignment when you do a verified move of media. A.2.3 Command enhancements Some of the command enhancements include: • New parameters for save commands: The following commands have had new parameters added. These parameters allow you to specify *NONE on the media policy and specify the parameters for the media policy in the command. You can also change the parameters of a specified media policy “on the fly” for the particular save operation that you are performing. – SAVDLOBRM – SAVLIBBRM – SAVOBJBRM – SAVOBJLBRM – SAVFLRLBRM – SAVMEDIBRM – SAVSYSBRM Appendix A. Summary of changes 295 • Enhanced INZBRM command: The INZBRM command has been enhanced with the *RESET and *DEVICE options. The *RESET option allows you to remove BRMS/400 information and re-initialize all BRMS/400 files. The re-initialization portion of *RESET is equivalent to processing the INZBRM command using the parameter OPTION(*DATA). This option is useful when moving BRMS/400 from one system to another. Use caution when using this option since all BRMS/400 files are re-initialized and data (such as media information) is lost. The *DEVICE option clears device and media library information and adds information for devices currently defined to the system. • Enhanced RSTOBJBRM command: The RSTOBJBRM command has been enhanced to restore *ALL object names. • Enhanced WRKMEDBRM command: The WRKMEDBRM command has been enhanced to add generic support in the volume parameter. A parameter has been added to select media by file group. • Enhanced DSPLOGBRM command: The DSPLOGBRM command has been changed. The SLTDATE parameter has been changed to the PERIOD parameter. This allows you to specify a date and time. Additionally, two new fields have been added: User ID and Message ID. • Enhance system selection on the WRKMEDIBRM command: • Slot numbers have been added to the ADDMEDBRM and the CHGMEDBRM commands. • Major changes to the SAVSAVFBRM command: – Added an ENDOPT parameter. – Allow multiple devices to be specified. – Multiple library selection. – Add a new media policy parameter to allow you to select values for output. 
– Allows consolidation of save files on the selected media. • Changes to the WRKOBJBRM and WRKFLRBRM commands: The default date range for the SLTDATE parameter has been changed from *CURRENT, *CURRENT to *BEGIN, *END. • Changes to the MOVMEDBRM command: Parameters have been added to the command to allow selection criteria to media that you are selecting to move. • Changes to the SETRTVBRM command: You can now specify how long objects that have been retrieved are kept on the system. After the object retention period has passed, the storage associated with the object is freed. • Changes to the STRBKUBRM command: You can now specify the sequence number and library from which you want to restart backup processing. 296 Backup Recovery and Media Services for OS/400 • New WRKLNKBRM command: A new command has been added to work with saved integrated file system information. You can add, remove, restore, and review information down to the object level. • New CHKEXPBRM command: A new command has been added to check the available media prior to a save. A.2.4 Reports Some of the new report enhancements include: • New report, Link Information report that is in the QP1ADI printer file. • New report, Object Link List = QP1AFS. A.2.5 General Enhanced recoverability feature requires the conversion of all programs to ILE. A.3 Summary of changes from V3R1 to V3R2 You should be aware of the following enhancements and changes when you upgrade from V3R1 to V3R2. A.3.1 Backup enhancements Some of the backup enhancements include: • File systems support: You can now use BRMS/400 to save and restore integrated file system objects. A new type of list, *LNK, has been added to allow you to enter integrated file system directories and objects that you want to save. Integrated file system backup support allows you to specify directories that are not only in your AS/400 network, but also on attached PCs or other types of systems. • Forecasting media required in backup operations: A new command, Check Expired Media (CHKEXPBRM), has been added to calculate the amount of media available for a save or tape operation. The media that it calculates is compared to a number of expired volumes required in the media policy or in the command. If the number calculated equals or is greater than the value in the media policy or the command, the operation continues. • Console monitoring: Option 4 (Start console monitor) has been added to the Backup menu. This option allows you to start or suspend the console monitor. When the console monitor is started, the console is in a monitored state. By entering the proper password, you can suspend console monitoring and enter system commands. After you are finished entering the commands, you can return to console monitoring. A display has also been added that requires you to enter a password to end console monitoring. • Improvement in subsystems to end: The subsystems to end function has been changed to the subsystems to process. This allows you to start or end subsystems. You can end a Appendix A. Summary of changes 297 subsystem at the beginning of control group A and not restart it until the end of control group B. • Improvement in job queues to hold: The job queues to hold function has been changed to the job queues to process. This allows you to hold or release job queues. You can hold a job queue at the beginning of control group A and not release it until the end of control group B. 
• Enhanced support for Work with Libraries to Omit from backups: You can now specify *ALL in the Type field to omit either a library or group of libraries. The *ALL choice indicates to omit specified libraries when any special value (such as *IBM) or a generic value is used in a backup control group or the SAVLIBBRM command that includes the specified libraries. A.3.2 Media management enhancements Some of the media management enhancements include: • Enhanced BRMS/400 networking: – Selective synchronization of media content information at the library level. You can specify in the Change Network Group display whether you want the local system to receive media information or media content information (library). – Ability to rename the local system. – Add network ID to change media function. – Add the selection “Shared inventory delay”: The system policy has a new field added that allows the customer to set the time to wait for journal entries to be sent over the network to update media files. The longer the time is, the fewer synchronization jobs are submitted. Similarly, the shorter the delay is, the more synchronization jobs are submitted. Use caution when shortening the delay, since depending on the amount of data that you are synchronizing, the performance of the network may be affected. – Network time synchronization: You can synchronize network times for subgroups within the network group (for example, AS/400 systems in Seattle and New York are synchronized to different times even though they are in the same network group). • Common media management chapter A new chapter has been added for common media management. Some material from the media management chapter as well as additional networking information has been used in this chapter. The chapter is designed to provide more detail for customers that want to network BRMS/400 systems and share media information among systems in a network. • Automatic duplication of media: A new field has been added to the media policy field that indicates whether media is duplicated that is created under this policy. The DUPMEDBRM command is enhanced to specify *SEARCH that finds volumes that are marked for duplication. The DUPMEDBRM command is enhanced to allow specification of the special value, *SET, in the FROMVOL parameter. This 298 Backup Recovery and Media Services for OS/400 special value can be used when copying a media set interactively and is required when copying a media set in batch. • Auto enroll media: You can now specify in the system policy whether to automatically enroll media used in BRMS/400 processing. For each device that you specify, you can determine whether to allow auto enroll of media. • Logical end of volume: BRMS/400 now supports a concept called logical end of volume. The benefit that you can derive from this concept is that it allows you to maximize the use of your registered media, therefore, reducing media registration costs and media inventory requirements. The logical end of volume can be described as the last active file on the volume. Any time the special value *END is specified for the file sequence number for output to tape (for example, specifying *END in the SEQNBR parameter in the SAVLIBBRM command or specifying *YES in the Append to media field on a backup control group), for a BRMS/400 volume, BRMS/400 determines the logical end of the volume and redirects the output to start at that position. If all files on the volume are expired, the beginning of the volume is the starting position for the output operation. 
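The restricted-save example in Appendix D uses exactly this technique. A minimal sketch (the library, device, and media policy names are placeholders):

  /* Start this save at the logical end of the current BRMS/400 volume */
  SAVLIBBRM LIB(PAYLIB) DEV(TAP01) MEDPCY(FULLSAVE) SEQNBR(*END)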
• Work with Media display: Two new options have been added to the Work with Media display. They are: – Option 18: Mark for duplication – Option 19: Remove mark for duplication • Pre-assignment of slot numbers: You can now pre-assign the slot number assignment when you do a verified move of media. • Enhanced support for third-party media libraries: In previous releases, you could specify up to seven commands for third-party (*USRDFN) media libraries that you add to BRMS/400. Four commands have been added to the list of commands that you can specify for a third-party media library: – Allocate Device command – Deallocate Device command – Start of Media Movement command – End of Media Movement command A.3.3 Command enhancements Some of the command enhancements include: • New parameters for save commands: The following commands have had new parameters added. These parameters allow you to specify *NONE on the media policy and specify the parameters for the media policy in the command. You can also change the parameters of a specified media policy “on the fly” for the particular save operation that you are performing. – SAVDLOBRM – SAVLIBBRM Appendix A. Summary of changes 299 – SAVOBJBRM – SAVOBJLBRM – SAVFLRLBRM – SAVMEDIBRM – SAVSYSBRM • Enhanced INZBRM command: The INZBRM command has been enhanced with *RESET and *DEVICE options. The *RESET option allows you to remove BRMS/400 information and re-initialize all BRMS/400 files. The re-initialization portion of *RESET is equivalent to processing the INZBRM command using the parameter OPTION(*DATA). This option is useful when moving BRMS/400 from one system to another. Use caution when using this option since all BRMS/400 files are re-initialized and data (such as media information) is lost. The *DEVICE option clears device and media library information and adds information for devices currently defined to the system. • Enhanced RSTOBJBRM command: The RSTOBJBRM command has been enhanced to restore *ALL object names. • Enhanced WRKMEDBRM command: The WRKMEDBRM command has been enhanced to add generic support in the volume parameter. A parameter has been added to select media by file group. • Enhanced DSPLOGBRM command: The DSPLOGBRM command has been changed. The SLTDATE parameter has been changed to the PERIOD parameter. This allows you to specify a date and time. Additionally, two new fields have been added: User ID and Message ID. • Enhanced system selection on the WRKMEDIBRM command. • Slot numbers have been added to the ADDMEDBRM and CHGMEDBRM commands. • Major changes to the SAVSAVFBRM command: – Added an ENDOPT parameter. – Allow multiple devices to be specified. – Multiple library selection. – Add a new media policy parameter to allow you to select values for output. – Allows consolidation of save files on the selected media. • Changes to the WRKOBJBRM and WRKFLRBRM commands: The default date range for the SLTDATE parameter has been changed from *CURRENT, *CURRENT to *BEGIN, *END. • Changes to the MOVMEDBRM command: Parameters have been added to the command to allow selection criteria to media that you are selecting to move. • Changes to the SETRTVBRM command: You can now specify how long objects that have been retrieved are kept on the system. After the object retention period has passed, the storage associated with the object is freed. 300 Backup Recovery and Media Services for OS/400 • Changes to the STRBKUBRM command: You can now specify the sequence number and library from which you want to restart backup processing. 
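As a hedged illustration of the INZBRM enhancement above (only the OPTION parameter is shown; everything else is assumed to take its default):

  /* Rebuild BRMS/400 device and media library entries from the        */
  /* devices currently defined to the system                           */
  INZBRM OPTION(*DEVICE)

  /* Remove BRMS/400 information and re-initialize all BRMS/400 files; */
  /* use with care, since media information is lost                    */
  INZBRM OPTION(*RESET)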
• New WRKLNKBRM command : A new command has been added to work with saved integrated file system information. You can add, remove, restore, and review information down to the object level. • New CHKEXPBRM command: A new command has been added to check the available media prior to a save. • New parameter for the STRRCYBRM command: – A new special value (*LNKLIST) has been added to the OPTION parameter in the STRRCYBRM command to allow you to specify an integrated file system list for recovery. The new special value works in conjunction with a new parameter, LIST, where you can specify the name of the list that you want to restore or all integrated file system lists. – The default special value for the OMITLIB parameter has been changed from *NONE to *DELETE. This change allows the user to choose whether to restore deleted libraries rather than assuming that they want to restore deleted libraries. • New parameter for the CHGSCDBRM command: A new special value (*IJS) has been added to the TYPE parameter in the CHGSCDBRM command to allow you to use OS/400 job scheduler. By using this new special value, you do not have to specify the commands (for example, the Add Job command) used in OS/400 job scheduler in the CHGSCDBRM command. A.3.4 Reports Some of the report enhancements include: • A new Reports menu has been added to the BRMS/400 main menu. The Reports menu contains commonly used reports. • New report, Link Information report that is in Q1APDI printer file. • New report, Object Link List = QP1AFS. A.3.5 General Other general enhancements include: • Enhanced recoverability feature requires the conversion of all programs to ILE. • A user profile called QBRMS is now created during installation for you on your system. This user profile is used for internal BRMS/400 purposes and should not be deleted. This change provides the BRMS/400 database more security in that changes can only be made to the database through BRMS/400 functions or APIs unless the user has a higher assigned authority such as QSECOFR. © Copyright IBM Corp. 1997, 2001 301 Appendix B. Save and restore tips for better performance There are several factors that affect your save and restore performance such as: • CPU model • Amount of main storage • Tape drive • Data transfer rate of the tape drive • Use of hardware compression or software compression • I/O processor • System bus speeds The performance suggestions mentioned here are aimed at the high performance tape drives such as the 3590. However, other tape drives can also benefit by ensuring that the correct parameters and options are used during your save and restore operation. B.1 Data compression Some of the older AS/400 tape I/O processors provide data compression in the data path hardware. This is referred to as hardware data compression (HDC). Hardware data compression increases the data rate and the tape capacity of the attached tape drive. For data interchange and compatibility with I/O processors that do not provide HDC, the HDC algorithm is implemented in the AS/400 system and is known as system data compression (SDC). SDC provides a performance increase for the entry level tape devices. For high-end tape devices, SDC is a severe limitation to performance. HDC and SDC are controlled by the DTACPR parameter of the SAVxxx commands on the AS/400 system. The choice of using HDC (the DTACPR option on the save commands) and compaction (the COMPACT option on save commands) is important when deciding between faster rates or fewer tapes used. 
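As a hedged sketch of where these options sit on a save command (the library and device names are placeholders, and the values shown are just one of the combinations recommended in the list that follows):

  /* Let the drive handle compression and compaction, for example on a */
  /* 3590 attached through a 6501 IOP                                   */
  SAVLIB LIB(*ALLUSR) DEV(TAP01) DTACPR(*DEV) COMPACT(*DEV)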
For each of the following tape devices, these are the options that you should have: • 6380 tape device: a. DTACPR(*YES) COMPACT(*NO) b. DTACPR(*YES) COMPACT(*NO) • 6385 tape device (using QIC5010 format cartridge): a. DTACPR(*NO) COMPACT(*DEV) b. DTACPR(*NO) COMPACT(*DEV) • 6385 tape device (using other format cartridges): a. DTACPR(*YES) COMPACT(*DEV) b. DTACPR(*YES) COMPACT(*DEV) • 6390 tape device: a. DTACPR(*NO) COMPACT(*DEV) b. DTACPR(*NO) COMPACT(*DEV) • 2644 IOP (using 3422, 3430, 3480, 3490 tape devices): a. DTACPR(*YES) COMPACT(*NO) b. DTACPR(*YES) COMPACT(*DEV) 302 Backup Recovery and Media Services for OS/400 • 6501 IOP (using 3490, 3570 or 3590 tape devices): a. DTACPR(*DEV) COMPACT(*DEV) b. DTACPR(*DEV) COMPACT(*DEV) Note: Option a provides the best performance. Option b uses fewer tapes. B.2 Load balancing To achieve maximum tape performance, the placement of the tape and the disk IOP is important. The tape IOP should be placed on the system bus that has fewer number of disk arms attached to it. This decreases the likelihood that the bus will be a performance constraint to system save and restore performance. Across all other system buses, the number of disk IOPs should be spread evenly, and the disk arms should be spread evenly across all IOPs. This advice is more helpful when the higher performing tape devices are being used such as the 3590. The data written to the tape must come from the disk drives. Therefore, the sum of disk operations and tape operations must be equal to or less than the system bus bandwidth. Two high performance tape devices on the same bus can create a performance bottleneck, where the tape drives compete with the system bus bandwidth. With the RISC systems, the I/O bus rate has increased from 8 MB/sec to 16 MB/sec or 24 MB/sec depending on the machine type. The higher bus bandwidth now allows you to attach the tape drive and the disk drives on the same bus. B.3 Using the USEOPTBLK parameter For V3R7 and beyond, setting the USEOPTBLK parameter to *YES on the save commands can significantly improve performance of the 3570 and 3590 tape devices. On CISC systems, the block size is 24 KB. With V3R6, this block size was increased to 28 KB. Beginning with V3R7, the block size is 256 KB. This allows better save and restore rates for high performance tape drives such as the 3590 and the 3570. B.4 Additional hints and tips For additional hints and tips, and to gain an understanding of how tape volumes are supported on the AS/400, see Tape and Diskette Device Programming, SC41-4716. © Copyright IBM Corp. 1997, 2001 303 Appendix C. Example LAN configuration for 3494 The following example configurations provide details on how you can configure a 3494 Automated Tape Library Data Server under a LAN environment. C.1 Line description Display Line Description Page 1 Line description . . . . . . . . . : LIND TRN3494 Option . . . . . . . . . . . . . . : OPTION *ALL Category of line . . . . . . . . . : *TRLAN Resource name . . . . . . . . . . : RSRCNAME LIN021 Online at IPL . . . . . . . . . . : ONLINE *YES Vary on wait . . . . . . . . . . . : VRYWAIT *NOWAIT Maximum controllers . . . . . . . : MAXCTL 40 Line speed . . . . . . . . . . . . : LINESPEED 16M Maximum frame size . . . . . . . . : MAXFRAME 1994 TRLAN manager logging level . . . : TRNLOGLVL *OFF Current logging level . . . . . : *OFF TRLAN manager mode . . . . . . . . : TRNMGRMODE *OBSERVING Log configuration changes . . . . : LOGCFGCHG *LOG Token-ring inform of beacon . . . 
: TRNINFBCN *YES Local adapter address . . . . . . : ADPTADR 4010A0036011 Exchange identifier . . . . . . . : EXCHID 056A0036 Early token release . . . . . . . : ELYTKNRLS *YES Error threshold level . . . . . . : THRESHOLD *OFF Text . . . . . . . . . . . . . . . : TEXT Token Ring Line Description for 3494 --------------Active Switched Controllers-------------- (No active switched controllers attached) SSAP list . . . . . . . . . . . . : SSAP ----Source Service Access Points----- ----Source Service Access Points----- SSAP Maximum Frame Type SSAP Maximum Frame Type 04 *MAXFRAME *SNA AA *MAXFRAME *NONSNA 12 *MAXFRAME *NONSNA C8 *MAXFRAME *HPR Link speed . . . . . . . . . . . . : LINKSPEED 16M Cost/connect time . . . . . . . . : COSTCNN 0 Cost/byte . . . . . . . . . . . . : COSTBYTE 0 Security for line . . . . . . . . : SECURITY *NONSECURE Propagation delay . . . . . . . . : PRPDLY *LAN User-defined 1 . . . . . . . . . . : USRDFN1 128 User-defined 2 . . . . . . . . . . : USRDFN2 128 User-defined 3 . . . . . . . . . . : USRDFN3 128 Autocreate controller . . . . . . : AUTOCRTCTL *YES Autodelete controller . . . . . . : AUTODLTCTL 1440 Recovery limits . . . . . . . . . : CMNRCYLMT Count limit . . . . . . . . . . : 2 Time interval . . . . . . . . . : 5 Functional address . . . . . . . . . . . . . . : FCNADR ---------------------Functional Addresses---------------------- (No functional addresses found) C.2 Controller description Display Controller Description Page 1 Controller description . . . . . . : CTLD CTL3494 Option . . . . . . . . . . . . . . : OPTION *ALL Category of controller . . . . . . : *APPC Link type . . . . . . . . . . . . : LINKTYPE *LAN Online at IPL . . . . . . . . . . : ONLINE *YES Character code . . . . . . . . . . : CODE *EBCDIC Maximum frame size . . . . . . . . : MAXFRAME 16393 Remote network identifier . . . . : RMTNETID *NETATR Remote control point . . . . . . . : RMTCPNAME LIBMGR Initial connection . . . . . . . . : INLCNN *DIAL Dial initiation . . . . . . . . . : DIALINIT *LINKTYPE Switched disconnect . . . . . . . : SWTDSC *YES Data link role . . . . . . . . . . : ROLE *NEG LAN remote adapter address . . . . : ADPTADR 400000003494 LAN DSAP . . . . . . . . . . . . . : DSAP 04 LAN SSAP . . . . . . . . . . . . . : SSAP 04 304 Backup Recovery and Media Services for OS/400 Autocreate device . . . . . . . . : AUTOCRTDEV *ALL Text . . . . . . . . . . . . . . . : TEXT 3494 LAN Controller Descript ion Switched line list . . . . . . . . : SWTLINLST --------------------Switched Lines--------------------- TRN3494 Attached devices . . . . . . . . . : DEV -------------------Attached Devices-------------------- DEV3494 APPN-capable . . . . . . . . . . . : APPN *YES APPN CP session support . . . . . : CPSSN *YES APPN/HPR capable . . . . . . . . . : HPR *YES APPN node type . . . . . . . . . . : NODETYPE *ENDNODE APPN transmission group number . . : TMSGRPNBR 1 APPN minimum switched status . . . : MINSWTSTS *VRYONPND Autodelete device . . . . . . . . : AUTODLTDEV 1440 User-defined 1 . . . . . . . . . . : USRDFN1 *LIND User-defined 2 . . . . . . . . . . : USRDFN2 *LIND User-defined 3 . . . . . . . . . . : USRDFN3 *LIND Model controller description . . . : MDLCTL *NO Control owner . . . . . . . . . . : CTLOWN *USER Disconnect timer . . . . . . . . . : DSCTMR Minimum connect timer . . . . . : 170 Disconnection delay timer . . . : 30 LAN frame retry . . . . . . . . . : LANFRMRTY *CALC LAN connection retry . . . . . . . : LANCNNRTY *CALC LAN response timer . . . . . . . . 
: LANRSPTMR *CALC LAN connection timer . . . . . . . : LANCNNTMR *CALC LAN acknowledgement timer . . . . : LANACKTMR *CALC LAN inactivity timer . . . . . . . : LANINACTMR *CALC LAN acknowledgement frequency . . : LANACKFRQ *CALC LAN max outstanding frames . . . . : LANMAXOUT *CALC LAN access priority . . . . . . . : LANACCPTY *CALC LAN window step . . . . . . . . . : LANWDWSTP *NONE Display Controller Description Page 2 Controller description . . . . . . : CTLD CTL3494 Option . . . . . . . . . . . . . . : OPTION *ALL Category of controller . . . . . . : *APPC Recovery limits . . . . . . . . . : CMNRCYLMT Count limit . . . . . . . . . . : 2 Time interval . . . . . . . . . : 5 C.3 Device description Device description . . . . . . . . : DEVD DEV3494 Option . . . . . . . . . . . . . . : OPTION *ALL Category of device . . . . . . . . : *APPC Automatically created . . . . . . : NO Remote location . . . . . . . . . : RMTLOCNAME LIBMGR Online at IPL . . . . . . . . . . : ONLINE *YES Local location . . . . . . . . . . : LCLLOCNAME *NETATR Remote network identifier . . . . : RMTNETID *NETATR Attached controller . . . . . . . : CTL CTL3494 Message queue . . . . . . . . . . : MSGQ QSYSOPR Library . . . . . . . . . . . . : *LIBL Local location address . . . . . . : LOCADR 00 APPN-capable . . . . . . . . . . . : APPN *YES Single session . . . . . . . . . . : SNGSSN Single session capable . . . . . : *NO Text . . . . . . . . . . . . . . . : TEXT 3494 APPC Device Description Mode . . . . . . . . . . . . . . . : MODE -------------------------Mode-------------------------- *NETATR © Copyright IBM Corp. 1997, 2001 305 Appendix D. Performing restricted saves to a 3494 on CISC You may use the example program shown in Figure 162 on page 306 through Figure 164 on page 308 to perform restricted saves on a 3494 tape library using BRMS/400 with CISC AS/400 systems only. You do not need to use this circumvention for RISC AS/400 systems. The BRMSAVSYS program manages the subsystem and message queues to perform the save. It calls the SETBRMVOL program to create a tape category and add volumes to it. 306 Backup Recovery and Media Services for OS/400 Figure 162. Program for restricted save processing with the 3494 /**************************************************************/ /* PROGRAM: BRMSAVSYS */ /**************************************************************/ PGM PARM(&DEV) DCL VAR(&DEV) TYPE(*CHAR) LEN(10) /**************************************************************/ /* Define the media policy to be used. */ /**************************************************************/ DCL VAR(&MEDPCY) TYPE(*CHAR) LEN(10) VALUE(SAVSYS) /**************************************************************/ /* Make sure that QSYSOPR does not interrupt the save. 
*/ /**************************************************************/ CHGMSGQ MSGQ(QSYSOPR) DLVRY(*HOLD) MONMSG MSGID(CPF0000) /**************************************************************/ /* Call program SETBRMVOL to create a category and add */ /* volumes to it in the order BRM will expect them */ /**************************************************************/ CALL PGM(SETBRMVOL) PARM(&DEV ADD) MONMSG MSGID(CPF0000) /**************************************************************/ /* Rename QMLDSBS so that BRMS/400 will not be able to */ /* start it after the SAVSYSBRM command completes */ /**************************************************************/ ENDMLD OPTION(*IMMED) DLYJOB DLY(60) RNMOBJ OBJ(QMLD/QMLDSBS) OBJTYPE(*SBSD) NEWOBJ(QMLDSBSTMP) MONMSG MSGID(CPF0000) /**************************************************************/ /* Perform the system save */ /**************************************************************/ SAVSYSBRM DEV(&DEV) MEDPCY(&MEDPCY) ENDOPT(*LEAVE) + CLEAR(*ALL) STRCTLSBS(*NO) MONMSG MSGID(CPF0000) /**************************************************************/ /* Save the following libraries while still restricted. */ /**************************************************************/ SAVLIBBRM LIB(QGPL) DEV(&DEV) MEDPCY(&MEDPCY) + SAVTYPE(*FULL) ENDOPT(*LEAVE) SEQNBR(*END) MONMSG MSGID(CPF0000) SAVLIBBRM LIB(QUSRSYS) DEV(&DEV) MEDPCY(&MEDPCY) + SAVTYPE(*FULL) ENDOPT(*LEAVE) SEQNBR(*END) MONMSG MSGID(CPF0000) SAVLIBBRM LIB(QUSRMLD QMLD) DEV(&DEV) MEDPCY(&MEDPCY) + SAVTYPE(*FULL) SEQNBR(*END) MONMSG MSGID(CPF0000) /**************************************************************/ /* Rename the subsystem */ /**************************************************************/ RNMOBJ OBJ(QMLD/QMLDSBSTMP) OBJTYPE(*SBSD) NEWOBJ(QMLDSBS) MONMSG MSGID(CPF0000) /**************************************************************/ /* Call SETBRMVOL to change the volume category to */ /* *NOSHARE and delete the catagory we created. */ /**************************************************************/ INZMLD DLYJOB DLY(360) CALL PGM(SETBRMVOL) PARM(&DEV RMV) MONMSG MSGID(CPF0000) ENDPGM /**************************************************************/ Appendix D. Performing restricted saves to a 3494 on CISC 307 Figure 163. CL program to create a tape category and add volumes (Part 1 of 2) /**************************************************************/ /* Program SETBRMVOL */ /**************************************************************/ PGM PARM(&DEV &OPTION) DCLF FILE(QTEMP/BRMVOL) /* File of expired volumes */ DCL VAR(&DEV) TYPE(*CHAR) LEN(10) /* Tape device name. */ DCL VAR(&OPTION) TYPE(*CHAR) LEN(3) /* ADD or RMV */ DCL VAR(&COUNT) TYPE(*DEC) LEN(2 0) VALUE(1) DCL VAR(&SYSNAME) TYPE(*CHAR) LEN(10) DCL VAR(&LOCNAME) TYPE(*CHAR) LEN(10) DCL VAR(&QRYSLT) TYPE(*CHAR) LEN(75) DCL VAR(&CTG) TYPE(*CHAR) LEN(10) /* Category name. 
*/ /**************************************************************/ /* Define the number of volumes to add to this category */ /**************************************************************/ DCL VAR(&VOLCNT) TYPE(*DEC) LEN(2 0) VALUE(7) /**************************************************************/ /* Define the name of the 3494 */ /**************************************************************/ DCL VAR(&MLDNAME) TYPE(*CHAR) LEN(10) VALUE('MLD01 ') /**************************************************************/ /* Define the media class */ /**************************************************************/ DCL VAR(&MEDCLS) TYPE(*CHAR) LEN(10) VALUE('SAVSYS ') /**************************************************************/ /* Use the SYSTEM name as the temporary category */ /* BRMS/400 uses the LOCAL LOCATION name on the volume record */ /**************************************************************/ RTVNETA SYSNAME(&SYSNAME) LCLLOCNAME(&LOCNAME) /**************************************************************/ /* Option ADD processing. */ /**************************************************************/ IF COND(&OPTION *EQ 'ADD') THEN(DO) /* Create the category. */ CHGVAR VAR(&CTG) VALUE(&SYSNAME) CRTCTGMLD MLD(&MLDNAME) CTG(&CTG) MONMSG MSGID(CPF0000) /* Determine the volumes BRMS/400 will use */ CHGVAR VAR(&QRYSLT) VALUE('TMCEND *EQ Y *AND TMCCLS *EQ ' + || &MEDCLS || ' *AND TMCVLT *EQ ' || &MLDNAME + ' *AND TMSYID *EQ ' || &LOCNAME) OPNQRYF FILE((QUSRBRM/QA1A1MM)) QRYSLT(&QRYSLT) + KEYFLD(*FILE) OPNID(QA1AMMTMP) CPYFRMQRYF FROMOPNID(QA1AMMTMP) TOFILE(QTEMP/BRMVOL) + MBROPT(*REPLACE) CRTFILE(*YES) ENDDO /**************************************************************/ /* Option RMV processing. */ /**************************************************************/ IF COND(&OPTION *EQ 'RMV') THEN(DO) CHGVAR VAR(&CTG) VALUE('*NOSHARE') ENDDO LOOP: RCVF /* Change the category the volume is assigned to. */ CHGMEDMLD MLD(&MLDNAME) VOL(&TMCVSR) CTG(&CTG) CHGVAR VAR(&COUNT) VALUE(&COUNT + 1) IF COND(&COUNT *NE &VOLCNT) THEN(GOTO CMDLBL(LOOP)) 308 Backup Recovery and Media Services for OS/400 Figure 164. CL program to create a tape category and add volumes (Part 2 of 2) /**************************************************************/ /* Option ADD processing */ /**************************************************************/ IF COND(&OPTION *EQ 'ADD') THEN(DO) /* Vary on the tape drive to be used. */ VRYCFG CFGOBJ(&DEV) CFGTYPE(*DEV) STATUS(*ON) RANGE(*OBJ) MONMSG MSGID(CPF0000) /* Mount the category. */ MNTCTGMLD DEV(&DEV) CTG(&CTG) ENDDO /**************************************************************/ /* Option RMV processing */ /**************************************************************/ ELSE CMD(DO) /* Make sure the tape drive is varied on. */ VRYCFG CFGOBJ(&DEV) CFGTYPE(*DEV) STATUS(*ON) RANGE(*OBJ) MONMSG MSGID(CPF0000) /* Demount the category. */ DMTCTGMLD DEV(&DEV) MONMSG MSGID(CPF0000) /* Delete the category. */ DLTCTGMLD MLD(&MLDNAME) CTG(&CTG) MONMSG MSGID(CPF0000) /* Vary off the tape drive */ VRYCFG CFGOBJ(&DEV) CFGTYPE(*DEV) STATUS(*OFF) RANGE(*OBJ) MONMSG MSGID(CPF0000) ENDDO ENDPGM /**************************************************************/ © Copyright IBM Corp. 1997, 2001 309 Appendix E. Media missing from the 3494 You may use the example program shown in Figure 165 and Figure 166 on page 310 to identify which tapes are found in BRMS/400 but that are not shown in Library Manager. 
The program first performs the DSPTAPCTG command to an output file and then it calls the MLDQRY query to compare this file with the media management file (QA1AMM) in the QUSRBRM library. This is only one example of a query that might be run to identify volume mismatches between BRMS/400 and the tape library. Figure 165. Example program to identify volume mismatches Note: We used MLD01 (highlighted in bold in Figure 165) in our example for the media library device. You need to substitute this parameter with the media library device name that you have on your system. /**************************************************************/ /* PROGRAM: MLDPGM */ /**************************************************************/ PGM DCL VAR(&MSGDTA) TYPE(*CHAR) LEN(256) DCL VAR(&MSGF) TYPE(*CHAR) LEN(10) DCL VAR(&MSGFLIB) TYPE(*CHAR) LEN(10) DCL VAR(&MSGID) TYPE(*CHAR) LEN(7) MONMSG MSGID(CPF0000 MCH0000) EXEC(GOTO CMDLBL(ERROR)) /**************************************************************/ /* File Override */ /**************************************************************/ OVRPRTF FILE(QPPGMDMP) HOLD(*YES) /**************************************************************/ /* Display MLD information and run the query */ /**************************************************************/ DLTF FILE(QTEMP/TEMP1) MONMSG MSGID(CPF0000) DSPTAPCTG MLD(MLD01) CGY(*SHARE400) OUTPUT(*OUTFILE) + OUTFILE(QTEMP/TEMP1) RUNQRY QRY(QGPL/MLDQRY) RETURN /**************************************************************/ /* Default error handler */ /**************************************************************/ ERROR: RCVMSG MSGTYPE(*EXCP) MSGDTA(&MSGDTA) MSGID(&MSGID) + MSGF(&MSGF) MSGFLIB(&MSGFLIB) SNDPGMMSG MSSID(&MSGID) MSGF(&MSGFLIB/&MSGF) + MSGDTA(&MSGDTA) MSGTYPE(*ESCAPE) MONMSG MSGID(CPF0000 MCH0000) CHGJOB LOG(4 0 *SECLVL) LOGCLPGM(*YES) DSPJOBLOG OUTPUT(*PRINT) ENDPGM /**************************************************************/ 310 Backup Recovery and Media Services for OS/400 Figure 166. Example query to identify volume mismatches 5716QU1 V3R7M0 960517 IBM Query/400 SYSTEM01 7/06/00 20:12:27 Page 1 Query . . . . . . . . . . . . . . . . . MLDQRY Library . . . . . . . . . . . . . . . QGPL Query text . . . . . . . . . . . . . . Query CCSID . . . . . . . . . . . . . . 65535 Query language id . . . . . . . . . . . ENU Query country id . . . . . . . . . . . US Collating sequence . . . . . . . . . . Hexadecimal Processing options Use rounding . . . . . . . . . . . . Yes (default) Ignore decimal data errors . . . . . No (default) Ignore substitution warnings . . . . Yes Use collating for all compares . . . Yes Selected files ID File Library Member Record Format T01 QA1AMM QUSRBRM *FIRST QA1AMMR T02 TEMP1 QTEMP *FIRST QTAVOUTF Join tests Type of join . . . . . . . . . . . . . 
Unmatched records with primary file Field Test Field T02.RDMID EQ T01.TMCVSR Select record tests AND/OR Field Test Value (Field, Numbers, or 'Characters') T01.TMCVLT EQ 'MLD01' AND T01.TMCEND EQ 'Y' Ordering of selected fields Field Sort Ascending/ Break Field Name Priority Descending Level Text T01.TMSYID 10 A SYSTEM ID T01.TMCVSR 20 A VOLUME SERIAL NUMBER T01.TMCCLS MEDIA CLASS T01.TMCEND EXPIRED INDICATOR Report column formatting and summary functions Summary functions: 1-Total, 2-Average, 3-Minimum, 4-Maximum, 5-Count Overrides Field Summary Column Dec Null Dec Numeric Name Functions Spacing Column Headings Len Pos Cap Len Pos Editing T01.TMSYID 0 System 8 ID T01.TMCVSR 5 2 Volume 6 Serial T01.TMCCLS 2 Media 10 Class T01.TMCEND 2 Expired 1 Indicator Selected output attributes Output type . . . . . . . . . . . . . . Printer Form of output . . . . . . . . . . . . Detail Line wrapping . . . . . . . . . . . . . No © Copyright IBM Corp. 1997, 2001 311 Appendix F. The QUSRBRM library There can often be a requirement for information that is not readily available from within the BRMS/400 reports or displays. This requirement may be the sort of thing that a simple query or SQL can solve. With that in mind, we have included a list of the BRMS/400 files in the QUSRBRM library with a brief description of the more important ones. To see the field names, run the command: DSPFFD QUSRBRM/filename OUTPUT(*print) The following files are currently valid for both V3R2M0 and V3R6M0: Object Type Attribute Description ------------------------------------------------------------------------ QJ1ACM *JRN BRMS Journal QA1AAF *FILE PF-DTA Archive Folder Lists QA1AAG *FILE PF-DTA Archive Control Groups Entries QA1AAM *FILE PF-DTA Archive Control Group Attributes QA1AAO *FILE PF-DTA Archive Object List Entries QA1AAQ *FILE PF-DTA Archive Spool File List Entries QA1AARC *FILE PF-DTA QA1AARF *FILE PF-DTA QA1AAX *FILE PF-DTA Archive Policy Attributes QA1ABMM *FILE PF-DTA Media - Media Class and Volume QA1ABX *FILE PF-DTA Backup Policy Attributes QA1ACA *FILE PF-DTA Calendar Names QA1ACG *FILE PF-DTA Backup Control Group Entries QA1ACM *FILE PF-DTA Backup Control Group Attributes QA1ACN *FILE PF-DTA Container Status QA1ACT *FILE PF-DTA Container Classes QA1ADI *FILE PF-DTA QA1ADV *FILE PF-DTA Device Name Entries QA1ADXR *FILE PF-DTA Media Information by Volume QA1AFD *FILE PF-DTA Folder Save History QA1AFL *FILE PF-DTA Folder List Names QA1AFS *FILE PF-DTA QA1AHS *FILE PF-DTA Save History Details QA1AIA *FILE PF-DTA Subsystems to check before IPL QA1AJH *FILE PF-DTA Job Queues to hold QA1ALB *FILE PF-DTA QA1ALG *FILE PF-DTA BRM Log Information QA1ALI *FILE PF-DTA QA1ALM *FILE PF-DTA Backup and Archive lists QA1ALQ *FILE PF-DTA Backup Spool File List Entries QA1ALR *FILE PF-DTA Save History - Save Statistics by Library QA1AMB *FILE PF-DTA Save History - Save Statistics by Object QA1AMD *FILE PF-DTA Media Library Device Entries QA1AME *FILE PF-DTA Media Policy Attributes QA1AMM *FILE PF-DTA Media Inventory QA1AMP *FILE PF-DTA Move Policies QA1AMT *FILE PF-DTA Media Class Attributes QA1AMV *FILE PF-DTA QA1ANET *FILE PF-DTA Network Save History QA1AOB *FILE PF-DTA Backup Object List Entries QA1AOD *FILE PF-DTA Object Detail QA1AOL *FILE PF-DTA Libraries to omit from backups QA1AOQ *FILE PF-DTA Save History - Spool Files QA1AOT *FILE PF-DTA Object types QA1ARA *FILE PF-DTA Recovery Activities QA1ARC *FILE PF-DTA Recovery Contacts QA1ARCY *FILE PF-DTA Recovery Records (STRRCYBRM) QA1ARMT *FILE PF-DTA Network Group QA1ARP 
*FILE PF-DTA Retrieve Policy QA1ARX *FILE PF-DTA Recovery Policy QA1ASE *FILE PF-DTA Subsystems to end QA1ASG *FILE PF-DTA Signoff Exceptions QA1ASL *FILE PF-DTA Storage Locations QA1ASP *FILE PF-DTA System Policy QA1ASRC *FILE PF-SRC Printer File Source QA1AVER *FILE PF-DTA Version Control QA1AWAG *FILE PF-DTA QA1AWCG *FILE PF-DTA QA1AWSF *FILE PF-DTA QA1A1CA *FILE PF-DTA Calendar Entries 312 Backup Recovery and Media Services for OS/400 QA1A1MP *FILE PF-DTA Move Policy Entries QA1A1RA *FILE PF-DTA Recovery Activity Information QA1A1RMT *FILE PF-DTA Network Group Entries QA1A2NET *FILE PF-DTA Network - Save History QA1A2RCY *FILE PF-DTA QA1A8ARF *FILE PF-DTA QA1A9ARF *FILE PF-DTA © Copyright IBM Corp. 1997, 2001 313 Appendix G. QUSRBRM/QA1AMM file specifications: V3R1 Field Length Position Field Text TMCDAT 6 1 Date Stamp TMCTIM 6 7 Time Stamp TNCVSR 6 13 Volume Serial Number TMSYID 8 19 System ID TMCCLS 10 27 Media Class TMCEXP 7 37 Expiration Date TMCCRT 7 44 Creation Date TMCCTM 6 51 Creation Time Stamp TMCEND 1 57 Expired Indicator TMCBTH 6 58 First Use Date TMCVLT 10 64 Vault Name TMCOAD 7 74 Out of Area Date TMCONT 10 81 Container ID TMMPOL 10 91 Move Policy TMCFRM 1 101 Move Confirmation TMCJOB 10 102 Job Name TMUSER 10 112 Last User ID TNJNBR 6 122 Job Number TMCPRV 10 128 Previous Location Name TMCNXT 10 138 Next Location name TMCSMD 7 148 Scheduled Move Date TMCLAB 1 155 Label Print Pending TMUSES 4 156 Total Uses of the media TMTMRD 4 160 Temp Read Error Total TMTMWR 4 164 Temp Write Error Total TMBYRD 8 168 Bytes Read Total TMBYWR 8 176 Bytes Written Total TMBYCR 6 184 Current Bytes Written TMBYMX 6 190 Maximum Bytes Written TMRTRY 4 196 Read Retry Error Total TMWTRY 4 200 Write Retry Error total TMCCLN 4 204 Last Clean Date TMCUSE 4 208 Uses Since Last Cleaning TMVSEC 4 212 Secure Volume TMVSEQ 4 216 Volume Sequence TMTVOL 4 220 Total Volumes in set TMBVOL 6 224 Beginning Volume TMSLOT 6 230 Slot Number TMDUPL 1 236 Duplication Type TMLDEV 10 237 Last Device TMCPSL 6 247 Previous Slot TMTEXT 50 253 Volume Description TMFILG 10 303 File Group TMGTYP 10 313 Media Group Type TMGRID 13 323 Media Group Identification TMRGSY 8 336 Registered System TMRGCK 20 344 TMCNET 8 364 TMCCAB 20 372 TMCPSF 10 392 TMCBKY 10 402 TMCSMC 1 412 TMCDVT 4 413 TMCJRC 1 417 314 Backup Recovery and Media Services for OS/400 © Copyright IBM Corp. 1997, 2001 315 Appendix H. 
QUSRBRM/QA1AMM file specifications: V3R2/V3R6/V3R7 Field Length Position Field Text TMCDAT 6 1 Date Stamp TMCTIM 6 7 Time Stamp TNCVSR 6 13 Volume Serial Number TMSYID 8 19 System ID TMCCLS 10 27 Media Class TMCEXP 7 37 Expiration Date TMCCRT 7 44 Creation Date TMCCTM 6 51 Creation Time Stamp TMCEND 1 57 Expired Indicator TMCBTH 6 58 First Use Date TMCVLT 10 64 Vault Name TMCOAD 7 74 Out of Area Date TMCONT 10 81 Container ID TMMPOL 10 91 Move Policy TMCFRM 1 101 Move Confirmation TMCJOB 10 102 Job Name TMUSER 10 112 Last User ID TNJNBR 6 122 Job Number TMCPRV 10 128 Previous Location Name TMCNXT 10 138 Next Location name TMCSMD 7 148 Scheduled Move Date TMCLAB 1 155 Label Print Pending TMUSES 4 156 Total Uses of the media TMTMRD 4 160 Temp Read Error Total TMTMWR 4 164 Temp Write Error Total TMBYRD 8 168 Bytes Read Total TMBYWR 8 176 Bytes Written Total TMBYCR 6 184 Current Bytes Written TMBYMX 6 190 Maximum Bytes Written TMRTRY 4 196 Read Retry Error Total TMWTRY 4 200 Write Retry Error total TMCCLN 4 204 Last Clean Date TMCUSE 4 208 Uses Since Last Cleaning TMVSEC 4 212 Secure Volume TMVSEQ 4 216 Volume Sequence TMTVOL 4 220 Total Volumes in set TMBVOL 6 224 Beginning Volume TMSLOT 6 230 Slot Number TMDUPL 1 236 Duplication Type TMLDEV 10 237 Last Device TMCPSL 6 247 Previous Slot TMTEXT 50 253 Volume Description TMFILG 10 303 File Group TMGTYP 10 313 Media Group Type TMGRID 13 323 Media Group Identification TMRGSY 8 336 Registered System TMRGCK 20 344 TMCNET 8 364 TMCCAB 20 372 TMCPSF 10 392 TMCBKY 10 402 TMCSMC 1 412 TMCDVT 4 413 TMCJRC 1 417 TMVNXT 10 418 TMVSMD 7 428 TMASLT 6 435 316 Backup Recovery and Media Services for OS/400 © Copyright IBM Corp. 1997, 2001 317 Appendix I. Special notices This publication is intended to help customers, business partners, and IBM Availability Services personnel understand the important considerations of planning and managing Backup Recovery and Media Services for OS/400 (BRMS/400) in a single system environment or in a networked environment. The information in this publication is not intended as the specification of any programming interfaces that are provided by OS/400 and BRMS/400. See the PUBLICATIONS section of the IBM Programming Announcement for OS/400 and BRMS/400 for more information about what publications are considered to be product documentation. References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service. Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels. IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785. 
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries: e (logo)® IBM  Redbooks Redbooks Logo 318 Backup Recovery and Media Services for OS/400 The following terms are trademarks of other companies: Tivoli, Manage. Anything. Anywhere.,The Power To Manage., Anything. Anywhere.,TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United States, other countries, or both. In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer - Tivoli A/S. C-bus is a trademark of Corollary, Inc. in the United States and/or other countries. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries. PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license. ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries. UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others. AFP AIX APL2 Application System/400 APPN AS/400 AS/400e AT CT Current DB/2 ES/9000 Magstar MQSeries Netfinity OS/2 OS/400 RMF RPG/400 RS/6000 S/370 SP SQL/400 System/390 XT 400 Lotus Approach Lotus Notes Domino Notes Tivoli TME © Copyright IBM Corp. 1997, 2001 319 Appendix J. Related publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook. 
J.1 IBM Redbooks For information on ordering these ITSO publications see www.redbooks.ibm.com • iSeries Handbook, GA19-5486 • Setting Up and Implementing ADSTAR Distributed Storage Manager/400, GG24-4460 • Complementing AS/400 Storage Management Using Hierarchical Storage Management, SG24-4450 • Using ADSM to Back Up Lotus Notes, SG24-4534 • Upgrading to Advanced Series PowerPC AS, SG24-4600 J.2 IBM Redbooks collections Redbooks are also available on the following CD-ROMs. Click the CD-ROMs button at ibm.com/redbooks for information about all the CD-ROMs offered, updates and formats. J.3 Other resources These publications are also relevent as further information sources. • IBM 3494 Users Guide: Media Library Device Driver for Application System/400, GC35-0153 • IBM 9427 210 and 211 Operator’s Guide, SA26-7108 • Work Management, SC21-8078 • Software Installation Guide, SC41-3120 (V3R2) • Central Site Distribution, SC41-3308 • Automated Tape Library Planning and Management, SC41-5309 CD-ROM Title Collection Kit Number IBM System/390 Redbooks Collection SK2T-2177 IBM Networking Redbooks Collection SK2T-6022 IBM Transaction Processing and Data Management Redbooks Collection SK2T-8038 IBM Lotus Redbooks Collection SK2T-8039 Tivoli Redbooks Collection SK2T-8044 IBM AS/400 Redbooks Collection SK2T-2849 IBM Netfinity Hardware and Software Redbooks Collection SK2T-8046 IBM RS/6000 Redbooks Collection SK2T-8043 IBM Application Development Redbooks Collection SK2T-8037 IBM Enterprise Storage and Systems Management Solutions SK3T-3694 320 Backup Recovery and Media Services for OS/400 The following publications are available from the iSeries Information Center in soft copy only: • System Operation, SC41-4203 • AS/400 APPN Support, SC41-5407 • Integrated File System Introduction, SC41-5711 The following publications are available only on CD-ROM. For more information, please visit the iSeries Information Center at: http://publib.boulder.ibm.com/pubs/html/as400/ic2924/info/index.htm • AS/400 Road Map for Changing to PowerPC Technology, SA41-4150 • OS/400 NetWare Integration Support, SC41-3124 • Automated Tape Library Planning Guide, SC41-3309 (V3R7) • SNA Distribution Services, SC41-3410 • OS/400 Integration of Lotus Notes, SC41-3431 • Software Installation Guide, SC41-4120 (V3R7) • Integrating AS/400 with Novell NetWare, SC41-4124 • Backup and Recovery - Basic, SC41-4304 (V3R7) • Backup and Recovery - Advanced, SC41-4305 (V3R7) • Distributed Data Management, SC41-4307 • Data Management, SC41-4710 • Tape and Diskette Device Programming, SC41-4716 The following publications are available in the IBM Online Library CD-ROM SK2T-2171: • LAN Server/400 Administration • Backup Recovery and Media Services for OS/400 J.4 Referenced Web sites These Web sites are also relevant as further information sources: • AS/400 Internet home page: http://www.as400.ibm.com • AS/400 Service home page: http://as400service.rochester.ibm.com © Copyright IBM Corp. 1997, 2001 321 How to get IBM Redbooks This section explains how both customers and IBM employees can find out about IBM Redbooks, redpieces, and CD-ROMs. A form for ordering books and CD-ROMs by fax or e-mail is also provided. • Redbooks Web Site ibm.com/redbooks Search for, view, download, or order hardcopy/CD-ROM Redbooks from the Redbooks Web site. Also read redpieces and download additional materials (code samples or diskette/CD-ROM images) from this Redbooks site. 
Redpieces are Redbooks in progress; not all Redbooks become redpieces and sometimes just a few chapters will be published this way. The intent is to get the information out much quicker than the formal publishing process allows. • E-mail Orders Send orders by e-mail including information from the IBM Redbooks fax order form to: • Telephone Orders • Fax Orders This information was current at the time of publication, but is continually subject to change. The latest information may be found at the Redbooks Web site. In United States or Canada Outside North America e-mail address pubscan@us.ibm.com Contact information is in the “How to Order” section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl United States (toll free) Canada (toll free) Outside North America 1-800-879-2755 1-800-IBM-4YOU Country coordinator phone number is in the “How to Order” section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl United States (toll free) Canada Outside North America 1-800-445-9269 1-403-267-4455 Fax phone number is in the “How to Order” section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl IBM employees may register for information on workshops, residencies, and Redbooks by accessing the IBM Intranet Web site at http://w3.itso.ibm.com/ and clicking the ITSO Mailing List button. Look in the Materials repository for workshops, presentations, papers, and Web pages developed and written by the ITSO technical professionals; click the Additional Materials button. Employees may access MyNews at http://w3.ibm.com/ for redbook, residency, and workshop announcements. IBM Intranet for Employees 322 Backup Recovery and Media Services for OS/400 IBM Redbooks fax order form Please send me the following: We accept American Express, Diners, Eurocard, Master Card, and Visa. Payment by credit card not available in all countries. Signature mandatory for credit card payment. 
Title Order Number Quantity First name Last name Company Address City Postal code Telephone number Telefax number VAT number Invoice to customer number Country Credit card number Credit card expiration date Card issued to Signature Index 323 Index Symbols #2644 151 #5211 151 #5213 151 #5219 151 #5220 151 #5226 151 #5228 151 #5229 151 *ALLDLO 39 *ARCPCY 31 *BKUGRP 38 *BKUPCY 31 *DATA 14, 167, 181 *DELAY 232, 241, 283 *DELAY, retrieve 283 *DEVICE 14, 167, 181, 199 *EJECT 189 *EXIT 76 *IJS 93 *INSERT 187 *LIB 74 *LINK 39, 137, 140, 208 *LNK 128 *LOAD 76 *NETATR 101 *NONE 233 *NOTIFY 232, 241, 282 *PUBLIC authority 101 *REGMED 14 *RENAME 112 *REPORT, recovery report 192 *RESUME, recovery 192 *SAVCFG 39 *SAVSECDTA 39 *SBMJOB 232, 282 *SBMJOB, retrieve 282 *SYNCLIB 74 *SYSDFN 75 *SYSGEN 169 *SYSGRP 38 *SYSPCY 31 *USRIDX 13 *USRQ 13 *USRSPC 13 *VERIFY 231, 239, 241, 280 *VERIFY mode, retrieve 239 *VERIFY, retrieve 280 *VOLID 170 Numerics 3490 178 3494 151 *NOSHARE 152 *SHARE 152 3490 178 3590 178 ADDMLD 173 alternate IPL 153 categories 152 connection considerations 151 demount volume 162 DLTDEVMLB 173 ENDMLD 173 enhancements in RISC 177 LAN 151 LAN configuration 171 Library Manager 160, 197 managing multiple devices 177 media library device driver (MLDD) 157 mount cartridge in convenience I/O station 162 mount volume 161 multiple systems attachment 152 ONLINE(*NO) 169 ONLINE(*YES) 169 reset mode 164 restricted state 189 RMVMLD 173 RS232 151 RS232 configuration 170 selecting devices 179 stand-alone mode 160, 197 storage cell 188 system attachment 151 vary on devices 179 3494 LAN configuration 171 3570 alternate IPL 156 convenience slot 155 FULIC 156 Generate cartridge identifier field 170 import/export 155 priority slot 155 random mode 155 3570 tape library 155 3590 178 alternate IPL 154 FULIC 154 generate cartridge identifier 170 MULIC 154 3590 tape library 154 9427 alternate IPL 154 sequential mode 154 9427 tape library 153 A abnormal termination, control group 47, 54 access paths 35 archive 237, 274 rebuild times 237 active data sets 264 Add Job Schedule Entry 277 Add Media Information 45 Add Media Library Device 165 Add Media Library Media to BRM 43 324 Backup Recovery and Media Services for OS/400 Add Media to BRM 20, 43 Add Object List, archive 269 Add Server Storage Link 146 adding a tape cartridge 189 adding media information 47 adding systems to network group 104 ADDJOBSCDE 277 ADDMEDBRM 20, 43, 184 ADDMEDIBRM 45 ADDMLD 165, 173 ADDMLMBRM 43, 184 ADDNWSSTGL 146 ADDTAPCTG 189 ADSM 123 ADSTAR Distributed Storage Management 123 aged file member, archive 223 allocate resources 174 allocated, status 175 allocation status 175 allocated 175 deallocated 175 stand-alone 175 unprotected 175 Allow object differences parameter 200, 279 alternate IPL 3494 153 3570 156 3590 154 9427 154 alternate IPL device 198 ALWOBJDIF 200, 279 retrieval 287 APARs II08968 68 II09313 127 II09475 209 II09724 177 II09772 11, 209 APPC 101 Append to media parameter 35, 71 APPEND(*NO) 35 APPEND(*YES) 28, 71 APPEND(*YES), control group 186 appending to media rules 44 application design 245 considerations 245 application swapping 224 applications, interactive 238 applying journal changes 235 APPN 101 archive 2 access paths 274 application design 245 application suitability 245 apply journal changes 235 by BRMS/400 217 candidates 275 considerations 217 database file members 223 date for *FILE objects 273 default weekly activity 274 direct tape I/O 260 double save 221 implementation 261 inactivity limit 273 include criteria 
275 key factors 240 logical files 225 media class 270 media policy 271 member level 245 move policy 270 normal aged file member 223 object types 261 pseudo record-level 254 saving access paths 237 source file members 224 storage freed 274 tape duplication 227 tape duplication process 227 archive candidates active data sets 264 data file with transaction based members 262 data file with transaction based records 262 digital libraries 263 disused test data 261 historical data 263 mixed characteristic data files 263 outfile data 261 random access data files 262 source files 263 statistically random access data files 262 temporary data files 261 archive control group 275 archive lists 217, 268 archive policy 273 archive structure, BRMS/400 267 archive with Dynamic Retrieval 268 archive, without storage freed 219 STG(*FREE), archive 219 archived files 243 archived libraries 244 archived object 243 AS/400 home page 209 ASP overflow 242 attributes, control group 42, 69 authority *PUBLIC 101 LAN Server/400 128 retrieve 286 authority for IFS, save 127 auto enroll media 22, 64, 213 automated tape library 3494 151, 165 3570 155 3590 154 9427 153 allocate resources 174 change device description, CISC 173 change device description, RISC 174 Index 325 change media library device description 172 implementation 165 managing cartridges 183 mode 211 non-barcode 170, 183 random mode 187 software support 157 automatic configuration 167 auxiliary storage pool (ASP) 242 B backup 2 *ALLDLO 39 *ALLPROD 39, 66 *ALLUSR 39, 66 *LINK 39 *SAVCFG 39 *SAVSECDTA 39 access paths 237 control groups 36 media information 70 save files 58 STG(*FREE) 218 storage free 218 backup activity 37 Backup Activity Report 59 backup devices 69 backup lists 37 backup policy 31 append to media 35 default weekly activity 34 incremental type 34 omit libraries 35 save access paths 35 batch applications, retrieve mode 239 BLK001 cartridge identifiers 184 BRMS/400 archive 217 archive lists 268 archive object list 269 archive, double save 221 archived file members 243 archived files 243 archived object move 243 archived object rename 243 auto enroll media 213 console monitor 87 control groups 3, 36 daily checks 58, 64 daily tasks 57 disable, for upgrades 211 disaster recovery for IFS 147 DUPMEDBRM 228 Dynamic Retrieval 230, 268, 277 enable, after upgrades 211 enhancements 3, 7, 17, 57 hierarchical storage management 217, 267 IFS 137 implementing 17 initialize environment 14 installation 11 installation planning 7 Integrated PC Server 134 introduction 1 job scheduling 91 joining networks 118 LAN server configuration 127 license information 13, 212 limitations 136 logs 58, 59 maintenance 51, 58 managing IFS saves 140 media 8 media management 63 media naming convention 8 media policy 28 menus and commands 15 move policy 26 network communications 101 networking 97 new tapes 188 operational tasks 57 overview 1 planning upgrades 209 policies 3, 31 preparation for upgrades 209 recovery 191 register media 14 reports 15, 51, 59 restoring a storage space 145 restoring data 55 restoring IFS directories 142 restoring V3R1 IFS data 146 resynchronizing after an upgrade 215 save-while-active 72 saving IFS 137 saving recovery data 193 saving user information 211 saving user information for upgrades 211 saving V3R1 IFS data 146 scratch pool 8 setting up for IFS 138 setting up with Dynamic Retrieval for archive 268 spooled files 84 stopping for upgrades 211 structure, archive retrieve 267 updating device information 181 upgrades to PowerPC AS 209 user 
information, save 211 verify network 120 C CA/400 file transfer 233 retrieval 233 cartridge identifier 184 cartridges export 189 importing 187 managing 183 missing 188 326 Backup Recovery and Media Services for OS/400 category, *INSERT 187 central point, recovery 194 Centralized Media Audit Report 59 Change Job Scheduler 93 Change License Information 13, 212 Change Network Attribute 101 Change Presentation Controls 33 Change Presentation Controls display 33 changing media library device descriptions 172 changing the device description CISC 173 RISC 174 changing the system name 113 check availability, media 57 Check Expired Media for BRM 50, 63 checking the BRMS/400 network 120 CHGLICINF 13 CHGNETA 101 CHGSCDBRM 93, 277 CHKEXPBRM 50, 63 CISC, changing the device description 173 Client Access/400 146 commands ADDJOBSCDE 277 ADDMEDBRM 20, 43 ADDMEDIBRM 45 ADDMLD 165 ADDMLMBRM 43, 188 ADDNWSSTGL 146 ADDPFM 234 ADDTAPCTG 189 CHGLICINF 13 CHGNETA 101 CHGOBJAUD 234 CHGOBJD 234 CHGOBJOWN 234 CHGPF 234 CHGPFM 234 CHGSCDBRM 93, 277 CHKEXPBRM 50, 63 CHKOBJ 234 CPYMEDIBRM 97, 108 CRTDEVMLB 166, 168 CRTNWSD 125 DLTF 235 DSPBKUBRM 55 DSPDEVSTS 166 DSPHDWRSC 167 DSPLANMLB 171 DSPLOG 235 DSPLOGBRM 50, 58, 283 DSPNETA 101 DSPOBJD 234 DSPTAP 46 DSPTAPSTS 159 DUPMEDBRM 60, 228 EDTRBDAP 35 EXTMEDIBRM 45 INZBRM 14, 106, 108, 167, 181 INZMEDBRM 45 INZMLD 165 INZTAP 45 MONSWABRM 73, 79 MOVMEDBRM 28, 52, 60, 189 MOVOBJ 234 PRTMEDBRM 60 RCLSTG 235 RMVJRNCHG 235 RMVM 234 RMVTAPCTG 189 RNMM 234 RNMOBJ 234 RSMRTVBRM 232, 278, 283 RST 124 RSTAUT 220 RSTOBJBRM 219 SAV 124 SAVMEDIBRM 58, 193 SAVSAVFBRM 36, 58, 193 SAVSYSBRM 69 SBMNWSCMD 130 SETRTVBRM 231, 277 STRARCBRM 276 STRBKUBRM 50 STREXPBRM 26 STRJRNAP 234 STRJRNPF 234 STRMNTBRM 26, 52, 232 STRRCYBRM 52, 191 STRSST 174 VFYMOVBRM 61 WRKCFGL 101 WRKCFGSTS 166 WRKCLSBRM 24 WRKCLSBRM *MED 270 WRKCTLGBRM 37, 66 WRKCTLGRP *ARC 275 WRKDEVBRM 21 WRKJOBSCDE 277 WRKLBRM 128 WRKLNKBRM 140 WRKLOCBRM 19 WRKMEDBRM 43 WRKMEDIBRM 53, 70 WRKMLBBRM 23 WRKMLBSTS 159, 176 WRKMLMBRM 159, 183 WRKNWSSSN 133 WRKOBJBRM 205, 219 WRKPCYBRM 214 WRKPCYBRM *MED 273 WRKPCYBRM *MOV 270 WRKPCYBRM *RTV 277 WRKREGINF 12 WRKSPLFBRM 85 WRKTAPCTG 183 completing recovery 201 concept of Dynamic Retrieval 225 considerations, Dynamic Retrieval 286 console monitor 87 Index 327 security 90 Console Monitor function 49 container classes 25 control group 2, 3, 18, 36 abnormal termination 47, 54 archive 275 attributes 42, 69, 214, 217 backup devices 69 backup lists 84 backup media information 70 copying 119 end option 186 list entry 128 media information 47, 54 media policy 69 omit libraries 68 recovering a specific one 192 resume recovery, *RESUME 192 save V3R1 IFS data 146 set up 64 signoff interactive users 69 spooled files 84 SWA message queue 73 text 72 user exits 67 with APPEND(*YES) 186 controlling retrieve operations 283 convenience I/O station, 3494 162 convenience slot 155, 156, 187 Copy Media Information using BRM 97, 108 copy, media management files 108 copying control group 119 CPYF 233 CPYF, retrieve 233 CPYMEDIBRM 97, 108 files copied 108 Create Device Description for Media Library 168 Create Device Media Library 166 creating a LAN configuration, 3494 171 creating a network server description 125 creating an RS232 configuration, 3494 170 CRTDEVMLB 166, 168 CRTDUPOBJ 233 CRTDUPOBJ, retrieve 233 CRTNWSD 125 CRTxxxPGM 234 current allocation 176 D daily checks, BRMS/400 58, 64 data file utility (DFU) 233 data file with transaction based members 262 data file with transaction based records 262 database 
horizontal splitting 250, 253 normalization 246 splitting 248 vertical splitting 248 database file members, archive 223 Database Open 233 date, archive 273 DDM 25, 105 deallocated status 175 default control groups 38 default job wait time 169 Default weekly activity parameter 274 deleting a media library device driver 216 demount volume, 3494 162 design considerations, application 245 device location 181 device pooling 277 device selection, 3494 179 DFU (data file utility) 233 retrieval 233 digital libraries 263 direct tape input/output 260 directory information 144 disabling BRMS/400 for upgrades 211 disaster recovery for IFS 147 disk space, double save 222 Display Backup Plan 55 Display Hardware Resource 167 Display LAN Media Library 171 Display Log using BRM 50 Display Network Attribute 101 Display Tape Status 159 disused test data 261 DLTDEVMLB 173 dormancy level 264, 273 dormant 223 dormant file member 223 double save disk space 222 journal entries 222 object locks 222 performance 222 DSPBKUBRM 55 DSPHDWRSC 167 DSPLOGBRM 50, 58, 140, 207 DSPLOGBRM *RTV 283 DSPNETA 101 DSPPFM 233 DSPPFM, retrieve 233 DSPTAP 46 DSPTAPSTS 159, 166 Duplicate Media using BRM 60, 228 duplication, archive tapes 227 DUPMEDBRM 60, 228 Dynamic Retrieval 230, 261 BRMS/400 230, 268, 277 concept 225 considerations 286 implementation 264 methods 231 object types 261 E Edit Rebuild Access Path 35 EDTRBDAP 35 End Media Library Device 165 328 Backup Recovery and Media Services for OS/400 end option *LEAVE 186 *REWIND 186 *UNLOAD 186 End option (ENDOPT) setting 185, 278, 279 ENDMLD 165, 173 ENDOPT setting 185 ENDOPT(*LEAVE) 186 ENDOPT(*REWIND) 186 ENDOPT(*UNLOAD) 186 Enhanced Upgrade Assistant tool 211 enhancements, BRMS/400 7, 17, 57 enroll media, automatically 64 enrolled tapes 188 re-activating 188 enrolling media 43 enrolling new tapes 188 exclusive locks 72 exit program 12 expire volumes 20 exporting cartridges 189 EXTMEDIBRM 45 extracting media information 45 F failed retrieve operations 283 file member, renaming 243 file renaming 243 file size 237 file size, retrieve performance 237 files copied, CPYMEDIBRM 108 files, media management 108 fragmentation 237 FSIOP 39, 123 FULIC 154, 195 functional enhancements 3, 7, 17, 57 G Generate cartridge identifiers field 170 grant authority, LAN Server/400 130 H hardware data compression (HDC) 301 hardware resource manager (HRM) 201 HDC (hardware data compression) 301 hierarchical storage management 217 planning 217 using BRMS/400 267 hints, save and restore 147 historical data 263 home location 19 horizontal splitting, database 250, 253 HRM (hardware resource manager) 201 HSM (hierarchical storage management) 217 I IFS 137 authority 127 BRMS/400 limitations 136 disaster recovery 147 hints 147 LAN Server 123 managing saves using BRMS/400 140 memory requirements 127 overview 123 planning for saving directories 124 recovery 208 restore 123 restore directories with BRMS/400 142 save recommendations 136 save strategy 136 saving 123 saving using BRMS/400 137 setting up BRMS/400 138 V3R1 data 146 IFS directories 142 II08968 68 II09313 127 II09475 209 II09724 177 II09772 11 II09992 209 IMP001 cartridge identifiers 184 implementation, save-while-active 73 implementing an archive 261 implementing BRMS/400 17 implementing Dynamic Retrieval 264 import/export, 3570 155 importing cartridges 187 inactivity limit 273 include criteria 275 Initialize BRM 14, 199 Initialize Media Library Device 165 Initialize Tape 45 initializing media 43 initializing the BRM environment 14 installation 
planning 7 installing BRMS/400 11 integrated file system 123 Integrated PC Server 39, 123 integration Lotus Notes 123 Novell NetWare 123 interactive applications 238 invoking retrieval 233 INZBRM 14, 167, 181, 199 INZBRM *NETSYS 106, 108 INZBRM *NETTIME 110 INZMEDBRM 45 INZMLD 165 IPL device, alternate 198 J job priority 185 job queue processing 40 job queues to hold 297 job queues to process 297 job scheduler, OS/400 93 Index 329 job scheduling 91 join logical files 226 performance 226 joining BRMS/400 networks 118 journal changes 233 apply 235 retrieve 233 journal entries, double save 222 K key factors archive 240 retrieval 240 L LAN configuration, 3494 171 LAN server 211 LAN Server/400 administrator 128 authority 128 disaster recovery 147 grant authority 130 group profile 131 performance 135 save and restore options 148 saving AS/400 configuration information 127 saving the LAN server configuration 127 saving user data 127 special authority 131 storage space 125 structure 125 user data 127 users with password *NONE 131 libraries, synchronization 76 library considerations, BRMS/400 65 Library Manager for the 3494 160 library mode, save 211 library renaming 244 LIC 195 license information 13, 212 Licensed Internal Code 195 recovery 197 LINKLIST 137, 140, 208 list entry 128 local location 113 location, secure 101 locations *HOME 18 *VAULT 18 device 181 home 19 media policy 30 multiple devices, 3494 178 storage 18 locks, exclusive 72 logical end of volume 294, 298 logical file 237 logical files 225 archive 225 retrieve 225 logical, multiple physical files 237 Lotus Notes integration 123, 211 M management, hierarchical storage 217 managing cartridges 183 managing IFS saves 140 managing multiple devices, 3494 177 MAXDEVTIME 169, 175, 185 maximum device wait time 169, 175, 185 media 2 archive policy 273 auto enroll 64 check availability 57 classes 24 devices 21 enrolling 43 initializing 43 management 63 media management files 108 movement 194 naming convention 8 register 14 scratch pool 8 security 45 slotting 20 types 10 media and storage extensions (MSE) 12, 230 media class 18, 24 for archive 270 media devices 21 media history information 47, 54 media information 58, 70, 105 receive 105 remove 112 media libraries, third-party 23 media library device driver (MLDD) 12, 157, 216 media maintenance 193 media management 63 media management files 108 media movement 60, 193, 194 Media Movement Report 63 media policy 18, 28 archive 271 defaults 29 media security 45 media slotting 20 media synchronization 120 media types 10 member level archive 245 member level changes 236 memory requirements, IFS 127 menus and commands, BRMS/400 15 message queue, SWA 73, 80 methods to retrieve 231 missing cartridges 188 mixed characteristic data files 263 MLDD (media library device driver) 12, 157, 216 deleting libraries 216 330 Backup Recovery and Media Services for OS/400 MLDD commands ADDMLD 173 DLTDEVMLB 173 ENDMLD 173 RMVMLD 173 upgrade to PowerPC AS 213 mode blank 101 QBRM 101 monitor job, ending example 133 monitor save while active for BRM 73 MONSWABRM 79 mount volume, 3494 161 move archived object 243 Move Media using BRM 28, 184 move policy 26 archive 270 movement report 63 moving media 60 MOVMEDBRM 28, 52, 60, 189 MQSeries 211 MSE, see media and storage extensions 12, 230 MULIC 154, 195 multi-format files, performance 226 multiple devices, 3494 177 multiple physical files 237 multiple systems attachment, 3494 152 N naming convention 8 network communications 101 network drive 136 network file transfer 234 
network group, removing a system 111 network security 101 network server description 125 network server storage space 125 networking, BRMS/400 97, 194 new tapes, enroll 188 non-barcode libraries 170 non-barcode reader library 183 normalization, database 246 Novell NetWare integration 123 O object description, retain 274 object locks, double save 222 object retention 279 value 233 object retrieval 241 object size 241 object types 261 ObjectConnect 50 omit libraries 35 ONLINE(*NO), 3494 169 ONLINE(*YES), 3494 169 Open Query File (OPNQRYF) 233 operations that do not invoke retrieval 234 operations that invoke retrieval 233 OPNQRYF, retrieve 233 optimum block size 23 option (ENDOPT) setting, end 185 OS/400 job scheduler 93 OS/400 recovery 197 outfile data 261 overview of BRMS/400 1 overview of IFS 123 P performance double save 222 file size 237 join logical files 226 LAN Server/400 135 multi-format files 226 multiple physical files 237 retrieve 237 planning for saving IFS 124 planning for upgrades to PowerPC AS 209 policies, BRMS/400 3, 31 policy archive 273 media 28 move 26 retrieve 231, 277 predicting object size 241 time 242 preparation for the recovery 195 Print Media Exceptions for BRM 60 print recovery report 192 priority slot 187 3570 155 priority, job 185 private authorities 220 processing job queue 40 subsystem 41 PRTMEDBRM 60 pseudo record-level archiving 254 Q Q1ABRMNET subsystem 98 Q1APRM 51 QA1ANET file 210 QAUTOCFG 167, 168 QAWUSRDMN 13 QBRM mode 101 QBRMS user profile 100, 109 QDLS 124 QNetWare 211 QPGMR user profile 101 QSYSLIBL 12 Query/400 233 Query/400, retrieve 233 QUSER user profile 101 Index 331 R random access data files 262 random mode 3570 155 tape libraries 187 re-activating enrolled tapes 188 re-archiving retrieved objects 229 Receive media information parameter 194 receiving media information 105 record level archive 254 record time stamp 259 recovering a specific control group 192 recovering an entire system 195 recovering Licensed Internal Code 197 recovering OS/400 197 recovering specific objects 205 Recovering Your Entire System report 195 recovery 2 BRMS/400 191 HRM 201 SRM 200 user profiles 199 recovery from a central point 194 recovery preparation 195 recovery report 53 *REPORT 192 printing 192 recovery steps 198 Recovery Volume Summary Report 195 re-inventory, libraries 188 Remove Journal Changes 235 Remove media information field 112 Remove Tape Cartridge 189 removing a system from the network group 111 renaming an archived object 243 renaming file members 243 renaming files 243 renaming libraries 244 report BRMS/400 15 centralized media audit 59 requested 176 resource allocation 176 restarting control group save 47, 54 restore 55 restore authority (RSTAUT) 220 restore IFS directories 142 Restore Object (RST) 124 Restore Object using BRM 219 restore options, retrieving 287 restoring IFS directories 142 restoring V3R1 IFS data 146 restricted state 3494 automation 189 AS/400 148 FSIOP 132 Integrated PC Server 132, 133, 148 SAVSYS recovery 197 Resume Retrieve using BRM 232, 278, 283 resynchronizing BRMS/400 after an upgrade 215 Retain object description parameter 274 retrieval *DELAY 232, 241, 283 *NONE 233 *NOTIFY 232, 241, 282 *SBMJOB 232, 282 *VERIFY 231, 239, 241, 280 allow object differences 279 application design 245 ASP overflow 242 CA/400 file transfer 233 CPYF 233 CRTDUPOBJ 233 CRTxxxPGM 234 Database Open 233 DFU 233 DSPPFM 233 file size performance 237 interactive applications 238 journal changes 233 key factors 240 logical files 225 member level 
changes 236 multiple physical files 237 network file transfer 234 object retention 279 object size 241 object types 261 operations that invoke 233 performance 237 Query/400 233 responses 280, 282, 283 SAVxxx ACCPTH(*YES) 234 SQL/400 233 time 242 retrieval considerations 230 retrieve 2 retrieve authority 286 Retrieve authorization parameter 278 retrieve confirmation 278 retrieve mode 238 batch applications 239 for batch jobs 239 interactive applications 238 retrieve objects, re-archive 229 retrieve policy security 287 setup 277 retrieve structure, BRMS/400 267 retrieved objects 229 retrieving objects 241 RISC, changing the device description 174 RMVJRNCHG 235 RMVMLD 173 RMVTAPCTG 189 ROBOTDEV 199 RS232 configuration using the 3494 170 RSMRTVBRM 232, 278, 283 in batch 285 interactively 286 332 Backup Recovery and Media Services for OS/400 RST 124, 146 RSTAUT (restore authority) 220 RSTOBJBRM 219 rules, appending to media 44 S SAV 124, 146 Save access paths parameter 274 save and restore hints 147 save authority for IFS 127 save files 58, 193 save object (SAV) 124 save recommendations for IFS 136 Save Save File using BRM 36, 193 Save Strategy Exceptions Report 60 save strategy for IFS 136 save with storage freed 217, 218 saves, unattended 49 save-while-active 72 save-while-active implementation 73 Save-while-active parameters 74 saving BRMS/400 recovery data 193 saving IFS using BRMS/400 137 saving LAN Server/400 user data 127 saving media information 58 saving spooled files 84 saving user information for upgrades 211 saving V3R1 IFS data 146 SAVLIBBRM 69 SAVMEDIBRM 58 SAVSAVFBRM 36, 58 SAVSYS 87, 160, 197 SAVSYSBRM 69 SAVxxx ACCPTH(*YES) 234 SBMNWSCMD 130 scheduling jobs 91 archive 276 scratch pool 2, 8 scratch volumes 50 SDC (system data compression) 301 secure location 101 secure retrieve policy 287 securing the retrieve policy 287 security, network 101 selecting devices, 3494 179 sequential mode, 9427 154 server storage space 125 Set Retrieve Controls for BRM 277 Set Retrieve using BRM 231 set up retrieve policy 277 SETRTVBRM 231, 277 setting up BRMS/400 for IFS 138 setting up the tape device for SAVSYS recovery 197 setting, end option (ENDOPT) 185 SHARE(*NO) 49 SHARE(*YES) 49 shared device support 22, 213 shared media 24, 25 shared media library 25 side-by-side, resynchronizing 215 sign-off interactive users 69 source file members, archive 224 source files 263 special authority 131 special cartridge identifiers 184 splitting database 248 spooled files, save 84 SQL/400 233 SRM (system resource management) 200 SST (System Service Tools) 174 stand-alone device mode 160 stand-alone mode, 3494 160 stand-alone status value 175 stand-alone tape resource 190 Start Archive using BRM 276 Start Backup using BRM 50 Start Expiration using BRM 26 Start Maintenance BRM 26, 52 Start Recovery using BRM 52, 191 statistically random access data files 262 STG(*FREE) 218 stopping BRMS/400 for upgrades 211 storage freed 218 archive 274 storage location 9, 18, 30 storage management, hierarchical 217 storage space 124, 125 storage space restore 145 STRARCBRM 276 STRBKUBRM 50 STREXPBRM 26 STRMNTBRM 26, 52, 58, 232 walk-through 58 STRRCYBRM 52, 191 structure, LAN Server/400 125 Submit Network Server Command (SBMNWSCMD) 130 subsystem processing 41 subsystem, Q1ABRMNET 98 subsystems to end 296 subsystems to process 296 suitable application, archive 245 support home page 209 SWA message queue 73, 80 swapping, application 224 synchronization 72 synchronizing libraries 76 media maintenance 193 media movement 193 
recovery report 193 system data compression (SDC) 301 system name change 113 system policy 31 automatically backup media information 33 day start time 32, 33 presentation controls 32, 33 system recovery 195 SAVSYS 160 system recovery report 191 system resource management (SRM) 200 Index 333 System Service Tools (SST) 174 system value QALWUSRDMN 13 QSYSLIBL 12 T tape duplication, archive 227 tape file I/O 260 tape resource 213 Tape Volume Report 60 temporary data files 261 text, control group 72 third-party media libraries 23 time stamp, record 259 time when retrieving objects 242 transaction based members, data file 262 transaction based records, data file 262 U unattended saves 49 unprotected status 175 update history 222 updating device information, BRMS/400 181 upgrade delete MLDD 216 planning 209 preparation 209 restart BRMS/400 211 resynchronize 215 save user information 211 stop BRMS/400 211 Upgrade Assistant tool 211 usage limit 14 user exits 67 user index 13 user information 211 user profile QBRMS 100 QPGMR 101 QUSER 101 recovery 199 user queue 13 user space 13 users with password *NONE 131 using a stand-alone tape resource 190 V V2R3 5, 210 V3R0.5 5, 210 V3R1 IFS data 146 vary off FSIOP 134 Integrated PC Server 134 vary on FSIOP 134 Integrated PC Server 134 vary on devices 179 verify moves 27 Verify Moves using BRM 61 verifying media movement 27 Verifying moves parameter 27 verifying the BRMS/400 network 120 vertical data splitting 248 VFYMOVBRM 27, 61 VOL(*MOUNTED) 185 volume identifiers 8 Volume Statistics Report 60 Volume Threshold Report 60 W wait time, default 169 Work with Configuration List 101 Work with Configuration Status 166 Work with Control Groups 37, 66, 275 Work with Devices using BRM 21 Work with Job Schedule Entries 277 Work with Library Media using BRM 159 Work with Link Information BRM 140 Work with Lists using BRM 128 work with media BRM 43 Work with Media Classes 24, 270 Work with Media Information 53, 70 Work with Media Libraries 23 Work with Media Library Media 183 Work with Media Library Status 159, 176 Work with Media Policies 214 Work with Move Policies 214, 270 Work with NWS Sessions 133 Work with Object using BRM 205 Work with Registration Information 12 Work with Saved Objects 219 Work with Spooled Files for BRM 85 Work with Storage Locations 19 Work with Tape Cartridges 183 WRKCFGSTS 166 WRKCLSBRM 24 WRKCTLGBRM 37, 66, 275 WRKDEVBRM 21 WRKJOBSCDE 277 WRKLBRM 128 WRKLNKBRM 145 WRKLOCBRM 19 WRKMEDBRM 43 WRKMEDIBRM 53, 70 WRKMLBBRM 23 WRKMLBSTS 159, 175, 176 WRKMLMBRM 159 WRKNWSSSN 133 WRKOBJBRM 205, 219 WRKPCYBRM *MED 214 WRKPCYBRM *MOV 214 WRKPCYBRM *RTV 277 WRKREGINF 12 WRKSPLFBRM 85 334 Backup Recovery and Media Services for OS/400 © Copyright IBM Corp. 1997, 2001 335 IBM Redbooks review Your feedback is valued by the Redbook authors. In particular we are interested in situations where a Redbook "made the difference" in a task or problem you encountered. Using one of the following methods, please review the Redbook, addressing value, subject matter, structure, depth and quality as appropriate. • Use the online Contact us review redbook form found at ibm.com/redbooks • Fax this form to: USA International Access Code + 1 845 432 8264 • Send your comments in an Internet note to redbook@us.ibm.com Document Number Redbook Title SG24-4840-01 Backup Recovery and Media Services for OS/400: A Practical Approach Review What other subjects would you like to see IBM Redbooks address? 
Please rate your overall satisfaction: O Very Good O Good O Average O Poor
Please identify yourself as belonging to one of the following groups: O Customer O Business Partner O Solution Developer O IBM, Lotus or Tivoli Employee O None of the above
Your email address:
The data you provide here may be used to provide you with information from IBM or our business partners about our products, services, or activities.
O Please do not use the information collected here for future marketing or promotional contacts or other communications beyond the scope of this transaction.
Questions about IBM’s privacy policy? The following link explains how we protect your personal information: ibm.com/privacy/yourprivacy/

Backup Recovery and Media Services for OS/400: A Practical Approach
SG24-4840-01 ISBN 0738422231

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers, and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment. For more information: ibm.com/redbooks

Backup Recovery and Media Services for OS/400: A Practical Approach
• Concepts and tasks to implement BRMS for OS/400 on AS/400e servers
• Tips and techniques to make your BRMS implementation run more smoothly
• Best practices for media and tape management

This IBM Redbook preserves the valuable information from the first edition of A Practical Approach to Managing Backup Recovery and Media Services for OS/400, SG24-4840, which is based on CISC implementations. The updates in this edition were made to reflect the documentation and URL values that were available at the time of publication. This publication is unique in its detailed coverage of using BRMS/400 with tape libraries within a single AS/400 CISC system, or within multiple AS/400 CISC configurations, across multiple levels of OS/400 ranging from OS/400 V3R1 through OS/400 V3R7. Coverage of BRMS for OS/400 on RISC and iSeries systems will be found in a redpaper that is planned for publication in 2001.
This redbook focuses on the installation and management of BRMS/400 using tape libraries such as the IBM 9427, IBM 3494, IBM 3570, and IBM 3590. It provides implementation guidelines for using BRMS/400 to automate your save, restore, archive, and retrieve operations. It also contains practical examples of managing your media inventory across multiple AS/400 CISC systems. This redbook also identifies functional differences between BRMS/400 and OS/400 CISC releases, where appropriate.

ibm.com/redbooks IBM Advanced Functions and Administration on DB2 Universal Database for iSeries Hernando Bedoya Daniel Lema Vijay Marwaha Dave Squires Mark Walas Learn about referential integrity and constraints See how Database Navigator maps your database Discover the secrets of Visual Explain International Technical Support Organization Advanced Functions and Administration on DB2 Universal Database for iSeries December 2001 SG24-4249-03 © Copyright International Business Machines Corporation 1994, 1997, 2000, 2001. All rights reserved. Note to U.S Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp. Fourth Edition (December 2001) This edition applies to Version 5, Release 1 of OS/400, Program Number 5722-SS1. Comments may be addressed to: IBM Corporation, International Technical Support Organization Dept. JLU Building 107-2 3605 Highway 52N Rochester, Minnesota 55901-7829 When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. Take Note! Before using this information and the product it supports, be sure to read the general information in “Special notices” on page 345. © Copyright IBM Corp. 1994, 1997, 2000, 2001 iii Contents Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Special notice. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi IBM trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii Part 1. Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Chapter 1. Introducing DB2 UDB for iSeries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.1 An integrated relational database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2 DB2 UDB for iSeries: An overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.1 DB2 UDB for iSeries basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.2 DB2 UDB for iSeries advanced functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.3 DB2 Universal Database for iSeries sample schema . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Chapter 2. Using the advanced functions: An Order Entry application. . . . . . . . . . . . 11 2.1 Introduction to the Order Entry application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.2 Order Entry application overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Order Entry database overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
14 2.4 DB2 UDB for iSeries advanced functions in the Order Entry database . . . . . . . . . . . . 17 2.4.1 Referential integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.4.2 Two-phase commit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Part 2. Advanced functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Chapter 3. Referential integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3.2 Referential integrity concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3.3 Defining a referential integrity relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.3.1 Constraint prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.3.2 Journaling and commitment control requirements . . . . . . . . . . . . . . . . . . . . . . . . 25 3.3.3 Referential integrity and access paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.4 Creating a referential constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.4.1 Primary key and unique constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3.4.2 Referential constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 3.4.3 Another example: Order Entry scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 3.4.4 Self-referencing constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 3.5 Constraints enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 3.5.1 Locking considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 3.5.2 Referential integrity rules ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.5.3 A CASCADE example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.6 Journaling and commitment control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.6.1 Referential integrity journal entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 3.6.2 Applying journal changes and referential integrity . . . . . . . . . . . . . . . . . . . . . . . . 49 3.7 Referential integrity application impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 3.7.1 Referential integrity I/O messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 3.7.2 Handling referential integrity messages in applications . . . . . . . . . . . . . . . . . . . . 51 iv Advanced Functions and Administration on DB2 Universal Database for iSeries 3.8 Referential integrity constraint management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 3.8.1 Constraint states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 3.8.2 Check pending . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . 53 3.8.3 Constraint commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 3.8.4 Removing a constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.8.5 Save and restore considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 3.8.6 Restore and journal apply: An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 3.8.7 Displaying constraint information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 Chapter 4. Check constraint. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 4.1.1 Domain or table constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 4.1.2 Referential integrity constraints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 4.1.3 Assertions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 4.2 DB2 UDB for iSeries check constraints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 4.3 Defining a check constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 4.4 General considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 4.5 Check constraint integration into applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4.5.1 Check constraint I/O messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 4.5.2 Check constraint application messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 4.6 Check constraint management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 4.6.1 Check constraint states. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 4.6.2 Save and restore considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 4.7 Tips and techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 Chapter 5. DRDA and two-phase commitment control . . . . . . . . . . . . . . . . . . . . . . . . . 83 5.1 Introduction to DRDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 5.1.1 DRDA architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 5.1.2 SQL as a common DRDA database access language . . . . . . . . . . . . . . . . . . . . . 84 5.1.3 Application requester and application server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 5.1.4 Unit of work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 5.1.5 Openness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 5.2 Comparing DRDA-1 and DRDA-2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 5.3 DRDA-2 connection management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 5.3.1 Connection management methods . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . 88 5.3.2 Connection states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 5.4 Two-phase commitment control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 5.4.1 Synchronization Point Manager (SPM). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 5.5 DB2 UDB for iSeries SQL support for connection management. . . . . . . . . . . . . . . . . . 92 5.5.1 Example of an application flow using DRDA-2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 5.6 DRDA-1 and DRDA-2 coexistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 5.7 Recovery from failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 5.7.1 General considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 5.7.2 Automatic recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 5.7.3 Manual recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 5.8 Application design considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 5.8.1 Moving from DRDA-1 to DRDA-2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 5.9 DRDA-2 program examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 5.9.1 Order Entry main program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 5.9.2 Deleting an order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 5.9.3 Inserting the detail rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 5.10 DRDA over TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 5.10.1 Configuring DRDA over TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 Contents v 5.10.2 Examples of using DRDA over TCP/IP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 5.10.3 Troubleshooting DRDA over TCP/IP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 5.11 DB2 Connect access to an iSeries server via TCP/IP. . . . . . . . . . . . . . . . . . . . . . . . 120 5.11.1 On the iSeries server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 5.11.2 On the workstation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 5.11.3 Consideration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Chapter 6. DB2 Import and Export utilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 6.2 DB2 UDB for iSeries Import utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 6.2.1 CPYFRMIMPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 6.2.2 Data load example (file definition file) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 6.2.3 Data load example (Data Definition Language) . . . . . . . . . . . . . . 
. . . . . . . . . . . 133 6.2.4 Parallel data loader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 6.3 DB2 UDB for iSeries Export utility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 6.3.1 CPYTOIMPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 6.3.2 Creating the import file (TOFILE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 6.3.3 Exporting the TOFILE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 6.3.4 Creating the import file (STMF). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 6.3.5 Exporting the STMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 6.4 Moving data from DB2 UDB 7.2 to DB2 UDB for iSeries . . . . . . . . . . . . . . . . . . . . . . 149 6.4.1 First approach: Using the Export and Import utilities . . . . . . . . . . . . . . . . . . . . . 149 6.4.2 Second approach: Using Export and CPYFRMIMPF . . . . . . . . . . . . . . . . . . . . . 152 6.5 Moving data from DB2 UDB for iSeries into DB2 UDB 7.2 . . . . . . . . . . . . . . . . . . . . . 152 6.5.1 Using the Import and Export utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 6.5.2 Using the CPYTOIMPF command and the Import utility. . . . . . . . . . . . . . . . . . . 153 Part 3. Database administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 Chapter 7. Database administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 7.1 Database overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 7.1.1 New in V5R1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 7.2 DB2 Universal Database for iSeries through Operations Navigator overview . . . . . . 161 7.2.1 Database functions overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 7.2.2 Database library functions overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 7.2.3 Creating an OS/400 library or collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 7.2.4 Library-based functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 7.2.5 Object-based functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 7.3 Run SQL Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 7.3.1 ODBC and JDBC connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202 7.3.2 Running a CL command under SQL script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 7.3.3 Run SQL Scripts example using a VPN journal . . . . . . . . . . . . . . . . . . . . . . . . . 208 7.3.4 Run SQL Scripts Run options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210 7.3.5 DDM/DRDA Run SQL Script configuration summary . . . . . . . . . . . . . . . . . . . . . 216 7.4 Change Query Attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 7.5 Current SQL for a job . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . 218 7.6 SQL Performance Monitors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 7.6.1 Starting the SQL Performance Monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222 7.6.2 Reviewing the SQL Performance Monitor results . . . . . . . . . . . . . . . . . . . . . . . . 226 7.6.3 Importing data collected with Database Monitor . . . . . . . . . . . . . . . . . . . . . . . . . 233 Chapter 8. Database Navigator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240 8.1.1 System requirements and planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240 vi Advanced Functions and Administration on DB2 Universal Database for iSeries 8.2 Finding Database Navigator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240 8.3 Finding database relationships prior to V5R1M0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 8.4 Database Navigator maps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 8.5 The Database Navigator map interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246 8.5.1 Objects to Display window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 8.5.2 Database Navigator map display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 8.6 Available options on each active icon on a map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 8.6.1 Table options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 8.6.2 Index options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 8.6.3 Constraint options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 8.6.4 View options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 8.6.5 Journal options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 8.6.6 Journal receiver options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 8.7 Creating a Database Navigator map. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261 8.7.1 Adding new objects to a map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 8.7.2 Changing the objects to include in a map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 8.7.3 Changing object placement and arranging object in a map . . . . . . . . . . . . . . . . 265 8.7.4 Creating a user-defined relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 8.8 The Database Navigator map icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 Chapter 9. Reverse engineering and Generate SQL . . . . . . . . . . . . . . . . . . . . . . . . . . 271 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 9.1.1 System requirements and planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
272 9.1.2 Generate SQL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 9.2 Generating SQL from the library in Operations Navigator. . . . . . . . . . . . . . . . . . . . . . 276 9.2.1 Generating SQL to PC and data source files on the iSeries server . . . . . . . . . . 281 9.2.2 Generating SQL from the Database Navigator map . . . . . . . . . . . . . . . . . . . . . . 289 9.2.3 Generating SQL from DDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297 Chapter 10. Visual Explain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301 10.1 A brief history of the database and SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 10.2 Database tuning so far . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 10.2.1 Query optimizer debug messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 10.2.2 Database Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 10.2.3 The PRTSQLINF command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 10.2.4 Iterative approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 10.3 Introducing Visual Explain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 10.3.1 What is Visual Explain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 10.3.2 Finding Visual Explain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 10.3.3 Data access methods and operations supported . . . . . . . . . . . . . . . . . . . . . . . 305 10.4 Using Visual Explain with the SQL Script Center . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 10.4.1 The SQL Script Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 10.4.2 Visual Explain Only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307 10.4.3 Run and Explain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307 10.5 Navigating Visual Explain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308 10.5.1 Menu options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 10.5.2 Action menu items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 10.5.3 Controlling diagram level of detail. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 10.5.4 Displaying the query environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 10.5.5 Visual Explain query attributes and values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 10.6 Using Visual Explain with Database Monitor data. . . . . . . . . . . . . . . . . . . . . . . . . . . 318 10.7 Non-SQL interface considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320 10.7.1 Query/400 and Visual Explain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320 Contents vii 10.7.2 The Visual Explain icons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
321 10.8 SQL performance analysis using Visual Explain. . . . . . . . . . . . . . . . . . . . . . . . . . . . 323 10.8.1 Database performance analysis methodology . . . . . . . . . . . . . . . . . . . . . . . . . 323 Appendix A. Order Entry application: Detailed flow . . . . . . . . . . . . . . . . . . . . . . . . . . 329 Program flow for the Insert Order Header program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330 Program description for the Insert Order Header program . . . . . . . . . . . . . . . . . . . . . . 330 Program flow for the Insert Order Detail program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 Program description for Insert Order Detail program . . . . . . . . . . . . . . . . . . . . . . . . . . 332 Program flow for the Finalize Order program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333 Program description for the Finalize Order program. . . . . . . . . . . . . . . . . . . . . . . . . . . 334 Appendix B. Referential integrity: Error handling example . . . . . . . . . . . . . . . . . . . . 337 Program code: Order Header entry program – T4249CINS. . . . . . . . . . . . . . . . . . . . . . . . 338 Appendix C. Additional material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 Locating the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 Using the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 System requirements for downloading the Web material . . . . . . . . . . . . . . . . . . . . . . . 341 How to use the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 IBM Redbooks collections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 Special notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 viii Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 ix Preface Dive into the details of DB2 Universal Database (UDB) for iSeries advanced functions and database administration. This IBM Redbook aims to equip programmers, analysts, and database administrators with all the skills and tools necessary to take advantage of the powerful features of the DB2 Universal Database for iSeries relational database system. It provides suggestions, guidelines, and practical examples about when and how to effectively use DB2 Universal Database for iSeries. 
This redbook contains information that you may not find anywhere else, including programming techniques for the following functions:

• Referential integrity and check constraints
• DRDA over SNA, DRDA over TCP/IP, and two-phase commit
• DB2 Connect
• Import and Export utilities

This redbook also offers a detailed explanation of the new database administration features that are available with Operations Navigator in V5R1. Among the tools, you will find:

• Database Navigator
• Reverse engineering and Generate SQL
• Visual Explain
• Database administration using Operations Navigator

This redbook is a follow-on from the previous redbook DB2 UDB for AS/400 Advanced Database Functions, SG24-4249-02. With the focus on advanced functions and administration in this fourth edition of the book, we moved the information about stored procedures and triggers into a new redbook – Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503.

Prior to reading this redbook, you should have some knowledge of relational database technology and the application development environment on the IBM eServer iSeries server.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO) Rochester Center.

Hernando Bedoya is an IT Specialist at the IBM ITSO in Rochester, Minnesota. He writes extensively and teaches IBM classes worldwide in all areas of DB2 UDB for iSeries. Before joining the ITSO more than one year ago, he worked for IBM Colombia as an AS/400 IT Specialist doing presales support for the Andean countries. He has 16 years of experience in the computing field and has taught database classes in Colombian universities. He holds a Masters Degree in computer science from EAFIT, Colombia. His areas of expertise are database technology, application development, and data warehousing.

Daniel Lema is an IT Architect at IBM Andean with 13 years of experience. Some of his projects include working with Business Intelligence, ERP implementation, and outsourcing. Previously, he worked as a Sales Specialist for the Midrange Server Product Unit (formerly the AS/400 Product Unit) helping customers and sales people in designing AS/400- and DB2/400-based solutions. He is a lecturer in Information Management and Information Technology Planning in the Graduate School at EAFIT University and other Colombian universities. He is also an Information Systems Engineer and is completing the degree project for his Applied Mathematics Master Degree at EAFIT University, having already finished the academic coursework.

Vijay Marwaha is an IT Specialist with IBM Global Services - Business Innovation Services, in Cranford, New Jersey. He has 30 years of experience in the computing field. He has worked with the System/38, AS/400, and iSeries for the last 16 years. His areas of expertise are database design, application design, and development for performance, data warehousing, and availability. He is also a chemical engineer and holds an MBA from the Indian Institute of Management Calcutta.

David F. Squires is an IT Specialist at the Technical Support center in the UK. He is a Level 2 Operations Specialist who deals with database and Main Storage issues. He has been working with the AS/400 system since it was announced back in 1988 and continues to work with the iSeries server today. He has more than 15 years of experience in the computing field.
Mark Walas is the Technical Director of Sierra Training Services Limited in England. Sierra Training is a leading iSeries and AS/400 education provider in the United Kingdom. He is currently responsible for the education strategy and course development of Sierra Training Services. He teaches iSeries and AS/400 courses extensively. He has 23 years of experience in the computing field.

This redbook is based on the projects conducted in 1994, 1997, and 2000 by the ITSO Rochester Center.

The advisor of the first edition of this redbook was:
Michele Chilanti, ITSO, Rochester Center

The authors of the first edition of this redbook in 1994 were:
Thelma Bruzadin, ITEC Brazil
Teresa Kan, IBM Rochester
Oh Sun Kang, IBM Korea
Alex Metzler, IBM Switzerland
Kent Milligan, IBM Rochester
Clarice Rosa, IBM Italy

The advisor of the second and third editions of this redbook was:
Jarek Miszczyk, ITSO, Rochester Center

The authors of the second edition of this redbook in 1997 were:
Hernando Bedoya, IBM Colombia
Deepak Pai, IBM India

The authors of the third edition of this redbook in 2000 were:
Christophe Delponte, IBM Belgium
Roger H. Y. Leung, IBM Hong Kong
Suparna Murthy, IBM India

Thanks to the following people for their invaluable contributions to this project:

Mark Anderson, Christopher Brandt, Michael Cain, Jim Cook, John Eberhard, Jim Flanagan, Mietek Konczyk, Kent Milligan, Kathy Passe, Tom Schrieber
IBM Rochester

Cintia Marques
IBM Brazil

Simona Pachiarini
IBM Italy

Andrew Fellows
IBM UK

Special notice

This publication is intended to help programmers, analysts, and database administrators to implement DB2 UDB for iSeries. The information in this publication is not intended as the specification of any programming interfaces that are provided by DB2 UDB for iSeries. See the PUBLICATIONS section of the IBM Programming Announcement for DB2 UDB for iSeries for more information about what publications are considered to be product documentation.

IBM trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:

e (logo)®, IBM®, AFP™, AIX®, APPN®, AS/400®, CICS®, COBOL/400®, DataPropagator™, DB2®, DB2 Connect™, DB2 Universal Database™, Distributed Relational Database Architecture™, DPI®, DRDA®, GDDM®, Informix™, iSeries™, MORE™, Redbooks™, Redbooks Logo, MVS™, Operating System/400®, OS/2®, OS/390®, OS/400®, PartnerWorld®, Perform™, RPG/400®, SAA®, S/390®, Sequent®, SP™, SP1®, SP2®, System/36™, TME®, Notes®

Comments welcome

Your comments are important to us! We want our IBM Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

• Use the online Contact us review redbook form found at: ibm.com/redbooks
• Send your comments in an Internet note to: redbook@us.ibm.com
• Mail your comments to the address on page ii.

© Copyright IBM Corp. 1994, 1997, 2000, 2001

Part 1. Background

This part introduces the basic concepts of DB2 Universal Database for iSeries. It provides a description of the Order Entry application used to illustrate the use of the advanced features of DB2 Universal Database for iSeries. Plus, it describes the sample database provided with DB2 Universal Database for iSeries in V5R1.

Chapter 1.
Introducing DB2 UDB for iSeries This chapter includes:  A general introduction to DB2 UDB for iSeries  An overview on the contents in this redbook  A definition of the sample schema provided in V5R1 1 4 Advanced Functions and Administration on DB2 Universal Database for iSeries 1.1 An integrated relational database Integration has been one of the major elements of differentiation of the iSeries server platform in the information technology marketplace. The advantages and drawbacks of fully integrated systems have been the subject of endless disputes in the last few years. The success of the iSeries server indicates that integration is still considered one of the premier advantages of this platform. Security, communications, data management, backup and recovery. All of these vital components have been designed in an integrated way on the iSeries server. They work according to a common logic with a common end-user interface. They fit together perfectly, since all of them are part of the same software—the Operating System/400 (OS/400). The integrated relational database manager has always been one of the most significant facilities that the iSeries server provides to users. Relying on a database manager integrated into the operating system means that virtually all the user data on the iSeries server is stored in a relational database and that the access to the database is implemented by the operating system itself. Some database functions are implemented at a low level in the iSeries server architecture, while some are even performed by the hardware. A recent survey pointed out that a significant percentage of iSeries server customers do not even know that all of their business data is stored in a relational database. This might sound strange if you think that we consider the integrated database as one of the main technological advantages of the iSeries server platform. On the other hand, this means that thousands of customers use, manage, back up, and restore a relational database every day without even knowing that they have it installed on their system. This level of transparency has been made possible by the integration and by the undisputed ease of use of this platform. These have been key elements of the success of the iSeries server database system in the marketplace. During the last couple of years, each new release of OS/400 has enhanced DB2 UDB for iSeries with a dramatic set of new functions. As a result of these enhancements, the iSeries server has become one of the most functionally rich relational platforms in the industry. DB2 UDB for iSeries is a member of the DB2 UDB family of products, which also includes DB2 for OS/390 and DB2 Universal Database. The DB2 UDB family is the IBM proposal in the marketplace of relational database systems and guarantees a high degree of application portability and a sophisticated level of interoperability among the various platforms that are participating in the family. 1.2 DB2 UDB for iSeries: An overview This section provides a quick overview of the major features of DB2 Universal Database for iSeries. A full description of the functions that are mentioned in this section can be found in several IBM manuals, for example:  Database Programming, SC41-5701  DDS Reference, SC41-5712  SQL Reference, SC41-5612 Chapter 1. Introducing DB2 UDB for iSeries 5 1.2.1 DB2 UDB for iSeries basics As previously mentioned, the major distinguishing characteristic of the iSeries server database manager is that it is part of the operating system. 
In practice, this means that the large majority of your iSeries server data is stored in the relational database. Although the iSeries server also implements other file systems in its design, the relational database on the iSeries server is the most commonly used by the customers. Your relational data is stored in the database, plus typical non-relational information, such as the source of your application programs. Physical files and tables Data on the iSeries server is stored in objects called physical files. Physical files consist of a set of records with a predefined layout. Defining the record layout means that you define the data structure of the physical file in terms of the length and the type of data fields that participate in that particular layout. These definitions can be made through the native data definition language of DB2 UDB for iSeries, called Data Description Specifications (DDS). If you are familiar with other relational database platforms, you are aware that the most common way to define the structure of a relational database is by using the data definition statements provided by the Structured Query Language (SQL). This is also possible on the iSeries server. The SQL terminology can be mapped to the native DB2 UDB for iSeries terminology for relational objects. An SQL table is equivalent to a DDS defined physical file. We use both terms interchangeably in this book. Similarly, table rows equate to physical file records for DB2 UDB for iSeries, and SQL columns are a synonym for record fields. Logical files, SQL views, and SQL indexes By using DDS, you can define logical files on your physical files or tables. Logical files provide a different view of the physical data, allowing columns subsetting, record selection, joining multiple database files, and so on. They can also provide physical files with an access path when you define a keyed logical file. Access paths can be used by application programs to access records directly by key or for ensuring uniqueness. On the SQL side, there are similar concepts. An SQL view is almost equivalent to a native logical file. The selection criteria that you can apply in an SQL view is much more sophisticated than in a native logical file. An SQL index provides a keyed access path for the physical data exactly the same way as a keyed logical file does. Still, SQL views and indexes are treated differently from native logical files by DB2 UDB for iSeries, and they cannot be considered to exactly coincide. “Database file” refers to any DB2 UDB for iSeries file, such as a logical or physical file, an SQL table, or view. Any database files can be used by applications for accessing DB2 UDB for iSeries data. DB2 UDB for iSeries in a distributed environment It is becoming more and more common for companies and businesses to organize their computing environment in a distributed way. The need to access remote data is constantly growing. DB2 UDB for iSeries provides several options for operating with remote platforms, both homogeneous and heterogeneous. The Distributed Data Management (DDM) architecture is the basis for distributed file access. You can create a DDM file on your iSeries server and have it direct your data access requests to a remote database file. This remote file can be another DB2 UDB for iSeries database file or a Customer Information Control System (CICS) managed data file residing on a host platform. Only native data access is allowed for DDM files. 
On top of the DDM architecture, IBM has created the Distributed Relational Database Architecture (DRDA). DRDA defines the protocol by which an SQL application can access remote tables and data. DB2 UDB for iSeries participates in this architecture, as do all the products of the DB2 Family. This means that your DB2 UDB for iSeries database can be accessed by any SQL application running on another iSeries server or on DB2 for OS/390, DB2 Universal Database, or DB2 for VM. A DB2 UDB for iSeries application with embedded SQL can access relational data stored in DB2 for OS/390, DB2 for VM, or on another iSeries server. The DRDA architecture is implemented directly into OS/400. IBM has also licensed DRDA to many other companies, such as Informix Software Inc., Ingres Corporation, and Oracle Corporation.

The iSeries server provides several other interfaces for client platforms to access DB2 UDB for iSeries data. Client Access for iSeries is a rich product that allows broad interoperability between a PC client and the iSeries server. For database access, Client Access for iSeries provides the PC with:

• A sophisticated file transfer function that allows subsetting of rows and columns
• Remote SQL APIs that you can embed in your PC programs to access data stored in DB2 UDB for iSeries tables
• An Open Database Connectivity (ODBC) interface to DB2 UDB for iSeries data that allows applications that use this protocol to access the iSeries server database

Terminology

Since the AS/400 system (which today is the iSeries server) was developed before SQL was widely used, OS/400 uses different terminology than SQL to refer to database objects. The terms and their SQL equivalents are listed in Table 1-1. The terms are used interchangeably throughout the book.

Table 1-1 SQL and OS/400 term cross reference

SQL term           iSeries term
Table              Physical file
View               Non-keyed logical file
Index              Keyed logical file
Column             Field
Row                Record
Schema             Library, collection
Log                Journal
Isolation level    Commitment control level

1.2.2 DB2 UDB for iSeries advanced functions

The main purpose of this redbook is to describe, in detail and with practical examples, the rich set of functions that have been implemented in DB2 UDB for iSeries. This section provides a quick overview of the most important advanced features.

Referential integrity

Referential integrity is a set of mechanisms by which a database manager enforces some common integrity rules among database tables. A typical example of a referential integrity rule involves a customer master table and an invoice table. You do not want invoices related to non-existing customers (every invoice must reference a valid customer). As a consequence of this rule, it makes sense that, if somebody attempts to delete a customer with outstanding invoices, the delete operation is rejected. Without referential integrity implementation, the only way to ensure that these rules are enforced is by writing appropriate routines in the applications. With referential integrity, this kind of rule can be implemented directly into the database. Once the rules are defined, DB2 UDB for iSeries automatically enforces them for the users.

Check constraint

Check constraints are validity checks that can be placed on fields of database physical files and columns of SQL tables. They ensure that the value being entered in a column of a table belongs to the set of valid values defined for that column.
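In SQL terms, such a rule is declared with a CHECK clause. The following sketch shows the general form for the scenario discussed next, an employee evaluation field whose legal values are 2 through 5; the table, column, and constraint names are assumptions chosen only for illustration:

ALTER TABLE mylib/EMPLOYEE
  ADD CONSTRAINT evaluation_range
  CHECK (EVALUATION IN (2, 3, 4, 5))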
For example, you may specify that the “legal” values for an employee evaluation field are defined as an integer, such as 2, 3, 4, or 5. Without the check constraint, a user can enter any integer value into such a column. To ensure that the actual value entered is as specified, you must use a trigger or code the rule in your application. DRDA and two-phase commit DRDA is the architecture that meets the needs of application programs requiring access to distributed relational data. This access requires connectivity to and among relational database managers. The database managers can run in like or unlike operating environments and communicate and operate with each other in a way that allows them to execute SQL statements on another computer system. There are several degrees of distribution of database management system functions. DB2 UDB for iSeries currently supports the following levels of distribution:  Remote unit of work With a remote unit of work, an application program executing in one system can access data at a remote system using SQL. A remote unit of work supports access to one database system within a unit of work (transaction). If the application needs to interact with another remote database, it has to commit or rollback the current transaction, stop the current connection, and start a new connection.  Distributed unit of work With a distributed unit of work, within one unit of work, an application executing in one system can direct SQL requests to multiple remote database systems. When the application is ready to commit the work, it initiates the commit, and commitment coordination is provided by a synchronization point manager. Whether an application can update multiple databases in one unit of work depends on the two-phase commit protocol support between the application's location and the remote systems. Procedures and triggers Stored procedures and triggers used to be part of the original book. Due to their importance we have decided to move them to the new redbook Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503. The first part of this book discusses, in detail, referential integrity, check constraints, and DRDA and two-phase commit. 8 Advanced Functions and Administration on DB2 Universal Database for iSeries 1.3 DB2 Universal Database for iSeries sample schema Within the code of OS/400 V5R1M0, there is a stored procedure that creates a fully functioning database. This database contains tables, indexes, views, aliases, and constraints. It also contains data within these objects. This database is used in this book to illustrate the new Database Navigator functions announced with Operations Navigator V5R1M0. This database also helps with problem determination since the program is shipped with the OS/400 V5R1M0 code. By calling a simple program, you can create a duplicate of this database on any system running V5R1M0. This enables customers and support staff to work on the same database for problem determination. This database can also be used as a learning tool to explain the various functions available at V5R1M0 with Database Navigator. Furthermore, it provides a method for teaching applications programmers or new database administrators how relationships can be built on the iSeries server between tables, schemas, indexes, etc. Working on the same database provides the ability for customers around the world to see the new functionality at V5R1M0. 
It also simplifies the setup environment for the workshops that are created in the future for use by customers. You create the database by issuing the following SQL statement: CALL QSYS.CREATE_SQL_SAMPLE('SAMPLEDBXX') This statement can be found in the pull-down box of the Run SQL Script window example shown in Figure 1-1. Figure 1-1 Example display showing the schema CREATE statement Note: The schema name needs to be in uppercase. This sample schema will also be used in future DB2 Universal Database for iSeries documentation. Chapter 1. Introducing DB2 UDB for iSeries 9 As a group, the tables include information that describes employees, departments, projects, and activities. This information makes up a sample database demonstrating some of the features of DB2 Universal Database for iSeries. An entity-relationship (ER) diagram of the database is shown in Figure 1-2. Figure 1-2 Sample schema: Entity-relationship diagram The tables are:  Department Table (DEPARTMENT)  Employee Table (EMPLOYEE)  Employee Photo Table (EMP_PHOTO)  Employee Resume Table (EMP_RESUME)  Employee to Project Activity Table (EMPPROJACT)  Project Table (PROJECT)  Project Activity Table (PROJACT)  Activity Table (ACT)  Class Schedule Table (CL_SCHED)  In Tray Table (IN_TRAY) Indexes, aliases, and views are created for many of these tables. The view definitions are not included here. There are three other tables created that are not related to the first set:  Organization Table (ORG)  Staff Table (STAFF)  Sales Table (SALES) Note: Some of the examples in this book use the sample database that was just described. 10 Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 11 Chapter 2. Using the advanced functions: An Order Entry application This chapter provides:  A description of the Order Entry scenario  The database structure  The logical flow of each application component  A highlight of the advanced functions used in this application 2 12 Advanced Functions and Administration on DB2 Universal Database for iSeries 2.1 Introduction to the Order Entry application This chapter describes how a simple Order Entry application can take advantage of the advanced functions that are available with DB2 UDB for iSeries. It provides a description of the complete application, in terms of logical flow and database structure. The actual implementation of this application can be found in the specific chapters where we exploit this application scenario to show you how to use the DB2 UDB for iSeries functions. By presenting an application scenario, we intend to show how the advanced DB2 UDB for iSeries functions can be applied to a real-life environment and the technical implications of using those functions. For this reason, the application may appear simplistic in some respects (for example, the user interface or some design choices). We present a simple, easy-to-understand scenario that includes most of the aspects we discuss throughout this redbook. We chose to develop the various components of the application using different programming languages to show how the various languages can interact with the DB2 UDB for iSeries functions. As mentioned previously, the stored procedures and triggers moved to a separate redbook called Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503. 2.2 Order Entry application overview The Order Entry application shown in Figure 2-1 represents a simple solution for an office stationery wholesaler. Chapter 2. 
Using the advanced functions: An Order Entry application

Figure 2-1 Application overview: Interaction of DB2 UDB for iSeries functions
[Figure 2-1 is a diagram showing the Insert Order Header, Insert Order Detail, Finalize Order, and Restart programs working against the Order Header, Order Detail, Customer, and Sales/Customer files on the local branch-office system, and the Stock and Supplier files on the remote head-office system, tied together by referential integrity constraints (RI), triggers (TRI), a stored procedure (SP), and two-phase commit (2PC).]

This application has the following characteristics:

• The wholesale company runs a main office and several branch offices.
• A requirement of the branch offices is their autonomy and independence from the main office.
• Data is, therefore, stored in a distributed relational database. Information about customers and orders is stored at the branch office, where the central system keeps information about the stock and suppliers.
• A main requirement of this company is the logical consistency of the database. All orders, for example, must be related to a customer, and all the products in the inventory must be related to a supplier. This is why we need to use referential integrity in this database. See 3.4.3, “Another example: Order Entry scenario” on page 32, which describes how referential integrity can be configured for this particular scenario.
• The sales representative contacts the customer over the telephone. Each sales representative is assigned a pool of customers. According to the policy of the sales division of this company, a sales representative is allowed to place orders only for a customer of their pool. This policy is needed to guarantee a fair distribution of the commissions on the sales representative’s turnover. This requirement can be effectively enforced by means of a trigger program that automatically checks the relationship between a customer and the sales representative when the order is placed. This is addressed in Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503.
• In placing an order, the sales representative first introduces some general data, such as the order date, the customer code, and so on. This process generates a row in the Order Header table.
• The sales representative then inserts one or more items for that specific order. If the specific item is out of stock, we want the application to look in the inventory for an alternative article. The inventory is organized in categories of products and, on this basis, the application performs a search. Since the inventory table is located remotely, we use a DRDA(*) connection between the systems. In addition, since the process of searching the inventory may involve many accesses to the remote database, a stored procedure is called to carry out this task.
• When the item or a replacement has been found, the inventory is updated, and a row is inserted in the local order detail table.
• At this point, we want to release the inventory row to allow other people to place a new order for the same product. We commit the transaction at this time. DB2 UDB for iSeries ensures the consistency of the local and remote databases, thanks to the two-phase commitment control support.
• When all order items have been entered, the order is finished and a finalizing order program is called.
This program can: – Add the total amount of the order to the Customer table to reflect the customers' turnover – Update the total revenue produced by the sales representative on this customer – Update the total amount of the order in the Order Header table  An update event of the Order Header table starts another trigger program that writes the invoice immediately at the branch office.  As we mentioned, the “atomic” logical transaction is completed when a single item in the order has been inserted to reduce the locking exposures. If the system or the job fails, we must be able to detect incomplete orders. This can be done when the user restarts the application. A simple restart procedure will check for orders having the total equal to zero (not “finalized”). These orders are deleted and the stock quantity of all the items is increased by the amount that we had reserved during the order placement. We can also present a choice menu to the user, asking whether the incomplete orders should be finalized. 2.3 Order Entry database overview The Order Entry application is based on a distributed database. Each branch office location keeps all the data related to its own customers in its local database. The information concerning the items available in the warehouse is stored in the remote database at the head office. The local database consists of these physical files/tables:  CUSTOMER table: Contains the information related to the customers  ORDERHDR table: With the data related to where the Order items are stored  ORDERDTL table: Where each row represents a Detail of an Order  SALESCUS table: Keeps the relationship between a sales representative and the customers for whom that sales representative is authorized to place orders Chapter 2. Using the advanced functions: An Order Entry application 15 The central database consists of two tables:  STOCK table: Contains information about the contents of the warehouse  SUPPLIER table: Contains information related to the suppliers Table 2-1 through Table 2-7 on page 16 show the record layouts for the files of both local and central databases. Table 2-1 CUSTOMER table Table 2-2 ORDERHDR table Table 2-3 ORDERHDR table Field name Alias Type Description CUSTOMER_NUMBER CUSBR CHAR(20) Customer number CUSTOMER_NAME CUSNAM CHAR(20) Customer name CUSTOMER_TELEPHONE CUSTEL CHAR(15) Customer phone number CUSTOMER_FAX CUSFAX CHAR(15) Customer fax number CUSTOMER_ADDRESS CUSADR CHAR(20) Customer address CUSTOMER_CITY CUSCTY CHAR(20) Customer city CUSTOMER_ZIP CUSZIP CHAR(5) Customer ZIP code CUSTOMER_CRED_LIM CUSCRD DEC(11,2) Customer credit limit CUSTOMER_TOT_AMT CUSTOT DEC(11,2) Customer total amount Field name Alias Type Description ORDER_NUMBER ORHNBR CHAR(5) Order number CUSTOMER_NUMBER CUSBR CHAR(5) Customer number ORDER_DATE ORHTE DATE Order date ORDER_DELIVERY ORHDLY DATE Order delivery date ORDER_TOTAL ORHTOT DEC(11,2) Order total ORDER_SALESREP SRNBR CHAR(10) Sales Rep. number Field name Alias Type Description ORDER_NUMBER ORHNBR CHAR(5 Order number PRODUCT_NUMBER PRDNBR CHAR(5) Product number ORDERDTL_QUANTITY ORDQTY DEC(5,0) Order detail quantity ORDERDTL_TOTAL ORDTOT DEC(9,2) Order detail total 16 Advanced Functions and Administration on DB2 Universal Database for iSeries Table 2-4 SALESCUS table Table 2-5 SUPPLIER table Table 2-6 STOCK table Table 2-7 STOCKPIC table Field name Alias Type Description SALESREP_NUMBER SRNBR CHAR(10) Sales Rep. number CUSTOMER_NUMBER CUSBR CHAR(5) Customer number SALES_AMOUNT SRAMT DEC(11,2) Sales rep. 
total amount for this customer Field name Alias Type Description SUPPLIER_NUMBER SPLNBR CHAR(5) Supplier number SUPPLIER_NAME SPLNAM CHAR(20) Supplier name SUPPLIER_TELEPHONE SPLTEL CHAR(15) Supplier phone number SUPPLIER FAX SPLFAX CHAR(15) Supplier fax number SUPPLIER ADDRESS SPLADR CHAR(20) Supplier address SUPPLIER_CITY SPLCTY CHAR(20) Supplier city SUPPLIER_ZIP SPLZIP CHAR(5) Suppler ZIP code Field name Alias Type Description PRODUCT_NUMBER PRDNBR CHAR(5) Product number PRODUCT_DESC PRDDES CHAR(20) Product description PRODUCT_PRICE PRDPRC DEC(7,2) Product unit price PRODUCT_AVAIL_QTY PRDPRC DEC(5,0) Product available quantity SUPPLIER_NUMBER SPLNBR CHAr(4) Supplier number PRODUCT_CATEGORY PRDCAT CHAR(4) Product category PROD_MIN_STOCK_QTY PRDQTM DED(5,0) Product minimum stock quantity Field name Alias Type Description PRODUCT_NUMBER PRDNBR CHAR(5) Product number PRODUCT_PICTURE PRDPIC BLOB Product picture Chapter 2. Using the advanced functions: An Order Entry application 17 2.4 DB2 UDB for iSeries advanced functions in the Order Entry database Figure 2-2 shows the Order Entry database structure and how the advanced database functions have been implemented. Figure 2-2 Order Entry application database structure As stated in the overview of this chapter, the main objective of presenting this application scenario along with this specific database design is to show how the functions provided by DB2 UDB for iSeries can be used and how they can work together in a single application. Let's analyze Figure 2-2 from each function standpoint. 2.4.1 Referential integrity On both the local and the remote system, the physical files/tables previously described represent entities tied to each other by logic and business relationships:  Relationships among the CUSTOMER, ORDERHDR, and SALESCUS tables: We want every order to refer to an existing customer, and we want to prevent anybody from deleting a customer that has related orders. Similarly, each sales representative must be in charge of existing customers, so that each sales representative in the SALESCUS file must be associated to a customer code that exists in the CUSTOMER table. LOCAL SYSTEM REMOTE SYSTEM TWO PHASE COMMIT SUPPLIER SPLNBR PK STOCK PRDNBR SPLNBR PK FK LEGEND: PK - PRIMARY KEY FK - FOREIGN KEY STORED PROCEDURE CUSTOMER SALESREP SRNBR CUSNBR CUSNBR PK Update Trigger ORHNBR CUSNBR PK FK Update Trigger Insert Trigger ORHNBR PRDNBR ORDERHDR ORDERDTL PK FK 18 Advanced Functions and Administration on DB2 Universal Database for iSeries These two relationships are described in Figure 2-2, where the referential integrity network for our database is explained.  Relationship between the ORDERHDR and ORDERDTL tables: We require that every detail item in the Order Detail table be related to an existing header in the Order Header table. Additionally, when an order has to be removed, we want the detail information to be deleted as well. This business rule is translated into the arrow linking the ORHNBR column in ORDERDTL to the same column in the ORDERHDR table.  Relationship between the STOCK and SUPPLIER tables: At the remote side, we have a business relationship between the STOCK and SUPPLIER tables. We need to know who provides us with each of our products, so we do not want to keep an item in the STOCK table if its supplier is not present in the SUPPLIER table. For the same reason, we cannot allow the deletion of a supplier as long as we have a product provided by that supplier stored in the STOCK table. 
This business rule is represented by the arrow linking the SPLNBR column in the STOCK file to the same one in the SUPPLIER table. We want these relationships to be enforced at any time, even when data is changed through interfaces, such as Interactive SQL or Data File Utility (DFU). For this reason, this scenario provides a good example of referential integrity implementation. As described in 3.2, “Referential integrity concepts” on page 22, these relationships can easily be translated into a proper referential integrity constraint. Once these constraints are defined, DB2 UDB for iSeries automatically keeps our data consistent with the business rules, regardless of the kind of interface is used to change the contents of the database. Application programmers do not need to implement any integrity checking in their applications, which provides benefits in terms of ease of development and maintenance. 2.4.2 Two-phase commit The company database is distributed between a central site, where the STOCK and SUPPLIER table are located, and several remote branch offices. The warehouse is located at the central site and is centrally managed there. On the other hand, the information related to the customers and orders is independently managed at each branch office. Consequently, our application will access both the local and the remote database. In a single unit of work, the application updates both the STOCK table on the remote side and the ORDERDTL table at the local side. DRDA-2 and two-phase commit guarantee the consistency of the entire database, even after system failures. See Chapter 5, “DRDA and two-phase commitment control” on page 83, for a complete discussion of DRDA-2 and two-phase commit. © Copyright IBM Corp. 1994, 1997, 2000, 2001 19 Part 2 Advanced functions This part describes, in detail and with practical examples, the rich set of functions that have been implemented in DB2 UDB for iSeries. Among the most important advanced features, you will find:  Referential integrity  Check constraints  DRDA and two-phase commit  Import and Export utilities Part 2 20 Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 21 Chapter 3. Referential integrity This chapter discusses:  Referential integrity concepts  Defining referential integrity relationships  Creating referential integrity constraints  Constraint enforcement  Application impacts of referential integrity  Constraint management 3 22 Advanced Functions and Administration on DB2 Universal Database for iSeries 3.1 Introduction Referential integrity deals with relationships between data values in a relational database. These data relationships are usually closely tied with business rules. For example, every part order must reference a valid customer. In DB2 UDB for iSeries, once these relationships and rules are defined to the database manager, the system automatically ensures that they are enforced at all times, regardless of the interface used to change the data (an application, the Data File Utility, Interactive SQL, and so on). For example, in the Order Entry database described in 2.3, “Order Entry database overview” on page 14, all the records in the Order file should have a customer number matching an existing customer in the Customer file. Moreover, a customer should not be removed when it has existing orders in the database. These are the types of data relationships that benefit from referential integrity. 
In a referential integrity environment, such relationships or business rules are defined to the DB2 UDB for iSeries with referential constraints. DB2 UDB for iSeries supports both a native and an SQL interface for associating constraints with your database files. Before referential integrity was available in DB2 UDB for iSeries, application programmers were responsible for enforcing these types of relationships in their programs. This programming effort is no longer needed now that the referential integrity support has been implemented in DB2 UDB for iSeries. Database management system (DBMS)-supported referential integrity provides greater application development productivity since programmers now have less code to write, test, and maintain. The integrity enforcement is done automatically by DB2 UDB for iSeries. Application integrity enforcement also had no protection against data changes made through other interfaces, such as an interactive PC user. The constraints are now enforced in all environments resulting in greater data integrity and consistency, leaving less room for user error. Referential integrity may also improve your application performance because the integrity checks performed by DB2 UDB for iSeries are more efficient than those done in an application program. The DBMS can use more efficient methods for enforcing these relationships at a lower level in the system that eliminates a majority of the overhead associated with application-level enforcement. 3.2 Referential integrity concepts In DB2 UDB for iSeries, the following table (physical file) constraints have been introduced:  Unique constraints  Primary key constraints  Referential constraints  Check constraints You can find a detailed definition of these constraints in Database Programming, SC41-5701, and SQL Reference, SC41-5612. This section provides the concepts and the basic definitions that are necessary for referential integrity. The terms table and physical file constraints refer to the same database definitions. Chapter 3. Referential integrity 23 Unique constraint A unique constraint is the rule that identifies a unique key in a database file. A unique key is a field or a set of fields in a physical file that must be unique, ascending, and can contain null-capable fields. Primary key constraint A primary key constraint identifies a primary key in a database file. A primary key is a field or a set of fields in a physical file that must be unique, ascending, and cannot contain null-capable fields. Parent key A parent key is a field or a set of fields in a physical file that must be unique, ascending, and contain null values. Both a primary and unique key can be the parent key in a referential constraint. Foreign key A foreign key is a field or a set of fields in a physical file whose value, if not null, must match a value of the parent key in the related referential constraint. The value of a foreign key is null if at least one of the key fields is null. Referential constraint A referential constraint is the file attribute that causes the database to enforce referential integrity for the defined relationship. Referential integrity Referential integrity is the state of a database in which the values of the foreign keys are valid. That is, each non-null foreign key value has a matching parent key value. Parent file The parent file contains the parent key in a referential constraint. Dependent file The dependent file contains the foreign key in a referential constraint. 
Referential constraint rules The referential constraint definitions also include delete and update rules that define which actions should be taken by the DBMS when a parent key value is updated or deleted.  Delete rule A delete rule is applied when a row in the parent file is deleted. A record is deleted from the parent file. Its parent key has matching foreign key values in the dependent file with: – A CASCADE rule: The system also deletes all of the matching records in the dependent file. – A SET NULL rule: The system sets all null-capable fields in the matching foreign keys to null. The foreign key fields that are not null-capable are not updated. Note: A new function was added in V4R2M0 that allows a primary key constraint to be defined where one or more columns in the key allow NULL values. When this condition is detected, a check constraint is implicitly added to the file to ensure that the column will not contain NULL values. This means that this check constraint will prevent any NULL values from being inserted into columns defined for the primary key. 24 Advanced Functions and Administration on DB2 Universal Database for iSeries – A SET DEFAULT rule: The system sets the matching foreign key values to their corresponding default value. This default foreign key value must also have a matching parent key value. – A RESTRICT rule: If at least one dependent record exists, the system prevents the parent key deletion. An exception is returned. – A NO ACTION rule: This is similar to the restrict rule. However, enforcement is delayed until the logical end of the operation. If the operation results in a violation, the system prevents the parent key deletion and returns an exception to the user.  Update rule An update rule is applied when a parent key is updated. An update is issued for a parent key value that is matching some foreign keys in the dependent file with: – A RESTRICT rule: If at least one dependent record exists, the system prevents the parent key update. An exception is returned. – A NO ACTION rule: This is the same as the Restrict rule. However, enforcement is delayed until the logical end of the operation. If the operation results in a violation, the system prevents the parent key update and returns an exception to the user.  Check constraint Check constraint ensures that users authorized to change a column's value use only values that are valid for that column. Referential cycle or cyclic constraints A set of referential constraints forms a referential cycle if any file in the chain depends on itself. A simple example of a referential cycle is given by self-referencing constraints (referential constraints that have a primary and foreign key in the same file). See 3.4.4, “Self-referencing constraints” on page 34, for further discussion and an example. Check pending This is the state of a referential constraint when potential mismatches exist between foreign and parent keys for a constraint relationship. 
3.3 Defining a referential integrity relationship This section describes the considerations that you should take into account when setting up a referential integrity relationship:  Prerequisites for a referential integrity constraint  Journaling and commitment control requirements for referential integrity constraints  Referential integrity access path considerations  Verifying the current integrity of your database 3.3.1 Constraint prerequisites You can find a full description of the prerequisites and limitations on the database files and the constraints themselves in Database Programming, SC41-5701. The basic requirement is that your parent key and foreign key must have matching field attributes and definitions. This section also points out some other considerations. Chapter 3. Referential integrity 25 When defining a referential constraint, the foreign key and parent key null attributes do not have to exactly match. When a foreign key contains null-capable fields, DB2 UDB for iSeries treats the entire foreign key value as null whenever any of the foreign key fields is null. This behavior is defined in the standards as a match option of no match. Currently, this is the only match option supported by DB2 UDB for iSeries. The null foreign key behavior is important because referential integrity only ensures that non-null foreign keys have a matching parent key value. You will experience better performance when your foreign key fields and parent key fields have identical null attributes. In fact, the non-null field attributes deliver the best performance. Ideally, your parent and foreign key fields should be fairly stable, something similar to a person's social security number. This is due to the fact that, to guarantee integrity, the system must verify referential integrity each time your parent and foreign key values change. Therefore, the less your foreign and parent keys change, the less time the DBMS spends verifying referential integrity. 3.3.2 Journaling and commitment control requirements When a referential constraint is defined with a delete or update rule other than RESTRICT, the system has to perform some actions on the corresponding foreign keys each time a delete or an update of the parent key takes place. For a delete case, for example, it deletes the matching dependent records when the delete rule is CASCADE. The DBMS must ensure that the parent key record and all matching dependent records are deleted. All of these record deletions must be considered as one logical operation. To ensure the atomicity of this operation, the system requires journaling and commitment control in some cases. If the delete or update rule is other than RESTRICT, both the parent and the dependent files must be journaled. In addition, the parent and dependent file must be journaled to the same journal receiver. See 3.6, “Journaling and commitment control” on page 43, for further discussion. Since the restrict and no action rules cause similar rule enforcement, the restrict rule provides better performance since journaling and commit are not required. 3.3.3 Referential integrity and access paths DB2 UDB for iSeries uses access paths (or indexes) to perform the referential constraint enforcement as efficiently as possible. The DBMS, however, does not require its own access path for this enforcement. When a constraint is added to a physical file, the system first tries to share an existing path. If one cannot be shared, a new access path is created. 
This sharing is similar to the sharing performed for logical files today. When a constraint is added to a physical file and an access path matching the constraint criteria exists, this access path is shared, and the ownership of the access path itself is transferred to the physical file. Similarly, if a logical file access path is shared, access path ownership is transferred from the logical file to the physical file. If an existing access path cannot be shared, a new one is created and owned by the physical file. The user does not have direct access to this newly created access path. Similarly, when a logical file or an SQL index is created on a physical file with existing constraints, the system tries to share the constraint access paths. See Database Programming, SC41-5701, for detailed information about access path sharing. 26 Advanced Functions and Administration on DB2 Universal Database for iSeries If the existing access path has more than one key field, the constraint only shares that access path if they are defined with the same key fields in the same sequence. Partial sharing is not allowed. If the existing access path has been created over FLD1, FLD2, and FLD3, when you create a constraint, that access path is shared only if the key of the constraint exactly matches FLD1, FLD2, and FLD3. If, for example, the constraint is defined over just FLD1 and FLD2, the system has to build a new access path. When an SQL index or logical file is deleted and the associated access path is shared by a constraint, the actual access path is left, and ownership remains with the associated physical file. Similarly, when a file constraint is removed and the access path is being shared, ownership is transferred back to the corresponding logical file or SQL index. If the constraint is not sharing an access path, both the constraint and the associated access path are removed. Physical file constraints are not separate objects, such as logical files and SQL indexes. Referential integrity constraints and their associated access paths are part of the file description. In fact, when a physical file is saved, the system also saves all the constraints and their associated access paths that have been defined for that file. On the contrary, when you save a physical file that has related logical files, the user is responsible for saving these logical files. For this reason, when a unique keyed access path is required, define a unique constraint instead of a logical file or an SQL index. Since they provide a keyed access path, physical file constraints are similar to logical files. If you run an SQL query on a file with constraints defined over it, the query optimizer evaluates all the access paths available: logical files, SQL indexes, and constraint access paths (see the example in Figure 3-1). For example, consider the ORDERDTL file in the Order Entry database. This file has a primary key constraint defined on the ORHNBR and PRDNBR fields, and a referential constraint with foreign key ORHNBR and parent key ORHNBR in the ORDERHDR file. We may create an SQL index ORDDTLIDX with the key fields ORHNBR and PRDNBR: CREATE INDEX ORDENTL/ORDDTLIDX ON ORDENTL/ORDERDTL (ORDER_NUMBER, PRODUCT_NUMBER) In this case, we find the following message in the job log: CPI3210: File ORDDTLIDX in ORDENTL shares access path. The second-level text specifies that the logical owner of the access path is member ORDERDTL in the ORDENTL/ORDERDTL file. 
On the other hand, you may create the ORDERDTL file without a primary key constraint and create a unique logical file over ORDER_NUMBER and PRODUCT_NUMBER. Afterwards, if you add a referential constraint over the same fields (see 3.4, “Creating a referential constraint” on page 28), you receive the following message: CPI3210: File ORDERDTL in ORDENTL shares access path. The second-level text specifies that the logical owner of the access path is member ORDERDTL in the ORDENTL/ORDERDTL file. The system shares the existing access path built when the logical file was created, but the ownership of the access path itself is transferred to the physical file ORDERDTL. If you update a record in ORDERDTL, the message shown in Figure 3-1 appears in the job log. Chapter 3. Referential integrity 27 Figure 3-1 SQL optimizer uses constraint access paths The second-level text for the message (shown in bold) is shown in Figure 3-2. Figure 3-2 Physical file constraints evaluated by the optimizer Display All Messages System: SYSTEM03 Job . . : P23KRZ75D User . . : ITSCID07 Number . . . : 003869 4 > DSPJOB ODP created. Blocking used for query. All access paths were considered for file ORDERDTL. Additional access path reason codes were used. Arrival sequence access was used for file ORDERDTL. ODP created. ODP deleted. 1 rows updated in ORDERDTL in ORDENTL. More... Press Enter to continue. F3=Exit F5=Refresh F12=Cancel F17=Top F18=Bottom Additional Message Information Message ID . . . . . . : CPI432C Severity . . . . . . . : 00 Message type . . . . . : Information Date sent . . . . . . : 05/19/01 Time sent . . . . . . : 19:20:08 Message . . . . : All access paths were considered for file ORDERDTL. Cause . . . . . : The OS/400 Query optimizer considered all access paths built over member ORDERDTL of file ORDERDTL in library ORDENTL. The list below shows the access paths considered. If file ORDERDTL in library ORDENTL is a logical file then the access paths specified are actually built over member ORDERDTL of physical file ORDERDTL in library ORDENTL. Following each access path name in the list is a reason code which explains why the access path was not used. A reason code of 0 indicates that the access path was used to implement the query. ORDENTL/ORDERDTL 4, ORDENTL/ORDDTL_HORD The reason codes and their meanings follow: 1 - Access path was not in a valid state. The system invalidated the access path. 2 - Access path was not in a valid state. The user requested that the access path be rebuilt. 3 - Access path is a temporary access path (resides in library QTEMP) and was not specified as the file to be queried. 4 - The cost to use this access path, as determined by the optimizer, was higher than the cost associated with the chosen access method. More... Press Enter to continue. F3=Exit F6=Print F9=Display message details F12=Cancel F21=Select assistance level 28 Advanced Functions and Administration on DB2 Universal Database for iSeries As highlighted in bold in Figure 3-2, ORDENTL/ORDERDTL is the access path shared by the primary key and the SQL index ORDDTLIDX. ORDENTL/ORDDTL_HORD is the access path created for the referential constraint ORDDTL_HORD. File availability When adding a referential constraint, the DBMS exclusively locks the file and access paths involved. The system must then verify that every foreign key value is valid. This add and verification process can take as little as several seconds or minutes to complete. 
When the existing files contain a large number of records (hundreds of millions), this process can possibly run for hours. The add process is much quicker when the constraint access paths are shared instead of building them from scratch. Consider the impact on file availability before you create a constraint during normal system activity. Referential integrity verification queries Before you create referential constraints over existing files, you may want to check if any mismatches exist between your candidate parent and foreign keys. Unmatched (or orphan) foreign key values can be determined with one of the following queries. In these queries, DEPFILE is the dependent file with a foreign key consisting of FKEYFLD, while PARFILE and PKEYFLD are the parent file and parent key: SELECT * FROM mylib/DEPFILE WHERE FKEYFLD NOT IN (SELECT PKEYFLD FROM mylib/PARFILE) or OPNQRYF FILE((mylib/DEPFILE) (mylib/PARFILE)) FORMAT(MYLIB/DEPFILE) JFLD((DEPFILE/FKEYFLD PARFILE/PKEYFLD)) JDFTVAL(*ONLYDFT) In most cases, the queries take longer to run than the system verification process performed during the execution of the Add Physical File Constraint (ADDPFCST) or Change Physical File Constraint (CHGPFCST) commands. Be careful with the verification queries on large files. This may be a good place for using DB2 UDB for iSeries Query Governor (CHGQRYA). 3.4 Creating a referential constraint This section discusses the interfaces and commands that you can use to add a referential constraint and create a referential integrity network. A referential integrity network is a set of physical files linked by referential constraints. We also use the term cascade network to indicate a referential integrity network where the constraints are linked by delete cascade rules. Two interfaces are available for creating physical file (or table) constraints:  The native interface that supplies the new CL command, Add Physical File Constraint (ADDPFCST), to add a physical file constraint to a database file  The SQL interface that provides: – CREATE TABLE statement: Has been enhanced with the CONSTRAINT clause that allows a table constraint to be added when creating a table. – ALTER TABLE statement: Allows a table constraint to be added to an existing table with the ADD clause. Chapter 3. Referential integrity 29 Either interface can be used to define a constraint over physical files or SQL tables. However, only SQL supports a constraint definition at table creation time. 3.4.1 Primary key and unique constraints The first step in creating a referential constraint is to identify the parent key. You can use a unique or primary key constraint to identify the parent key. Only one primary key constraint can be associated with a physical file. However, you can define multiple unique constraints over the same file. When a primary key constraint is added to a physical file, the associated access path becomes the primary access path of the file (for example, the access path used to access the file when the OPNDBF command is issued). If you want to define a primary key or a unique constraint over your CUSTOMER file with customer number (CUSNBR) as the parent key, you have several options from which you can choose. At creation time, you can define the primary key or unique constraint on the SQL CREATE TABLE statement: CREATE TABLE mylib/CUSTOMER (CUSTOMER_NUMBER FOR COLUMN CUSNBR CHAR (5) NOT NULL , ............ other fields ........... 
, CONSTRAINT customer_key PRIMARY KEY (CUSNBR)) Or, similarly, you can define: CREATE TABLE mylib/CUSTOMER (CUSTOMER_NUMBER FOR COLUMN CUSNBR CHAR (5) NOT NULL , ............ other fields ........... , CONSTRAINT customer_key UNIQUE (CUSNBR)) You can also easily add constraints to existing files. In this case, the existing records must not contain any duplicate values for the unique or primary key fields. If the system finds duplicate values, the constraint is not added, and an error message is returned. With the native interface, you must issue the ADDPFCST command with the following parameters: ADDPFCST FILE(mylib/CUSTOMER) or ADDPFCST FILE(mylib/CUSTOMER) TYPE(*PRIKEY) TYPE(*UNQCST) KEY(CUSNBR) KEY(CUSNBR) CST(customer_key) CST(customer_key) With SQL, the equivalent action takes place with the following two ALTER TABLE statements: ALTER TABLE mylib/CUSTOMER or ALTER TABLE mylib/CUSTOMER ADD CONSTRAINT customer_key ADD CONSTRAINT customer_key PRIMARY KEY (CUSNBR) UNIQUE (CUSNBR) Note: In DB2 UDB for iSeries, the SQL interfaces allow you to specify a column-name longer than 10 characters. If the column-name is longer than 10 characters, a system-column name is automatically generated. The SQL constraint interface supports both the column-name and the system-column-name. In contrast, only the system-column-name can be specified when using the native interface for constraint processing. See the SQL Reference, SC41-5612, for further information. 30 Advanced Functions and Administration on DB2 Universal Database for iSeries If the physical file that was created is uniquely-keyed (with DDS), the associated access path is the primary access path and a potential parent key. In this case, a primary key constraint can be created over this file only when its fields match those of the file's primary access path. A unique constraint can be defined over any set of fields in the file capable of being a unique key. If a physical file was not created as a unique-keyed file, a user cannot add any primary key constraint to the file. Only unique constraints can be added. If the parent file does not have an existing keyed access path that can be shared for the primary key or unique constraint, the system creates one constraint. 3.4.2 Referential constraint Now consider the Order Entry Database structure and the business rule existing between CUSTOMER and ORDERHDR files as described in 2.4.1, “Referential integrity” on page 17. In that scenario, note these points:  A user does not want anyone to create an order for a customer that does not exist in the database. This means that we want to prevent anyone from inserting a new record in the ORDERHDR file if its corresponding Customer Number (CUSNBR) is not in the CUSTOMER file. This rule can be translated into a referential integrity constraint between CUSTOMER (parent file) and ORDERHDR (dependent file), where CUSNBR in CUSTOMER is the parent key and CUSNBR in ORDERHDR the foreign key.  In addition, the user wants to prevent updates or removals of a customer in the CUSTOMER file when outstanding orders exist in the ORDERHDR file for this customer. To ensure this data relationship, the delete and update rule should be set to RESTRICT or NOACTION. Both the delete and update rules use RESTRICT for this particular example. Using SQL, the constraint can be defined when we create the ORDERHDR table: CREATE TABLE mylib/ORDERHDR (ORDER_NUMBER FOR COLUMN ORHNBR CHAR (5) NOT NULL , CUSTOMER_NUMBER FOR COLUMN CUSNBR CHAR (5) NOT NULL , ........ other fields ............. 
, CONSTRAINT orderhdr_cnbr FOREIGN KEY (CUSNBR) REFERENCES mylib/CUSTOMER (CUSNBR) ON DELETE RESTRICT ON UPDATE RESTRICT) Otherwise, if the ORDERHDR file already exists, use a native interface: ADDPFCST FILE(mylib/ORDERHDR) TYPE(*REFCST) KEY(CUSNBR) CST(orderhdr_cnbr) PRNFILE(mylib/CUSTOMER) PRNKEY(CUSNBR) DLTRULE(*RESTRICT) UPDRULE(*RESTRICT) Or, use the SQL interface: ALTER TABLE mylib/ORDERHDR ADD CONSTRAINT orderhdr_cnbr FOREIGN KEY (CUSNBR) REFERENCES mylib/CUSTOMER (CUSNBR) ON DELETE RESTRICT ON UPDATE RESTRICT Chapter 3. Referential integrity 31 During the creation of this referential constraint, the DBMS first tries to share an existing access path for the foreign key. If one cannot be shared, the DBMS creates an access path. Once the foreign key access path is identified, DB2 UDB for iSeries then verifies that every non-null foreign key value has a matching parent key. If the system finds invalid foreign key values during the creation of the referential constraint, the constraint is still added to the file. The DBMS also automatically disables the referential constraint and marks the relationship as check pending. However, if invalid foreign key values are found during constraint creation through the SQL interface, the constraint is not added to the file. Implicit creation of a primary key constraint DB2 UDB for iSeries allows you, in some cases, to define a referential constraint on a dependent file even if there is no primary or unique key constraint defined on the parent file. In these cases, a primary key constraint with a system-generated name is implicitly added to the parent file. A requirement for the implicit creation of the primary key constraint is that the fields of the parent file, chosen as parent key fields, satisfy the conditions for parent keys: unique and not null-capable. They must also exactly match the attributes of the foreign key. Figure 3-3 shows a situation where an implicit primary key constraint is being created. Figure 3-3 Implicit creation of a primary key constraint In the scenario previously described, the ADDPFCST statement generates two constraints:  MYCST, which is the referential constraint for the DEPENDF file  A system-generated constraint that is a primary key constraint on the PARENTF file This option is available only by using the ADDPFCST command. No implicit primary key is ever created by a CREATE TABLE or ALTER TABLE specifying a FOREIGN KEY constraint. F1 F2 DEPENDF CREATE TABLE mycoll/dependf (F1 char(10) NOT NULL (F2 char(15), ........) A B ..... PARENTF CREATE TABLE mycoll/parentf (A char(10) NOT NULL, (B char(15) NOT NULL,.... ADDPFCST FILE(MYCOLL/DEPENDF TYPE(*REFCST) KEY(F1) CST(MYCST) PRNFILE(MYCOLL/PARENTF PRNKEY(A) DLTRULE(*RESTRICT) UPDRULE(*RESTRICT) ..... 32 Advanced Functions and Administration on DB2 Universal Database for iSeries Multiple constraints You can add multiple constraints to a physical file in a single step by using the SQL CREATE TABLE statement, for example: CREATE TABLE mylib/ORDERHDR (ORDER_NUMBER FOR COLUMN ORHNBR CHAR (5) NOT NULL , CUSTOMER_NUMBER FOR COLUMN CUSNBR CHAR (5) NOT NULL , ........ other fields ............. 
, CONSTRAINT orderhdr_key PRIMARY KEY (ORHNBR) CONSTRAINT orderhdr_cnbr FOREIGN KEY (CUSNBR) REFERENCES mylib/CUSTOMER (CUSNBR) ON DELETE RESTRICT ON UPDATE RESTRICT) This statement creates an ORDERHDR file with an ORHNBR field as the primary key and CUSNBR as the foreign key in a referential constraint having CUSNBR in the CUSTOMER file as a parent key and both the delete and the update rules set to RESTRICT. 3.4.3 Another example: Order Entry scenario You now set up the referential integrity network for the Order Entry database. All the business rules described in 2.4.1, “Referential integrity” on page 17, can be translated into physical file constraints.  Key fields definition: – Customer_Number (CUSNBR) must be unique in the CUSTOMER file. – Order_Number (ORHNBR) must be unique in the ORDERHDR file. – Order_Number plus Product_Number (ORHNBR plus PRDNBR) must be unique in the ORDERDTL file. – SalesRep_Number plus Customer_Number (SRNBR plus CUSNBR) must be unique in the SALESCUS file. – Supplier_Number (SPLNBR) must be unique in the SUPPLIER file. – Product_Number (PRDNBR) must be unique in the STOCK file. Each of these identifies the primary access path and can potentially be defined as a parent key.  An order should not be inserted into the ORDERHDR file unless it references an existing customer in the CUSTOMER file. This relationship identifies a referential constraint between ORDERHDR and the CUSTOMER file. A customer should not be deleted or have their customer number changed when outstanding orders for this customer exist in the ORDERHDR file. This relationship can be enforced with the delete and update rules set to RESTRICT.  An order detail entry should not be inserted into the ORDERDTL file without referencing a valid order number in the ORDERHDR file. This relationship identifies a referential constraint between the ORDERDTL and ORDERHDR files. When an order is deleted, all of its order detail rows have to be deleted. An order number should not be updated when it has existing detail rows in the ORDERDTL file. This leads to choosing a delete rule of CASCADE and an update rule of RESTRICT.  A sales representative should not be inserted in the SALESCUS file until the associated customer exists in the CUSTOMER file. This identifies a referential constraint between the SALESCUS and CUSTOMER files. When a customer is removed, the corresponding sales representative information should be removed. Again, the customer number cannot Chapter 3. Referential integrity 33 be changed if it is referenced by records in the SALESCUS. Therefore, the update rule should be RESTRICT, and the delete rule should be CASCADE. Let's focus on the local database (the CUSTOMER, ORDERHDR, ORDERDTL, and SALESREP files) as shown in Figure 3-4. Figure 3-4 Order Entry referential integrity network To define referential constraints, the parent key has to exist before creating the referential constraint. Follow this process: 1. Create the CUSTOMER file with a primary constraint on CUSNBR: ADDPFCST FILE(mylib/CUSTOMER) TYPE(*PRIKEY) KEY(CUSNBR) CST(CustKey) 2. Create the SALESCUS file with: – A unique constraint on SRNBR, plus CUSNBR: ADDPFCST FILE(mylib/SALECUS) TYPE(*UNQCST) KEY((CUSNBR SRNBR)) CST(SalesCusKey) – A referential constraint with CUSNBR as a foreign key and CUSTOMER as a parent file: ADDPFCST FILE(mylib/SALECUS) TYPE(*REFCST) KEY(CUSNBR)CST(SalesCusCNbr) PRNFILE(mylib/CUSTOMER) PRNKEY(*PRIKEY) DLTRULE(*CASCADE) UPDRULE(*RESTRICT) 3. 
Create the ORDERHDR file with: – A primary constraint on ORHNBR: ADDPFCST FILE(mylib/ORDERHDR) TYPE(*PRIKEY) KEY(ORHNBR) CST(OrderHKey) – A referential constraint with CUSNBR as a foreign key and CUSTOMER as a parent file: ADDPFCST FILE(mylib/ORDERHDR) TYPE(*REFCST) KEY(CUSNBR) CST(OrderHdrCNbr) ORDERDTL delete CASCADE update RESTRICT CUSTOMER SALESCUS ORDERHDR delete CASCADE update RESTRICT delete CASCADE update RESTRICT 34 Advanced Functions and Administration on DB2 Universal Database for iSeries PRNFILE(mylib/CUSTOMER) PRNKEY(*PRIKEY) DLTRULE(*RESTRICT) UPDRULE(*RESTRICT) 4. Create ORDERDTL file with: A referential constraint with ORHNBR as a foreign key and ORDERHDR as a parent file: ADDPFCST FILE(mylib/ORDERDTL) TYPE(*REFCST) KEY(ORHNBR)CST(OrderHdrNum) PRNFILE(mylib/ORDERHDR) PRNKEY(ORHNBR) DLTRULE(*CASCADE) UPDRULE(*RESTRICT) Here is an example of the SALESCUS file using the SQL Create Table interface: CREATE TABLE ordentl/SALESCUST (SALESREP_NUMBER FOR COLUMN SRNBR CHAR(10) NOT NULL, CUSTOMER_NUMBER FOR COLUMN CUSNBR CHAR(5) NOT NULL, SALES_AMOUNT FOR COLUMN SRAMT DEC(11,2) NOT NULL WITH DEFAULT, CONSTRAINT salescus_key PRIMARY KEY (SRNBR, CUSNBR), CONSTRAINT salescus_cnbr FOREIGN KEY (CUSNBR) REFERENCES ordentl/CUSTOMER (CUSNBR) ON DELETE CASCADE ON UPDATE RESTRICT) 3.4.4 Self-referencing constraints A self-referencing constraint is a referential constraint that have a primary and foreign key in the same physical file. You can use these constraints when you want to enforce a hierarchical structure on your data because a self-referential constraint implements a tree-relationship among the records of your file where the root of the tree has a null foreign key value. When adding data to a file with a self-referential constraint, you have to follow a precise sequence. You need to start by inserting the “root” value. For example, in a company, the EMPLOYEE file contains all the employees; Employee_Number (EMPNO) is the primary key of the file. On the other hand, the manager of an employee must also be an employee and has to be the parent key of their associated employee records in the same file. In this case, you need to define a referential constraint with MGRID as a foreign key and EMPNO as a parent key. EMPLOYEE is both a parent and a dependent file: CREATE TABLE TEST/EMPLOYEE (EMPID INT NOT NULL WITH DEFAULT , NAME CHAR(30) NOT NULL WITH DEFAULT , MGRID INT , DEPTNO INT , POSITION CHAR(30) NOT NULL WITH DEFAULT , CONSTRAINT employee_key PRIMARY KEY (EMPID), CONSTRAINT employee_mgr FOREIGN KEY (MGRID) REFERENCES test/EMPLOYEE (EMPID) ON DELETE SET NULL ON UPDATE RESTRICT) In the EMPLOYEE file example, you can only insert an employee record if the corresponding manager has already been inserted. Therefore, the first record to insert is for the Chief Executive Officer. This record's foreign key value is NULL. Afterwards, your insertions follow each branch of the hierarchy down to the lowest level. Chapter 3. Referential integrity 35 3.5 Constraints enforcement The enforcement of referential constraints is performed during any update or deletion of parent records and any time a dependent record is updated or deleted. 3.5.1 Locking considerations The DBMS uses different locks on the parent and dependent rows when enforcing referential constraints. The lock type depends on the type of enforcement being performed and the delete and update rules. 
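Before looking at the enforcement sequences in detail, the following sketch illustrates the insertion order that the self-referencing EMPLOYEE constraint defined in 3.4.4 imposes. The employee numbers, names, and department values are purely illustrative; only the root record (the Chief Executive Officer) carries a null MGRID value, and every other record must name a manager that has already been inserted:

INSERT INTO TEST/EMPLOYEE VALUES(1, 'Anderson Carol', NULL, 100, 'Chief Executive Officer')
INSERT INTO TEST/EMPLOYEE VALUES(2, 'Baker John', 1, 200, 'Sales Manager')
INSERT INTO TEST/EMPLOYEE VALUES(3, 'Clark Susan', 2, 200, 'Sales Representative')

If the third statement is issued before the second, MGRID refers to a parent key value that does not yet exist, and the insert is rejected with a referential constraint violation (CPF502D).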
Foreign key enforcement The sequence for inserting a dependent row or updating a foreign key value to a non-null value is:  A shared lock (*SHRUPD) is obtained on the dependent file and file member.  An update lock is obtained on the dependent record being inserted or updated.  A read lock is established on the matching record of the parent file, if it exists. If a matching parent key value does not exist, a referential constraint violation is signaled (CPF502D), and the requested operation is rolled back. All locks are released at the end of the operation. NOACTION and RESTRICT rule enforcement NOACTION and RESTRICT rule enforcement do not require any data changes to the matching dependent records. The DBMS immediately performs RESTRICT enforcement. NOACTION enforcement is delayed until the logical end of the operation (see the example in Figure 3-7 on page 38). The sequence for updating or deleting a parent key value with a NOACTION or RESTRICT rule is: 1. A shared lock (*SHRUPD) is obtained on the parent file and file member. 2. An update lock is obtained on the parent record being deleted or updated. 3. A read lock is established on the first matching record in the dependent file, if any. If a matching foreign key value exists, a referential constraint violation is signaled (CPF503A), and the requested operation is rolled back. All locks are released at the end of the operation. CASCADE, SET NULL, and SET DEFAULT rules When the delete rule is CASCADE, SET NULL, or SET DEFAULT, deleting a parent record that has matching rows in the dependent file causes delete or update operations on the matching dependent rows. The sequence for deleting a parent key value with CASCADE, SET NULL, or SET DEFAULT rules is:  A shared lock (*SHRUPD) is obtained on the parent file and file member.  An update lock is obtained on the parent record being deleted.  A shared lock (*SHRUPD) is obtained only on the dependent file member. The system also logically opens and allocates the dependent file at this time.  All matching dependent records (if any) are allocated exclusively with an update lock, and the corresponding update or delete operation is executed. 36 Advanced Functions and Administration on DB2 Universal Database for iSeries These locks are released at the end of the logical operation or the next explicit user commit. The DBMS does not logically close and de-allocate the dependent file until the parent file is closed. Therefore, other system functions, such as CLRPFM, that need exclusive access to a file cannot work on the dependent file until the parent file is closed. If the system is unable to obtain the required locks, constraint enforcement cannot be performed and the requested operation is not allowed. This may happen, for example, when you have just deleted a parent row with a parent key value of “X” and DB2 UDB for iSeries is trying to “cascade” that deletion to the dependent file. However, another job is actually updating the dependent row that has a foreign key value of “X” at the same time. Therefore, the DBMS cannot obtain the required locks for a cascade rule. The parent row delete request is not allowed, and the following error message is returned indicating that constraint enforcement cannot be performed: CPF502E: Referential constraints could not be validated for member... See 3.7.1, “Referential integrity I/O messages” on page 50, for further discussion on the new CPF messages associated with referential integrity. 
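As a concrete illustration of this locking conflict, consider the CUSTOMER and SALESCUS files of the Order Entry network, whose referential constraint has a delete rule of CASCADE. This is only a sketch: the library name, customer number, and amount are illustrative, and the outcome assumes the update in the first job is still uncommitted when the delete runs:

Job A:  UPDATE mylib/SALESCUS SET SRAMT = SRAMT + 100 WHERE CUSNBR = '10509'

Job B:  DELETE FROM mylib/CUSTOMER WHERE CUSNBR = '10509'

Because Job A still holds an update lock on the dependent record, the cascade triggered by Job B cannot allocate that record within the file wait time. The delete is not allowed, and Job B receives message CPF502E.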
3.5.2 Referential integrity rules ordering When a physical file is the parent file for more than one referential constraint and these constraints have different delete rules, the DBMS sequences the rules as follows: 1. The RESTRICT rule is applied first. Therefore, if at least one of the constraints has the delete rule RESTRICT, the deletion is prevented, and none of the dependent records are updated or deleted. 2. CASCADE rule 3. SET NULL rule 4. SET DEFAULT rule 5. NOACTION rule If you have a cascade network, deleting a record in the parent file causes the deletion of all the matching records in the dependent file. If the dependent file is itself a parent file in another referential constraint, the deletion might propagate actions to the lower level and so on. If any failure occurs, the system rolls back all the changes. On the other hand, when you mix different delete rules in your referential integrity network, you can delete a parent record. This is true only if no dependent file involved is a parent in a referential constraint that has the delete rule RESTRICT or NOACTION, or none of the records being deleted has any dependent record. Figure 3-5 shows this situation. Chapter 3. Referential integrity 37 Figure 3-5 Delete propagation in a referential integrity network In the referential integrity network shown in Figure 3-5, a delete operation on PF01 causes:  A deletion of the dependent records in PF11. Each of these deletions, in turn, issues: – Updating the related dependent records in PF21, and setting the foreign key values to NULL. – Deleting all of the related dependent records in the PF22 file.  By deleting the dependent records in PF12, each of these deletions, in turn, causes: – Updating the related dependent records in PF24, and setting the foreign key values to their default values. – But, for the constraint existing between PF12 and PF23, the delete rule is RESTRICT. Therefore, if the records that are about to be deleted in PF12 have dependent records in PF23, their deletion is prevented. In turn, since it cannot delete all the records in the PF12 file, the system prevents even the deletion of the original record in PF01. In this example, note these points:  The delete operation from PF01 is executed.  The cascaded deletions to PF11 and PF12 are performed. As soon as the records are deleted from PF12, the RESTRICT rule is enforced.  The system issues an error message and rolls back the previous deletions, ending the implicit commitment control cycle. On the contrary, if you do not have a RESTRICT or NOACTION rule when the user or an application issues a delete on PF01, as shown in Figure 3-6, the following actions occur:  The delete from PF01 is executed.  The cascaded deletions to PF11 and PF12 are performed.  The cascaded deletions to PF22 are performed.  The SET NULL rule on PF21 is executed.  The SET DEFAULT rule on PF24 is handled.  If any failure occurs, the system rolls back all the changes. PF01 PF21 PF22 PF11 PF23 PF24 PF12 CASCADE CASCADE SETNULL RESTRICT CASCADE SET DEFAULT 38 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 3-6 Delete propagation in a referential integrity network Whether you use a RESTRICT rule or a NOACTION rule broadly depends on your application environment needs and whether you intend to add database triggers to your database. 
Even if you do not intend to define any trigger on your database, you may still want to differentiate between RESTRICT and NOACTION, especially if the parent key in the referential integrity relationship is subject to operations that affect multiple rows, such as an SQL UPDATE statement. Note: For a discussion on how DB2 UDB for iSeries sequences the referential integrity rules and the execution of trigger programs, refer to Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503. Consider the example shown in Figure 3-7. Figure 3-7 Impact of RESTRICT versus NOACTION If you want to update your PRICETABLE and lower all the prices by five dollars, you may use the following SQL statement: UPDATE PRICETABLE SET PRICE = PRICE - 5.00 The statement runs successfully if the referential integrity rule for the constraint shown in Figure 3-7 is NOACTION. After the first record is updated, a parent key value of $10.00 no longer exists for the INVENTORY file. However, a NOACTION rule allows the enforcement to be delayed until after all the rows have been updated. At this point, a parent key value of $10.00 exists and no constraint violation is signaled. PF01 PF21 PF22 PF11 PF24 PF12 CASCADE CASCADE SETNULL CASCADE SET DEFAULT Cheap 10.00 Bargain 15.00 Expensive 20.00 Outrageous 25.00 PRICELINE Category Price 5530 15.00 4353 10.00 1233 20.00 8163 15.00 9934 20.00 ItemNo Price Description INVENTORY FK PK Chapter 3. Referential integrity 39 3.5.3 A CASCADE example A database contains the following files:  ORDERH: Contains the Order Headers  DETAIL: Contains the items of any order  FEATURE: Contains all the features associated with the products in the DETAIL file In this case, a record cannot be inserted in FEATURE if the related product is not in the DETAIL file. Likewise, you cannot insert an order item in DETAIL if the related order header is not in ORDERH. On the other hand, when you delete an order, you should remove all the related items and all the corresponding features from the database. For this reason, you need to define two referential constraints:  The first one between FEATURE and DETAIL  The second one between DETAIL and ORDERH For both constraints, the delete rule must be CASCADE. The update rule can be either RESTRICT or NOACTION. 
Now, create the tables that were previously described: CREATE TABLE TEST/ORDERH (ORDER_NUMBER FOR COLUMN ORHNBR CHAR (5) NOT NULL, CUSTOMER_NUMBER FOR COLUMN CUSNBR CHAR (5) NOT NULL, ORDER_INS_DATE FOR COLUMN ORHDTE DATE NOT NULL, ORDER_DELIV_DATE FOR COLUMN ORHDLY DATE NOT NULL, ORDER_TOTAL FOR COLUMN ORHTOT DEC(11,2) NOT NULL WITH DEFAULT 0, CONSTRAINT ORDERH_KEY PRIMARY KEY (ORHNBR)) CREATE TABLE TEST/DETAIL (ORDER_NUMBER FOR COLUMN ORHNBR CHAR (5) NOT NULL, PRODUCT_NUMBER FOR COLUMN PRDNBR CHAR (5) NOT NULL, PRODUCT_QUANTITY FOR COLUMN PRDQTY DEC (5, 0) NOT NULL, PRODUCT_TOTAL FOR COLUMN PRDTOT DEC (9, 2) NOT NULL, CONSTRAINT DETAIL_KEY PRIMARY KEY (ORHNBR, PRDNBR), CONSTRAINT DETAIL_ORD FOREIGN KEY (ORHNBR) REFERENCES TEST/ORDERH (ORHNBR) ON DELETE CASCADE ON UPDATE RESTRICT) CREATE TABLE TEST/FEATURE (ORDER_NUMBER FOR COLUMN ORHNBR CHAR (5) NOT NULL, PRODUCT_NUMBER FOR COLUMN PRDNBR CHAR (5) NOT NULL, FEATURE_NUMBER FOR COLUMN FTRNBR CHAR (5) NOT NULL, FEATURE_QUANTITY FOR COLUMN FTRQTY DEC(5,0) NOT NULL, FEATURE_TOTAL FOR COLUMN FTRTOT DEC(9,2) NOT NULL, CONSTRAINT FTR_KEY PRIMARY KEY (ORHNBR, PRDNBR, FTRNBR), CONSTRAINT FTR_PRD FOREIGN KEY (ORHNBR, PRDNBR) REFERENCES TEST/DETAIL (ORHNBR,PRDNBR) ON DELETE CASCADE ON UPDATE RESTRICT) If TEST is not an SQL collection, you must explicitly start journaling the files to the same journal. The following commands create the journal and journal receiver and start journaling for ORDERH, DETAIL, and FEATURE: 40 Advanced Functions and Administration on DB2 Universal Database for iSeries CRTJRNRCV JRNRCV(mylib/JRNRCV) CRTJRN JRN(mylib/JRN) JRNRCV(mylib/JRNRCV) MNGRCV(*SYSTEM) DLTRCV(*YES) STRJRNPF FILE(TEST/ORDERH TEST/DETAIL TEST/FEATURE) JRN(mylib/JRN) You can insert a complete order interactively or through an application according to the following logic sequence: 1. Insert the order data into ORDERH. 2. Insert a product into DETAIL. If this item has features, insert the related features into FEATURE. 3. Repeat this point down to the last order item. If any error occurs during this process, issue a ROLLBACK. If all the operations end successfully, you may COMMIT the inserts. For example, you may insert the order data shown in Figure 3-10 on page 43. If you try to insert a dependent record before you insert the related parent record, the system cannot perform the insertion, and an error message is issued. In our example, the following insert statement may be performed before you insert the corresponding order header data in ORDERH: INSERT INTO TEST/DETAIL VALUES ('77120', '00200', 5, 500) In this case, the system issues the message: CPF502D: Referential constraint violation on member DETAIL. The second-level text explains that you cannot insert that record because it does not match any parent key (Figure 3-8). Chapter 3. Referential integrity 41 Figure 3-8 Inserting a foreign key that does not match any parent key value Likewise, you may try to update a row in DETAIL having matching records in FEATURE, for example: UPDATE TEST/DETAIL SET PRDNBR = '99999' WHERE PRDNBR = '00420' In this case, the system issues the following message: CPF503A: Referential constraint violation on member DETAIL. The second-level text explains that you cannot update that product number because it has depending features (Figure 3-9). Additional Message Information Message ID . . . . . . : CPF502D Severity . . . . . . . : 30 Message type . . . . . : Notify Date sent . . . . . . : 06/05/01 Time sent . . . . . . : 18:11:17 Message . . . . 
: Referential constraint violation on member DETAIL. Cause . . . . . : The operation being performed on member DETAIL file DETAIL in library TEST failed. Constraint DETAIL_ORD prevents record number 0 from being inserted or updated in member DETAIL of dependent file DETAIL in library TEST because a matching key value was not found in member ORDERH of parent file ORDERH in library TEST. If the record number is zero, then the error occurred on an insert operation. The constraint rule is 1. The constraint rules are: 1 -- *RESTRICT 2 -- *NOACTION Recovery . . . : Either specify a different file, change the file, or change the program. Then try your request again. More... Press Enter to continue. F3=Exit F6=Print F9=Display message details F12=Cancel F21=Select assistance level 42 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 3-9 Updating a parent key that has matching foreign keys Figure 3-10 shows how deleting from one of the files propagates to the dependent files. For example, the deletion of product number 00420 from DETAIL issues the deletion of three records in FEATURE. Deleting order number 77120 causes the deletion of three records in DETAIL. Each of these propagates the deletion to its matching records in FEATURE. With a single statement, all the matching rows in the cascade network are deleted: DELETE FROM TEST/ORDERH Here, ORHNBR = '77120'. Additional Message Information Message ID . . . . . . : CPF503A Severity . . . . . . . : 30 Message type . . . . . : Sender copy Date sent . . . . . . : 06/05/01 Time sent . . . . . . : 18:27:26 Message . . . . : Referential constraint violation on member DETAIL. Cause . . . . . : The operation being performed on member DETAIL file DETAIL in library TEST failed. Constraint FTR_PRD prevents record number 3 from being deleted or updated in member DETAIL of parent file DETAIL in library TEST because a matching key value exists in member FEATURE of dependent file FEATURE in library TEST. The constraint rule is 1. The constraint rules are: 1 -- *RESTRICT 2 -- *NOACTION Recovery . . . : Either specify a different file, change the file, or change the program. Then try your request again. Possible choices for replying to message . . . . . . . . . . . . . . . : More... Press Enter to continue. F3=Exit F6=Print F9=Display message details F12=Cancel F21=Select assistance level Chapter 3. Referential integrity 43 Figure 3-10 Example of a cascade network In a cascade network with multiple levels, DB2 UDB for iSeries implements what is called the breadth cascade as opposed to the depth cascade implemented elsewhere. In the scenario described in Figure 3-10, DB2 UDB for iSeries deletes the record from the Order Header file first. Then, it deletes all the records from the DETAIL file, and then, all the records from the FEATURE file. 3.6 Journaling and commitment control As stated in 3.3.2, “Journaling and commitment control requirements” on page 25, if a referential integrity network has update and delete rules other than RESTRICT, the DBMS requires journaling and commitment control. Again, this requirement helps DB2 UDB for iSeries ensure the atomicity of operations that change or delete multiple records due to referential constraints. Either all or none of the record operations must complete. For example, you may delete a record that activates a chain of cascaded deletes. 
If some failure occurs during the cascade process before the DBMS can delete all the dependent records, all the records deleted so far are undeleted and the parent and dependent files are returned to their previous state. Journaling and commitment control enable the DBMS to ensure this type of transaction atomicity. Both the parent and dependent files must be journaled and journaled to the same receiver. Technically, only the parent file needs to be journaled for NO ACTION rules. In addition, the user is responsible for starting the journaling of their physical files. ORHNBR CUSNBR ORHDTE ORHDTY ORHTOT 77120 00123 1994-05-31 1994-06-30 1300 ORDERH ORHNBR PRDNBR PRDQTY PRDTOT 77120 77120 77120 00200 00420 00500 5 10 8 500 400 400 DETAIL FK PK ORHNBR PRDNBR FTRNBR FTRQTY FTRTOT 77120 77120 77120 77120 77120 77120 00500 00500 00420 00420 00420 00200 GK004 RF321 QQ997 QQ001 RD441 YH532 1 1 1 2 1 2 50 20 60 40 10 80 FEATURE PK FK 44 Advanced Functions and Administration on DB2 Universal Database for iSeries However, the user can use system change journal management when setting up the journaling environment to offload journal management responsibilities to the system. If MNGRCV(*SYSTEM) and DLTRCV(*YES) are specified on the CRTJRN or CHGJRN commands, the system automatically manages the attachment of a new journal receiver and deletes the old receiver after the new one has been attached. Therefore, the user can choose to start journaling and let the system take care of the management work. In contrast, the system implicitly starts a commitment control cycle for the user if the delete or update rule requires commitment control whenever the current application or user is running with no commitment control. This implicit commitment control cycle is transparent to the user and application program. If any failure occurs before the update or delete operation has been carried out by the system, all the changes related to the database operation are rolled back automatically. Other changes previously made by the application are not affected by this automatic rollback. Let's consider the example shown in Figure 3-11, where the application working on those files is not using commitment control. Figure 3-11 System-started commitment control cycle When the DELETE operation is performed, DB2 UDB for iSeries activates an implicit commitment control cycle 2. If a failure occurs in 3, the records that were removed are placed back into the files. Any changes in 1 are not affected by an automatic rollback. Figure 3-12 shows the same scenario as previously described, but with a native RPG ILE program handling the delete cascade. UPDATE ..... 1 remove 77120 from ORDERH INSERT ..... 1 remove 00200 from DETAIL DELETE FROM TEST/ORDERH remove 00420 from DETAIL WHERE ORHNBR = '77120' ...... remove GK004 from FEATURE 2 remove RF321 from FEATURE ...... remove RD441 from FEATURE remove YH532 from FEATURE 3 Automatic Rollback Chapter 3. Referential integrity 45 Figure 3-12 A native application and a delete cascade 3.6.1 Referential integrity journal entries A new attribute has been added to the journal entries to identify which journal entries were created as a result of referential constraint enforcement. The term side-effect journal entries is used in this discussion to refer to these new entries. This side-effect information is identified by the new Ref Constraint (Yes/No) parameter in the Display Journal Entry Details display. 
If a record is deleted from a dependent file directly, the change is recorded into the journal with an entry specifying Ref Constraint is No. If the same record is deleted by DB2 UDB for iSeries as the result of enforcing a Delete CASCADE rule, the system records a side-effect journal entry having Ref Constraint set to Yes. If you consider the example in Figure 3-10 on page 43, when you delete a record from ORDERH, the system automatically removes all the related products and, for each product, all the corresponding features. When you remove order 77120, the system logs the journal entries shown in Figure 3-13: DELETE FROM TEST/ORDERH Here, ORHNBR = '77120'. FORDERH UF A E K DISK FAnother UF E K DISK COMMIT ... C UPDATE AnotherFmt 1 ... C WRITE ORDERH recordstr 1 ... ... C MOVEL '77120' keyval C keyval DELETE ORDERH 99 remove 77120 from ORDERH remove 00200 from DETAIL remove 00420 from DETAIL ...... 2 remove GK004 from FEATURE 3 remove RF321 from FEATURE ...... remove RD441 from FEATURE remove YH532 from FEATURE ... 46 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 3-13 Journal entries after deleting a parent record Referring to Figure 3-13, OP means Open Member, CL is Close Member, and DL means Delete Record. The BC entry corresponds to a Start Commitment Control operation, and the SC entry is a Start of Commit cycle (the delete action was performed with Commitment Control level *CHG). After the parent file is opened and the commitment control cycle is started, an application first deletes the parent record (Entry# - 2309). The DBMS then gains control and enforces the associated delete CASCADE rules, causing all the matching rows in the dependent file (all the products) and eventually all the features related to the products to be deleted. Side-effect journal entries (2311 through 2320) are logged as a result of the constraint enforcement performed by DB2 UDB for iSeries. If you use option 5 on the DL entry for the ORDERH file, the complete entry for the explicit parent key delete is shown in Figure 3-14. Display Journal Entries Journal . . . . . . : QSQJRN Library . . . . . . : TEST Type options, press Enter. 5=Display entire entry Opt Sequence Code Type Object Library Job Time 2304 F OP ORDERH TEST P23KRZ75D 22:37:02 2305 C BC P23KRZ75D 22:37:03 2306 C SC P23KRZ75D 22:37:03 2309 R DL ORDERH TEST P23KRZ75D 22:37:03 2311 R DL DETAIL TEST P23KRZ75D 22:37:04 2312 R DL DETAIL TEST P23KRZ75D 22:37:04 2313 R DL DETAIL TEST P23KRZ75D 22:37:04 2315 R DL FEATURE TEST P23KRZ75D 22:37:05 2316 R DL FEATURE TEST P23KRZ75D 22:37:05 2317 R DL FEATURE TEST P23KRZ75D 22:37:05 2318 R DL FEATURE TEST P23KRZ75D 22:37:05 2319 R DL FEATURE TEST P23KRZ75D 22:37:05 2320 R DL FEATURE TEST P23KRZ75D 22:37:05 2322 F CL ORDERH TEST P23KRZ75D 22:37:05 2323 C EC P23KRZ75D 22:37:06 Note: Until the parent file is closed (entry 2322) in this delete cascade operation, you cannot run the Change Journal (CHGJRN) command on this journal. This is due to the fact that the system requires all of the files involved in this logical transaction to be closed so that a synchronization point can be established for this journal. After this synchronization point is established, the system de-allocates the journal, making it available to any system function. Chapter 3. Referential integrity 47 Figure 3-14 Journal entry information for a delete cascade operation The corresponding entry details are shown in Figure 3-15. 
Figure 3-15 Application-related journal entry As shown in bold in Figure 3-15, the system reports that this delete operation was not the result of referential constraint enforcement. Display Journal Entry Object . . . . . . . : ORDERH Library . . . . . . : TEST Member . . . . . . . : ORDERH Sequence . . . . . . : 2309 Code . . . . . . . . : R - Operation on specific record Type . . . . . . . . : DL - Record deleted Entry specific data Column *...+....1....+....2....+....3....+....4....+....5 00001 '77120001232001-05-312001-06-30 ' Bottom Press Enter to continue. F3=Exit F6=Display only entry specific data F10=Display only entry details F12=Cancel F24=More keys Display Journal Entry Details Journal . . . . . . : QSQJRN Library . . . . . . : TEST Sequence . . . . . . : 2309 Code . . . . . . . . : R - Operation on specific record Type . . . . . . . . : DL - Record deleted Object . . . . . . . : ORDERT Library . . . . . . : TEST Member . . . . . . . : ORDERT Flag . . . . . . . . : 1 Date . . . . . . . . : 06/07/01 Time . . . . . . . . : 22:37:03 Count/RRN . . . . . : 2 Program . . . . . . : QCMD Job . . . . . . . . : 005547/ITSCID07/P23KRZ75D User profile . . . . : USERID07 Ref Constraint . . . : No Commit cycle ID . . : 2306 Trigger . . . . . . : No Press Enter to continue. F3=Exit F10=Display entry F12=Cancel F14=Display previous entry F15=Display only entry specific data 48 Advanced Functions and Administration on DB2 Universal Database for iSeries In contrast, the side-effect entry details all specify Ref Constraint as yes. For example, the complete entry 2311 is shown in Figure 3-16. Figure 3-16 Journal entry information for a dependent record This deletes product 00420. The corresponding detailed information is shown in Figure 3-17. Figure 3-17 Journal entry details for a referential integrity side-effect journal entry Display Journal Entry Object . . . . . . . : DETAIL Library . . . . . . : TEST Member . . . . . . . : DETAIL Sequence . . . . . . : 2311 Code . . . . . . . . : R - Operation on specific record Type . . . . . . . . : DL - Record deleted Entry specific data Column *...+....1....+....2....+....3....+....4....+....5 00001 '7712000420 ' Bottom Press Enter to continue. F3=Exit F6=Display only entry specific data F10=Display only entry details F12=Cancel F24=More keys Display Journal Entry Details Journal . . . . . . : QSQJRN Library . . . . . . : TEST Sequence . . . . . . : 2311 Code . . . . . . . . : R - Operation on specific record Type . . . . . . . . : DL - Record deleted Object . . . . . . . : DETAIL Library . . . . . . : TEST Member . . . . . . . : DETAIL Flag . . . . . . . . : 1 Date . . . . . . . . : 06/07/01 Time . . . . . . . . : 22:37:04 Count/RRN . . . . . : 3 Program . . . . . . : QCMD Job . . . . . . . . : 005547/ITSCID07/P23KRZ75D User profile . . . . : USERID07 Ref Constraint . . . : Yes Commit cycle ID . . : 2306 Trigger . . . . . . : No Press Enter to continue. F3=Exit F10=Display entry F12=Cancel F14=Display previous entry F15=Display only entry specific data Chapter 3. Referential integrity 49 Notice that the field marked in bold in Figure 3-17 means that this delete operation was performed by the DBMS due to referential constraint enforcement. 3.6.2 Applying journal changes and referential integrity When you apply or remove journal changes, DB2 UDB for iSeries does not allow referential constraints to prevent the recovery of your database files. 
Although each apply or remove change is allowed, the associated referential constraints are constantly verified to prevent you from violating the referential integrity of your database. If the journal change violates referential integrity, the constraint is marked as check pending, and the system continues on to the next journal entry. See the check pending discussion in 3.8.1, “Constraint states” on page 52. Moreover, during the process of applying or removing journal changes, update and delete rules are ignored. If you have a cascade delete rule, for example, removing a record from the parent file does not remove any of the dependent records. This is because the dependent record changes are also recorded in your journal with the side-effect journal entries discussed in 3.6.1, “Referential integrity journal entries” on page 45. These entries can be applied as well. This design allows you to use the journal entries to recover your database files to a known state without violating the integrity of your database. To avoid check pending situations, you must apply or remove journal changes on all files in your referential integrity network to ensure that your related parent and dependent files are recovered to the same data level. Consider the example in Figure 3-10 on page 43. If you experience a data loss, you may need to restore all the files in the referential integrity network. When you apply the journal changes, include all the files involved in the referential integrity network: APYJRNCHG JRN(TEST/QSQJRN) FILE((*ALL)) CMTBDY(*YES) This way, you are protected from check pending conditions and from data inconsistencies. On the other hand, if you apply the journal entries only to ORDERH, order 77120 is deleted, but all the related products are still in the database. The system allows you to apply the journal changes with the following command: APYJRNCHG JRN(TEST/QSQJRN) FILE((TEST/ORDERH)) CMTBDY(*YES) The DETAIL_ORD constraint (between ORDERH and DETAIL) is found in the established or enabled state, with a check pending status of YES. To bring the two files back to the same data level, you may also apply the journal changes to the other files in the network. Consider our example, DETAIL and FEATURE: APYJRNCHG JRN(TEST/QSQJRN) FILE((TEST/DETAIL) (TEST/FEATURE)) CMTBDY(*YES) At this point, you have to re-enable the constraints so that the system can re-verify this relationship. 50 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 3-18 summarizes the database changes that can cause a check pending condition (marked with CP) when they are applied through an Apply Journal Changes (APYJRNCHG) command only to a parent or only to a dependent file and, similarly, when they are removed from some, but not all, of the network files. Figure 3-18 Check pending after APYJRNCHG Always apply or remove journal entries within commit boundaries, starting from the beginning of a logical unit of work down to the end of a logical unit of work, because the system guarantees the data consistency within the commit boundaries. Therefore, when you apply journal changes, set the CMTBDY value to *YES in the APYJRNCHG command. 3.7 Referential integrity application impact Before referential integrity is implemented, referential integrity validations must be performed by the application program. Now you can let DB2 UDB for iSeries ensure your data integrity through the referential integrity constraints. 
As mentioned earlier, using referential integrity may improve your application performance. The integrity checks are much more efficient and quicker when performed at the operating system level rather than by an application. However, once a programmer has defined referential constraints to the DBMS, the existing integrity checks should be removed from the application program. Otherwise, the application performance will degrade because the same checking is being performed twice (at the application level and at the system level). The application programmer must also consider the fact that, once the referential integrity constraints are defined to the DBMS, referential integrity enforcement is performed at all times on all interfaces. If you have applications that only need the data to be consistent at specific points in time or applications where the inconsistency is accepted because another program will correct it, DBMS referential constraints may prevent these applications from running smoothly. A programmer must verify that the DBMS-supported referential integrity matches the integrity and business rules currently enforced by their applications. 3.7.1 Referential integrity I/O messages Several new error messages have been defined to handle the errors occurring during referential integrity enforcement. Instead of coding integrity checks into your application programs, coding is now needed to handle the new referential integrity error conditions that can be raised by DB2 UDB for iSeries during referential constraint enforcement. APY RMV Insert CP - - Update CP CP Delete - - CP On dependent files APY RMV Insert - - CP Update CP CP Delete CP - - On parent files Chapter 3. Referential integrity 51 Notify messages There are three new notify messages for referential integrity errors:  CPF502D: Referential constraint violation member This message is issued when the user or the application tries to insert or update a foreign key, and a matching parent key value does not exist.  CPF502E: Referential constraints could not be validated for member This message is issued when the system cannot validate a referential constraint because of a record or a file lock.  CPF503A: Referential constraint violation on member This message is issued when the delete rule is NOACTION or RESTRICT and the user or the application tries to delete or update a parent key having matching foreign key values. These messages have a severity level of 30, and the default reply is “Cancel”. Escape messages There are two new escape messages for referential integrity errors:  CPF523B: Referential constraint error processing member This message is issued when the system cannot enforce a referential constraint.  CPF523C: Referential constraints journal error This message is issued when the system cannot enforce a referential constraint because the corresponding parent and dependent files are not journaled, or they are not journaled to the same journal. Both messages have a severity level of 30 and fall into the range of escape messages that are unrecoverable. 3.7.2 Handling referential integrity messages in applications To handle these messages, new file status codes have been provided for ILE languages. In the original program model (OPM) environment, any message due to errors in referential integrity enforcement maps to the existing I/O error status codes “01299” for RPG/400 and “90” for COBOL/400. Referential integrity messages in ILE RPG programs You can check the new status “01222” if you want to handle the CPF502E message. 
There is also a corresponding inquiry message RNQ1222 and a corresponding escape message RNX1222. Both of them have severity level 99 and the following text: Unable to allocate a record in file &7 due to referential constraint error (R C G D F). Status “01022” handles CPF502D and CPF503A. There is also a corresponding inquiry message RNQ1022 and a corresponding escape message RNX1022. Both of them have severity level 99 and the following text: Referential constraint error on file &7. The existing status code “01299” and the corresponding inquiry message RNQ1299 and escape message RNX1299 are used to handle escape messages CPF523C and CPF523B. 52 Advanced Functions and Administration on DB2 Universal Database for iSeries Referential integrity messages in ILE COBOL programs Status “9R” handles all the notify messages previously listed for referential integrity exceptions. Both escape messages are handled by the status code “90” set for the exceptions in the CPF5200 range. Referential integrity messages in ILE C programs ILE/C maps these messages to the existing error number values. SQLCODE values mapping referential integrity messages The SQLCODE values are:  SQLCODE 530 handles the notify message CPF502D.  SQLCODE 531 indicates that you are updating a parent key with matching dependent records.  SQLCODE 532 indicates that you are deleting a parent key with matching dependent records. See Appendix B, “Referential integrity: Error handling example” on page 337, for a coding example about error handling when using referential integrity. 3.8 Referential integrity constraint management This section describes:  Constraint states  Check pending condition  Commands you can use to manage referential integrity constraints  Save and restore  How to obtain information about referential integrity constraints 3.8.1 Constraint states A referential constraint can be in one of the following states:  DEFINED state: The constraint definition exists at the file level, but the constraint is not enforced. Defined constraints are purely by definition and not by function. The file members do not have to exist for the constraint to be defined. – Defined/enabled: A constraint that remains enabled when it is moved to the established state – Defined/disabled: A constraint that remains disabled when it is moved to the established state  ESTABLISHED state: A referential constraint is established when the foreign key attributes match those of the parent key and both files contain a member. The constraint has now been formally created in the DBMS. In this state, the constraint can be: – Established/enabled: DB2 UDB for iSeries enforces referential integrity for this constraint. – Established/disabled: DB2 UDB for iSeries does not enforce referential integrity for a constraint in this state. However, the access paths associated with the constraint are still maintained. See Database Programming, SC41-5701, for a complete discussion of constraint states. Chapter 3. Referential integrity 53 3.8.2 Check pending A referential constraint is placed in check pending status if the DBMS determines that mismatches may exist between the parent and foreign keys. The check pending status only applies to referential constraints in the established/enabled state. 
There are several operations that can cause a check pending condition:  Adding referential constraints to existing files with invalid data  Abnormal system failures  Save/restore operations  Apply/remove journal changes When a referential constraint relationship has been marked as check pending, the associated parent and dependent files can be opened, but the system imposes some restrictions on the I/O operations to those files:  Only read and insert operations are allowed on the parent file.  No I/O operations are allowed on the dependent file. The system imposes these restrictions to ensure that applications and users are not accessing and changing records that are possibly inconsistent and, therefore, violating referential integrity. To move a constraint relationship out of check pending, you must use disable (CHGPFCST) to disable the constraint that allows any I/O operations to be performed on the parent and dependent file. You can then correct your parent and foreign key values so that they again meet referential integrity. Once the data corrections are completed, you can enable the constraint that causes DB2 UDB for iSeries to process and verify that every non-null foreign key value is valid. If this verification finds mismatches, the relationship is again marked as check pending and the process repeats itself. The check pending status of a file can be determined with the Work with Physical File Constraints (WRKPFCST) command (refer to Figure 3-20 on page 57) and the Display Physical File Description (DSPFD) command (refer to Figure 3-23 on page 62). 3.8.3 Constraint commands The commands provided to manage referential integrity constraints are:  Change Physical File Constraint (CHGPFCST)  Display Check Pending Constraint (DSPCPCST)  Work with Physical File Constraints (WRKPFCST)  Edit Check Pending Constraint (EDTCPCST)  Remove Physical File Constraint (RMVPFCST) CHGPFCST command The Change Physical File Constraint (CHGPFCST) command provides a way to:  Enable a referential constraint: Enable causes the system to verify the data integrity of the specified constraint (for example, every non-null foreign key value has a matching parent key). If the verification is successful, the referential constraint is enforced by DB2 UDB for iSeries. Remember that this enable process may not be a short-running operation when the associated files contain a large number of records. 54 Advanced Functions and Administration on DB2 Universal Database for iSeries  Disable a referential constraint: Disabling a constraint essentially turns off referential integrity for that constraint relationship. Although the constraint is still defined in the DBMS, the DBMS no longer enforces referential integrity for the disabled constraint relationship. Any I/O operation is allowed on the parent and dependent file, even if that operation violates referential integrity. As mentioned in the check pending section, the disable option is used with check pending constraints so that users can clean up their parent and foreign key data before having the system re-verify the constraint. Disabling a constraint can allow faster file I/O operations in performance-critical situations. However, you must consider the trade-off in this situation. While the constraint is disabled, the data can violate referential integrity, and you are unaware of the violation until the constraint is re-enabled. In addition, you must wait for the system to re-verify all of your foreign key values on the re-enable. 
To limit your data integrity exposure when a constraint is disabled, first use the Allocate Object (ALCOBJ) command to exclusively lock the files associated with the constraint to be disabled. This allocation prevents other users from changing the file data while the constraint is disabled. Then, use the De-allocate Object (DLCOBJ) command to free the files once the referential constraint has been re-enabled. Before enabling or disabling a constraint, the system obtains:  Exclusive allow-read locks on the parent file, member, and access paths  Exclusive no-read locks on the dependent file, member, and access paths These locks are released at the end of the CHGPFCST command. DSPCPCST command The Display Check Pending Status (DSPCPCST) command can be used on referential constraints that are in a disabled state to display which records in the dependent file do not have matching parent key values, thereby causing the check pending condition. The following example shows how the DSCPCPCST output can be used to fix a constraint that is currently marked as check pending. In the Order Entry database, we define a referential constraint ORDERHDR_CNBR (Parent Key and foreign key is the Customer_Number field in both files) between existing CUSTOMER and ORDERHDR files having the contents listed in Table 3-1 and Table 3-2. Table 3-1 CUSTOMER table CUSTOMER_NUMBER CUSTOMER_NAME ... 10509 Benson Mary 15006 Smith Steven ... 14030 Peterson Robert ... 13007 Robinson Richard ... 21603 White Paul ... Chapter 3. Referential integrity 55 Table 3-2 ORDERHDR table The constraint is marked as check pending because ORDERHDR contains records related to Customer 12312, which does not exist in the CUSTOMER file. In this case, follow these steps: 1. Lock up your referential integrity network with the ALCOBJ command while you are correcting your parent and foreign key data: ALCOBJ OBJ((CUSTOMER *FILE *EXCL *FIRST) (ORDERHDR *FILE *EXCL *FIRST)) 2. If the constraint is not yet disabled, disable the constraint so that the DSPCPCST command can read the dependent file: CHGPFCST FILE(ORDERHDR) CST(ORDERHDR_CNBR) STATE(*DISABLED) 3. Display which records in ORDERHDR have a customer number that does not exist in the CUSTOMER file: DSPCPCST FILE(ORDENTL/ORDERHDR) CST(ORDERHDR_CNBR) The output of this command is shown in Figure 3-19. Figure 3-19 DPSCPCST output 4. According to the DSPCPCST output, clean up your foreign and parent keys value. In this case, it appears that Customer 12312 needs to be added to the CUSTOMER file. 5. Once the data is corrected, enable the constraint so that the DBMS can verify that your parent and foreign key data is now in sync: CHGPFCST FILE(ORDERHDR) CST(ORDERHDR_CNBR) STATE(*ENABLED) ORDER_NUMBER ... CUSTOMER_NUMBER ORDER_DATE 00010 ... 10509 05/08/01 00020 ... 10509 05/09/01 02020 ... 12312 02/03/01 02021 ... 12312 04/13/01 02022 ... 12312 04/25/01 Display Report Width . . .: 142 Column . .: 1 Control . . . . Line ....+....1....+....2....+....3....+....4....+ .......... ORDER_NUMBER CUSTOMER_NUMBER ORDER_DATE ------------ --------------- ---------- 000001 02020 12312 02/03/2001 000002 02021 12312 04/13/2001 .... 000003 02022 12312 04/25/2001 ****** * * * * * E N D O F D A T A * * * * * 56 Advanced Functions and Administration on DB2 Universal Database for iSeries 6. 
Now that the constraint has been successfully enabled, release the locks on your referential integrity network with the DLCOBJ command: DLCOBJ OBJ((CUSTOMER *FILE *EXCL *FIRST) (ORDERHDR *FILE *EXCL *FIRST)) 3.8.4 Removing a constraint This section shows how to remove physical file (or table) constraints. Both the native and SQL interfaces can be used to remove file constraints:  The native interface provides the Remove Physical File Constraint (RMVPFCST) command.  The SQL interface allows you to remove an existing constraint from a file through the DROP clause of the ALTER TABLE statement. The SQL interface supports the removal of one constraint at a time. The following statement removes the customer_key constraint from the CUSTOMER file: ALTER TABLE mylib/CUSTOMER DROP CONSTRAINT customer_key The following statements remove (respectively) the primary key, the constraint_name unique constraint, and the constraint_name referential constraint from the CUSTOMER file: ALTER TABLE mylib/CUSTOMER DROP PRIMARY KEY ALTER TABLE mylib/CUSTOMER DROP UNIQUE constraint_name ALTER TABLE mylib/CUSTOMER DROP FOREIGN KEY constraint_name In contrast, the native interface allows you to remove more than one constraint at a time. In addition, you can sub-select the physical file constraints you want to remove by specifying the option that only referential constraints, marked as check pending, should be removed. Let's examine the impact of the RMVPFCST command according to the different values of its parameters. The following statement removes the constraint_name constraint from CUSTOMER file: RMVPFCST FILE(mylib/CUSTOMER) CST(constraint_name) TYPE(constraint_type) If CST(*CHKPND) is specified, all the referential constraints in the check pending condition are removed, regardless of the value of the TYPE parameter. The following statement removes all the constraint_type constraints from the CUSTOMER file in mylib: RMVPFCST FILE(mylib/CUSTOMER) CST(*ALL) TYPE(constraint_type) In this case, the system removes the unique or referential constraints following the sequence in which they have been created: RMVPFCST FILE(mylib/CUSTOMER) CST(*ALL) Chapter 3. Referential integrity 57 The RMVPFCST statement removes all the constraints defined over the CUSTOMER file in mylib, including the damaged constraints since the TYPE default value is *ALL. In this case, the system removes the primary key constraint first, then all of the unique constraints (in their creation sequence), and finally, all of the referential constraints (in their creation sequence). WRKPFCST command The Work with Physical File Constraints (WRKPFCST) command is similar to the other Control Language Work commands. With this command, you can gain access to most of the constraint operations from a single display. The WRKPFCST command lets you see one or all the physical file constraints defined over one or more files, depending on the values you set for the WRKPFCST parameters. Figure 3-20 displays the sample output from the WRKPFCST command. Figure 3-20 Work with Physical File Constraints display On this display, you can:  Change the state of constraints (option 2): This option invokes the CHGPFCST command (see “CHGPFCST command” on page 53).  Remove a constraint (option 4): This option invokes the RMVPFCST command (see 3.8.4, “Removing a constraint” on page 56, for more details).  Display constraints in check pending status (option 6): This option executes the DSPCPCST command (see “DSPCPCST command” on page 54). 
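As a quick reference, a minimal sketch of invoking the command for a single Order Entry file used in this chapter (defaults are taken for the other parameters):
WRKPFCST FILE(ORDENTL/ORDERHDR)
Options 2, 4, and 6 can then be typed directly next to the constraints listed on the resulting display.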
The state column lists the status of the referential constraints: defined or established and enabled or disabled. The check pending status column displays which constraints are currently in check pending. Disabled constraints are always shown as being in check pending condition although check pending does not apply to disabled constraints. Work with Physical File Constraints Type options, press Enter. 2=Change 4=Remove 6=Display records in check pending Check Opt Constraint File Library Type State Pending CUSTOMER_K > CUSTOMER ORDENTL *PRIKEY ORDDTL_KEY ORDERDTL ORDENTL *PRIKEY ORDDTL_HOR > ORDERDTL ORDENTL *REFCST EST/ENB NO ORDERHDR_K > ORDERHDR ORDENTL *PRIKEY ORDERHDR_C > ORDERHDR ORDENTL *REFCST EST/ENB YES SALESREP_K > SALESCUS ORDENTL *PRIKEY SALESREP_C > SALESCUS ORDENTL *REFCST EST/ENB NO STOCK_KEY STOCK ORDENTR *PRIKEY STOCK_SNBR STOCK ORDENTR *REFCST EST/ENB NO SUPPLIER_K > SUPPLIER ORDENTR *PRIKEY Parameters for options 2, 4, 6 or command Bottom ===> F3=Exit F4=Prompt F5=Refresh F12=Cancel F15=Sort by F16=Repeat position to F17=Position to F22=Display constraint name 58 Advanced Functions and Administration on DB2 Universal Database for iSeries EDTCPCST command The Edit Check Pending Constraints (EDTCPCST) command allows you to manage the verification of referential constraints that have been marked as check pending. The system displays the constraints marked as check pending and the estimated time it takes the system to verify the constraint once the parent and foreign key data have been corrected. From our previous example (Figure 3-20), the corresponding EDTCPCST display output is shown in Figure 3-21 with the ORDERHDR_CNBR constraint that was placed in check pending status. Figure 3-21 Edit Check Pending Constraints display From this display, you can set a sequence for the constraints verification. You can also delay the verify process to a later time, specifying *HLD on the sequence field. DB2 UDB for iSeries starts verifying the constraints right after you specify the sequence. The elapsed time since the beginning of the process is also displayed. During this process, the constraint status is set to RUN. Other constraints waiting for verification are marked with READY. Verifying at IPL time The Edit Check Pending Constraints display (Figure 3-22) is shown during a manual mode IPL if there are constraints in check pending condition. Edit Check Pending Constraints SYSTEM03 05/14/01 18:39:36 Type sequence, press Enter. Sequence: 1-99, *HLD ----------Constraints----------- Verify Elapsed Seq Status Cst File Library Time Time 1 RUN STOCK > STOCK ORDENTR 00:10:00 00:02:40 2 READY SALES > SALESCUS ORDENTL 00:01:48 00:00:00 *HLD CHKPND ORDER > ORDERHDR ORDENTL 00:00:01 00:00:00 Bottom F3=Exit F5=Refresh F12=Cancel F13=Repeat all F15=Sort by F16=Repeat position to F17=Position to F22=Display constraint name Chapter 3. Referential integrity 59 Figure 3-22 Editing Check Pending Constraint display at IPL time On this display, you have three alternatives:  If you want the system to suspend the IPL and verify a constraint at this time, for that constraint, you have to type a Sequence value less than or equal to the IPL threshold number.  If you need the system to verify a constraint after the IPL, you have to use a sequence value greater than the threshold. The IPL then continues, and at the IPL completion, the system automatically starts verifying that constraint.  
If you want to handle the check pending condition by yourself during the normal activity, hold the constraint verification by leaving the Sequence value set to *HLD. If several constraints must be verified at the same time, either during IPL or at the end, you can specify an ordering sequence for them by inserting ordered values into the Sequence field. 3.8.5 Save and restore considerations As mentioned in 3.3.3, “Referential integrity and access paths” on page 25, when a set of database files is saved, all the physical file constraints and associated access paths are saved as well. At restore time, the system attempts to re-establish the constraints for the user. During the restore operation, the system determines whether the parent and dependent files associated with the referential constraints are at the same data level (in other words, at the same integrity level according to their constraints). If the system determines that the related files and constraints are not at the same level, the constraint relationship is marked as check pending. The system does not spend time verifying every foreign key value during the restore. It only checks the data level of the associated files. This data level verification is much quicker than the DBMS verification of every foreign key value and still preserves referential integrity. Edit Check Pending Constraints SYSTEM03 05/24/01 11:14:25 IPL threshold . . . . . . . . . . . . . 50 0-99 Type sequence, press Enter. Sequence: 1-99, *HLD ----------Constraints----------- Verify Elapsed Seq Status Cst File Library Time Time *HLD CHKPND ORDER > ORDERHDR ORDENTL 00:45:30 00:05:15 *HLD CHKPND SALES > SALESCUS ORDENTL 00:01:43 00:00:36 *HLD CHKPND STOCK > STOCK ORDENTR 00:00:25 00:00:05 Bottom F5=Refresh F13=Repeat all F15=Sort by F16=Repeat position to F17=Position to F22=Display constraint name 60 Advanced Functions and Administration on DB2 Universal Database for iSeries Other DBMS automatically either place the constraints in check pending or verify every foreign key value when you load backup copies of your database files onto the system. DB2 UDB for iSeries gives you the benefit of the doubt when restoring database backups. For example, you always save both your parent and dependent files every Monday night. A system failure on Thursday necessitates that you load the backup tape copies of your dependent and parent file. DB2 UDB for iSeries then quickly verifies that the dependent and parent files being restored are at the same data level (which is true since they were backed up together) and leaves the referential integrity constraint in a valid state. This allows you to move your backup onto the system as quickly as possible while still guaranteeing referential integrity. Here’s an example of DB2 UDB for iSeries protecting your data integrity. You restore a version of the dependent file without restoring the corresponding version of the parent file. This only leads to a check pending condition when some parent records have changed since the save operation took place, which now causes your parent and newly restored dependent files to be at different data levels. For this example, we assume that some parent records have changed since the save operation. The associated referential constraint is marked as check pending since data inconsistencies may exist due to the different data levels detected by the DBMS. You are responsible for cleaning up this check pending situation before users and applications can fully access these files. 
To avoid check pending and the associated recovery work, always save your referential integrity network in the same save request. This keeps the associated parent and dependent files at the same level so that you can restore the network with one request. When your referential integrity network is split across different libraries, you cannot save and restore the network with a single request. In this case, you need to prevent other jobs from changing your file data levels during your multiple request save or restore operation by using the Allocate Object (ALCOBJ) command to lock up your referential integrity network. Here's an example of the steps to follow in this situation:  When saving your referential integrity network: a. Allocate the files you have to save with the ALCOBJ command, and set Lock State to *EXCL. b. Save your network. c. Release the locks on the files by using the De-allocate Object (DLCOBJ) command.  When restoring your referential integrity network: a. Allocate the libraries your files are restored into by using the ALCOBJ command and setting Lock State to *EXCLRD. b. Restore your files in any sequence. c. Release the locks previously established on the libraries using the DLCOBJ command. When a dependent file is restored and the parent file is still missing, the constraint is left in a defined/enabled state. As soon as the parent file is restored, the constraint is established and the data levels immediately are verified. Therefore, the parent and dependent files can be restored in any sequence while still avoiding check pending. When you restore files belonging to a referential integrity network, the system can determine whether the files are at different data levels for every single constraint. Restoring files at different data levels may result in a mix of check pending and non-check pending constraints. Only the constraints potentially affected by the database changes that caused the data level mismatch are put into check pending. Chapter 3. Referential integrity 61 If you restore a database file over an existing one, the existing constraints are preserved. For example, if you remove some constraints from the file currently on the system, the additional constraints saved on the media are not restored. 3.8.6 Restore and journal apply: An example Consider the example described in 3.5.3, “A CASCADE example” on page 39. You may want to save this referential integrity network. Since all the files are in the same library, issue a single save request: SAVOBJ OBJ(ORDERH DETAIL FEATURE) LIB(TEST) DEV(device) OBJTYPE(*FILE) Consider the example where a system failure has caused you to lose the DETAIL file, and you now need to recover your referential integrity network. Follow these steps: 1. Allocate the involved files to avoid changes by other jobs. Use the ALCOBJ command with Lock Type set to *EXCL to prevent other users from reading inconsistent data. 2. Restore all of the referential integrity network: RSTOBJ OBJ(ORDERH DETAIL FEATURE) SAVLIB(TEST) DEV(device) OBJTYPE(*FILE) 3. Apply journal changes to all of the involved files: APYJRNCHG JRN(TEST/QSQJRN) FILE((TEST/ORDERH) (TEST/DETAIL) (TEST/FEATURE)) CMTBDY(*YES) 4. De-allocate the ORDERH, DETAIL, and FEATURE files. For details on journaling, commitment control, and applying journal entries, see Backup and Recovery Guide - Advanced, SC41-3305. 
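As a sketch of steps 1 and 4 of the preceding procedure (assuming, as in the save example, that all three files reside in the TEST library), the allocation and de-allocation could look like this:
ALCOBJ OBJ((TEST/ORDERH *FILE *EXCL) (TEST/DETAIL *FILE *EXCL) (TEST/FEATURE *FILE *EXCL))
/* steps 2 and 3: RSTOBJ and APYJRNCHG, as shown above */
DLCOBJ OBJ((TEST/ORDERH *FILE *EXCL) (TEST/DETAIL *FILE *EXCL) (TEST/FEATURE *FILE *EXCL))
The lock state named on DLCOBJ must match the one acquired by ALCOBJ so that the correct locks are released.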
3.8.7 Displaying constraint information You can display or output the constraints and their related attributes and states for a file in the following ways:  Run the Display Physical File Description (DSPFD) command  Run the Display Database Relations (DSPDBR) command  Query the system catalog tables DSPFD and DSPDBR commands The DSPFD command also provides a complete description of all the constraints defined for a file. You can select this specific information by specifying: DSPFD FILE(ORDENTL/ORDERHDR) TYPE(*CST) This command shows which constraints are defined for the ORDERHDR file and their description as shown in Figure 3-23. 62 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 3-23 Physical file constraints from DSPFD In the Constraint Description section (highlighted in bold) in Figure 3-23, all of the parameter values set through the ADDPFCST command or ALTER TABLE/CREATE TABLE statements for each constraint are listed. The DSPFD command issued for a given file shows a referential constraint definition only for the parent file. To determine which referential constraints refer to this file as a parent file, you must use the DSPDBR command. This command lists these constraints in the Dependent Files section, where some new information has been added to differentiate among referential constraints, logical files, SQL indexes, or SQL views. Figure 3-24 shows this information for the ORDERHDR file. Display Spooled File File . . . . . : QPDSPFD Page/Line 1/1 Control . . . . . Columns 1 - 78 Find . . . . . . *...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+... 5/16/01 Display File Description DSPFD Command Input File . . . . . . . . . . . . . . . . . . . : FILE ORDERHDR Library . . . . . . . . . . . . . . . . . : ORDENTL Type of information . . . . . . . . . . . . : TYPE File attributes . . . . . . . . . . . . . . : FILEATR *ALL System . . . . . . . . . . . . . . . . . . : SYSTEM *LCL File Description Header File . . . . . . . . . . . . . . . . . . . : FILE ORDERHDR Library . . . . . . . . . . . . . . . . . . : ORDENTL Type of file . . . . . . . . . . . . . . . : Physical File type . . . . . . . . . . . . . . . . . : FILETYPE *DATA Auxiliary storage pool ID . . . . . . . . . : 01 Constraint Description Primary Key Constraint Constraint . . . . . . . . . . . . . . : CST ORDERHKEY Type . . . . . . . . . . . . . . . . : TYPE *PRIMARY Key . . . . . . . . . . . . . . . . . : KEY ORHNBR Number of fields in key . . . . . . . : 1 Key length . . . . . . . . . . . . . : 5 Referential Constraint Constraint . . . . . . . . . . . . . . : CST ORDERHDRCNBR Type . . . . . . . . . . . . . . . . : TYPE *REFCST Check pending . . . . . . . . . . . . : NO Constraint state . . . . . . . . . . : STATE ESTABLISHED *ENABLED Parent File Description File . . . . . . . . . . . . . . . . : PRNFILE CUSTOMER Library . . . . . . . . . . . . . . : LIB ORDENTL Parent key . . . . . . . . . . . . . : PRNKEY CUSNBR Foreign key . . . . . . . . . . . . . . : FRNKEY CUSNBR Delete rule . . . . . . . . . . . . . . : DLTRULE *RESTRICT Update rule . . . . . . . . . . . . . . : UPDRULE *RESTRICT Chapter 3. Referential integrity 63 Figure 3-24 Referential constraints from DSPDBR on the parent file As you can see by comparing the Constraint Description line (in bold) from Figure 3-23 and the last line in bold in Figure 3-24, the DSPFD and DSPDBR commands provide complete information about the constraints involving the physical files in question. 
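The DSPFD invocation for the constraint description was shown above; for completeness, the DSPDBR invocation that lists the dependent files and referential constraints for the same parent file, as in Figure 3-24, would be, for example:
DSPDBR FILE(ORDENTL/ORDERHDR)
Add OUTPUT(*PRINT) to produce a spooled report instead of an interactive display.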
Catalog inquiry
DB2 UDB for iSeries provides a system-wide catalog. The SQL catalog is a set of views in the QSYS2 library built over the cross-reference files where DB2 UDB for iSeries maintains all information related to the structure and the contents of all database files. The catalog also keeps information related to the physical file constraints. You can retrieve any information you need about the constraints defined over your database files using the system views provided in the QSYS2 library:
• SYSCST: General information about constraints. The underlying catalog tables are QADBFCST and QADBXREF.
• SYSCSTCOL: Information about the columns referenced in a constraint. This is a view defined over the QADBCCST and QADBIFLD catalog tables.
• SYSCSTDEP: Information about the constraint dependencies on tables. The catalog tables involved are QADBFCST and QADBXREF.
• SYSKEYCST: Information about the primary, unique, and foreign keys. The underlying catalog tables are QADBCCST, QADBIFLD, and QADBFCST.
• SYSREFCST: Information about referential constraints from the cross-reference file table QADBFCST.
 Display Spooled File
 File . . . . . : QPDSPDBR     Page/Line 1/1     Control . . . . .     Columns 1 - 78     Find . . . . . .
 *...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+...
 5/16/01                    Display Data Base Relations
 DSPDBR Command Input
   File . . . . . . . . . . . . . . . . . . . : FILE       ORDERHDR
   Library . . . . . . . . . . . . . . . . .  :            ORDENTL
   Member . . . . . . . . . . . . . . . . . . : MBR        *NONE
   Record format . . . . . . . . . . . . . . .: RCDFMT     *NONE
   Output . . . . . . . . . . . . . . . . . . : OUTPUT     *
 Specifications
   Type of file . . . . . . . . . . . . . . . : Physical
   File . . . . . . . . . . . . . . . . . . . : ORDERHDR
   Library . . . . . . . . . . . . . . . . .  : ORDENTL
   Member . . . . . . . . . . . . . . . . .   : *NONE
   Record format . . . . . . . . . . . . . .  : *NONE
   Number of dependent files . . . . . . . .  :
 Files Dependent On Specified File
 Dependent File   Library    Dependency   JREF   Constraint
 SALE             ORDENTL    Data
 TOTALSALE        ORDENTL    Data
 YEARSALE         ORDENTL    Data
 ORDERDTL         ORDENTL    Constraint          ORDERHDRNUM
                                                                Bottom
 F3=Exit F12=Cancel F19=Left F20=Right F24=More keys
Consider this example: SELECT * FROM SYSCST WHERE TABLE_NAME = 'ORDERDTL' AND TABLE_SCHEMA = 'ORDENTL' This query returns information at the constraint level about the constraints defined over the ORDERDTL file in the ORDENTL library. The most significant details are shown in Figure 3-25. Figure 3-25 Constraint information To see which fields constitute the key of the constraint, you have to query SYSCSTCOL. Use the previous example: SELECT * FROM SYSCSTCOL WHERE TABLE_NAME = 'ORDERDTL' AND TABLE_SCHEMA = 'ORDENTL' ORDER BY CONSTRAINT_NAME This query returns the names of the fields forming the various constraint keys of the ORDERDTL file. See Figure 3-26. Figure 3-26 Constraint column information The catalog view SYSKEYCST keeps more detailed information regarding the key fields in a physical file constraint, such as the ordinal position of the field within the key and its position in the table layout: SELECT * FROM SYSKEYCST WHERE CONSTRAINT_SCHEMA = 'ORDENTL' AND CONSTRAINT_NAME = 'ORDERDTL_KEYS' AND TABLE_SCHEMA = 'ORDENTL' AND TABLE_NAME = 'ORDERDTL' This statement returns the information shown in Figure 3-27.
Figure 3-27 Detailed constraint key information For a referential constraint, detailed information can be selected from SYSREFCST (in the ORDERDTL case, for example): Chapter 3. Referential integrity 65 SELECT * FROM SYSREFCST WHERE CONSTRAINT_NAME = 'ORDERHDRNUM' AND CONSTRAINT_SCHEMA = 'ORDENTL' This statement returns the information shown in Figure 3-28. Figure 3-28 Referential constraint information To determine the complete definition of a referential constraint through catalog views, you need to perform a join:  From SYSREFCST, you can retrieve the name of the unique or primary key constraint identifying the parent key.  By using the name of the constraint, SYSCST provides the name and library of the corresponding parent file and the type of the constraint itself (primary key or unique constraint).  SYSCSTCOL gives the parent key (unique or primary key) fields. These actions can be expressed through the following query: SELECT C.UNIQUE_CONSTRAINT_SCHEMA , C.UNIQUE_CONSTRAINT_NAME , A.CONSTRAINT_TYPE , A.TABLE_SCHEMA , A.TABLE_NAME , C.UPDATE_RULE , C.DELETE_RULE , B.COLUMN_NAME FROM SYSCST A, SYSCSTCOL B, SYSREFCST C WHERE C.CONSTRAINT_SCHEMA = 'ORDENTL' AND C.CONSTRAINT_NAME = 'ORDERHDRNUM' AND B.CONSTRAINT_SCHEMA = C.UNIQUE_CONSTRAINT_SCHEMA AND B.CONSTRAINT_NAME = C.UNIQUE_CONSTRAINT_NAME AND A.CONSTRAINT_SCHEMA = C.UNIQUE_CONSTRAINT_SCHEMA AND A.CONSTRAINT_NAME = C.UNIQUE_CONSTRAINT_NAME GROUP BY C.UNIQUE_CONSTRAINT_SCHEMA , C.UNIQUE_CONSTRAINT_NAME , A.CONSTRAINT_TYPE , A.TABLE_SCHEMA , A.TABLE_NAME , C.UPDATE_RULE , C.DELETE_RULE , B.COLUMN_NAME The output of the previous query consists of as many rows as the parent key fields. In the example of the ORDDTL_HORD constraint (see 3.4.3, “Another example: Order Entry scenario” on page 32), the query returns the output shown in Figure 3-29. Figure 3-29 Parent key information 66 Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 67 Chapter 4. Check constraint This chapter explains:  DB2 UDB for iSeries check constraints  Defining a check constraint  General considerations  Application impacts of check constraint  Check constraint management  Tips and techniques 4 68 Advanced Functions and Administration on DB2 Universal Database for iSeries 4.1 Introduction One of the main contributions of the SQL-92 standard is the specification of a rich collection of integrity constraints. The constraints in SQL-92 can be classified into three categories:  Domain or table constraints  Referential integrity constraints  General assertions Each of these constraints are explained in the following sections. 4.1.1 Domain or table constraints Table or domain constraints in SQL-92 are used to enforce restrictions on the data allowed in particular columns of particular tables. Any column in a table may be declared as NOT NULL. This indicates that null values are not permissible for that column. In addition, a set of one or more columns may be declared as UNIQUE. This indicates that two rows may not have the same values for certain columns, which are those that form the key for the table. Each table also can have, at most, one designated PRIMARY KEY consisting of a set of one or more columns. Primary keys must be both unique and not null. The permissible values of a column may also be restricted by means of a CHECK constraint. A CHECK clause specifies a condition that involves the column whose values are restricted. 
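To illustrate how these table constraints combine, here is a minimal SQL sketch (the EVALUATION table and its columns are hypothetical, loosely based on the employee evaluation example discussed later in this chapter):
CREATE TABLE ORDAPPLIB/EVALUATION
   (EMPNBR    CHAR(6)      NOT NULL,
    EVALYEAR  DECIMAL(4,0) NOT NULL,
    RATING    INTEGER      CHECK(RATING IN (2, 3, 4, 5)),
    PRIMARY KEY (EMPNBR, EVALYEAR))
EMPNBR and EVALYEAR must be both unique (as a pair) and not null because they form the primary key, while the CHECK clause restricts RATING to the enumerated values.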
Semantically, a CHECK constraint is valid if the condition evaluates to true or unknown for every row in the table. 4.1.2 Referential integrity constraints A referential integrity constraint involves two tables called the parent table and the dependent table. Intuitively, every row in the referencing table must be a “child” of some row in the referenced table. Referential integrity disallows “orphans” that are created by insertions (of child rows), updates (of child or parent rows), or deletions (of parent rows). Referential integrity can be violated by insertions or updates to the referencing table or by updates or deletions to the referenced table. 4.1.3 Assertions Assertions in SQL-92 constraints provide the ability for expressing general constraints that may involve multiple tables. As in CHECK constraints, the condition that is evaluated can be an arbitrary SQL predicate. The assertion is satisfied if the condition evaluates to true or unknown. This chapter describes how DB2 UDB for iSeries supports the CHECK constraint that is part of the table constraints of SQL-92. Note: New function was added in V4R2M0 to allow a primary key constraint to be defined where one or more columns in the key allow NULL values. When this condition is detected, a check constraint is implicitly added to the file to ensure that the column will not contain NULL values. This means that this check constraint will prevent any NULL values from being inserted into columns defined for the primary key. Chapter 4. Check constraint 69 4.2 DB2 UDB for iSeries check constraints Check constraints in DB2 UDB for iSeries let you ensure that users authorized to change a column's value use only values that are valid for that column. It ensures that the value being entered in a column of a table belongs to the set of valid values defined for that field. For example, you may specify that the “legal” values for an employee evaluation field defined as an integer might be 2, 3, 4, or 5. Without the check constraint, users can enter any integer value into such a field. To ensure that the actual value entered is 2, 3, 4, or 5, you must use a trigger or code the rule in your application program. A check constraint increases the data integrity because the constraints are validated against every interface (RPG, Data File Utility, ODBC client programs, Interactive SQL, etc.) that updates or inserts database records. The operating system enforces the rules, not the application program. For this reason, there is no way to bypass any control, and the integrity is assured. The programmer no longer has to add this verification code to every application program that updates or inserts a database record. A check constraint is associated with a file and contains check conditions that are enforced against every row in the file. Whenever a row is inserted or updated, the database manager evaluates the check condition against the new or changed row to guarantee that all new field values are valid. If invalid values are found, the database manager rejects the insert or update operation. Here are some examples:  Range checking: The field value must be between 1 and 50  Domain or value checking: The field can be one of the following values: 1, 3, 5, 7, or 9  Field comparisons: total_sales < credit_limit Remember that the check constraint is valid if the condition evaluates to true or unknown. Some of the current alternatives to the check constraint are to:  Code the constraints in the application programs. 
This may give more flexibility in coding the business rules, but the rules are not enforced in all of the iSeries interfaces (for example, DFU or ODBC client programs).  Use the DDS keywords (COMP, RANGE, VALUES) in the display and logical files. The problem with this approach is that the rules are only enforced in green-screen applications.  Use before triggers. In this case, the rule is enforced on all interfaces, but it is not a part of a database table. It is not a declarative approach. There are some obvious advantages for using the CHECK constraint option in the iSeries server:  There is much less coding to do if the business rules are defined only once in the database.  The administration is much easier because the business rules become part of the database.  The data integrity of the database is improved because the rules are enforced on all interfaces.  Since the database manager is performing the validation, the enforcement is more efficient than the application level enforcement. 70 Advanced Functions and Administration on DB2 Universal Database for iSeries 4.3 Defining a check constraint This section discusses the interfaces and commands that you can use to add a check constraint. We refer to the native interface and the SQL interface. Let's start with the native CL command ADDPFCST. In the following example, we define a check constraint in the CUSTOMER file, where the customer_total (CUSTOT) cannot be greater than the customer_credit_limit (CUSCRD). Enter the ADDPFCST command, and press F4. The display shown in Figure 4-1 appears. Figure 4-1 Prompt for the ADDPFCST command Note that the type of constraint is *CHKCST. The name of the constraint must be unique in the library where it is being created. The display shown in Figure 4-2 prompts you for the check condition. Figure 4-2 Check condition for a check constraint The condition clause of a check constraint can be up to 2000 bytes long. Add PF Constraint (ADDPFCST) Type choices, press Enter. File . . . . . . . . . . . . . . > CUSTOMER Name Library . . . . . . . . . . . > ORDAPPLIB Name, *LIBL, *CURLIB Constraint type . . . . . . . . > *CHKCST *REFCST, *UNQCST, *PRIK Constraint name . . . . . . . . CUSCRD_LIMIT_CUSTOT F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys Add PF Constraint (ADDPFCST) Type choices, press Enter. Check constraint . . . . . . . . > 'CUSTOT <= CUSCRD' F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys Chapter 4. Check constraint 71 When you add a constraint to an existing file, the existing records must not violate the constraint condition. If the system finds records violating the constraint, a diagnostic message is issued for the first 25 rows that failed the check, and the constraint is set to the check pending condition. The display in Figure 4-3 shows the diagnostic message issued by the system. Figure 4-3 Detailed message for CPD32D3 You can use the Relative Record Number (RRN) scalar function to identify the record that is violating the constraint. This is accomplished by typing the following command in an Interactive SQL session: SELECT rrn(customer), cusnbr FROM customer Figure 4-4 shows the results of this query. In this case, the record with the customer number equal to 100 violates the constraint since its relative record number happens to be 1. Important: The condition clause of a check constraint is a restricted form of the search-condition of the WHERE and HAVING clauses of the SQL statements. 
You do not need the DB2 Query Manager and SQL Development Kit for iSeries product to define the condition through the native interface. Additional Message Information Message ID . . . . . . : CPD32D3 Severity . . . . . . . : 20 Message type . . . . . : Diagnostic Date sent . . . . . . : 10/16/01 Time sent . . . . . . : 11:20 Message . . . . : Field values are not valid for check constraint. Cause . . . . . : Check constraint CUSCRD_LIMIT_CUSTOT for file CUSTOMER in library ORDAPPLIB is in check pending. The constraint is in check pending because record 1 in the file has a field value that conflicts with the check constraint expression. If the record number for the file is 0, then the record either cannot be identified or does not apply to the check pending status. Recovery . . . : Use the CHGPFCST command for the file to disable the constraint. Use the DSPCPCST command for the file to display the records that are causing the constraint to be in check pending. Update the file to make sure each field value does not conflict with the check constraint expression. Press Enter to continue. F3=Exit F6=Print F9=Display message details F10=Display messages in job log F12=Cancel F21=Select assistance level 72 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 4-4 Query result Now let's see how to create a check constraint using the SQL interface. The SQL interface provides two ways to create a check constraint:  CREATE TABLE statement, which has the constraint clause  ALTER TABLE statement, which allows a table constraint to be added to an existing table with the ADD constraint option At creation time, you can define the check constraint with the SQL CREATE TABLE statement: CREATE TABLE ORDAPPLIB/CUSTOMER (CUSTOMER_NUMBER FOR COLUMN CUSNBR CHAR(5) NOT NULL WITH DEFAULT, CUSTOMER_NAME FOR COLUMN CUSNAM CHAR (20) NOT NULL WITH DEFAULT, ................ ................ ................ CONSTRAINT CUSCRD_LIMIT_CUSTOT CHECK (CUSTOT <= CUSCRD )) The CREATE TABLE statement also allows you to define a check constraint at the column level. The restriction is that a column-level constraint cannot reference any other column. Here is an example: CREATE TABLE ORDAPPLIB/EMPLOYEE ( EmpID# CHAR(2), EmpName CHAR(30), Salary INTEGER CONSTRAINT salarychk CHECK(Salary > 0 AND Salary < 10000), Bonus INTEGER CHECK(Bonus >=0), CONSTRAINT BonusSalaryChk CHECK (bonus<= salary)) Display Data Data width . . . . . . : Position to line . . . . . Shift to column . . . . . . ....+....1....+....2.... RRN ( CUSTOMER ) CUSNBR 1 00100 2 00001 3 00003 5 00009 6 00990 7 00008 8 00500 9 00007 11 55555 12 00400 13 00201 14 00101 15 00102 16 00103 17 00045 More F3=Exit F12=Cancel F19=Left F20=Right F21=Split Chapter 4. Check constraint 73 There is an advantage that SQL CREATE TABLE has over CRTPF when you define constraints. CREATE TABLE allows both the DB file object and associated constraints to be created on a single command. CRTPF is always a two-step process: 1. Use CRTPF to create the DB object. 2. Use ADDPFCST to create your constraints. You can also add check constraints to existing files. This is illustrated in the following example: ALTER TABLE ORDAPPLIB/CUSTOMER ADD CONSTRAINT CUSCRD_LIMIT_CUSTOT CHECK(CUSTOT <= CUSCRD) When you are adding a constraint to an existing file, the existing records must not violate the constraint. If the system finds records violating the constraint, an error is issued, similar to the one shown in Figure 4-5, and the constraint is not created. 
Figure 4-5 Additional message for SQL0544 In this case, you must correct the records that are violating the constraint before you try to create it again. After the constraint is successfully created, you can see its definition by using the DSPFD ORDAPPLIB/CUSTOMER command. Press the Page Down key to see the display shown in Figure 4-6. Additional Message Information Message ID . . . . . . : SQL0544 Severity . . . . . . . : 30 Message type . . . . . : Diagnostic Message . . : CHECK constraint CUSCRD_LIMIT_CUSTOT cannot be added Cause . . . : Existing data in the table violates the CHECK constraint rule in constraint CUSCRD_LIMIT_CUSTOT. The constraint cannot be added Recovery . : Change the data in the table so that it follows the constraint specified in CUSCRD_LIMIT_CUSTOT. Try the request again. Press Enter to continue. F3=Exit F6=Print F9=Display message details F10=Display messages in job log F12=Cancel F21=Select assistance level Important: The behavior of the ADDPFCST command is different than the ALTER TABLE SQL statement when they encounter violating records during the creation of the check constraint. In the first case, the constraint is added, while in the second case, it is not added. This is also true for the CREATE TABLE statement. The SQL interface complies with the SQL-92 standard, while the native interface follows the traditional OS/400 approach. 74 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 4-6 Spooled file for the DSPFD command You can also see constraints associated with the file by entering the WRKPFCST command. The display shown in Figure 4-7 appears. Figure 4-7 Result of the WRKPFCST command Let’s look at some examples: ALTER TABLE ORDAPPLIB/STOCK 1 ADD CONSTRAINT PRODUCT_PRICE_MIN CHECK(PRODUCT_PRICE > 0 AND PRODUCT_AVAILABLE_QTY >= 0) Display Spooled File File . . . . . : QPDSPFD Page/Line 2/21 Control . . . . . Columns 1 - 78 Find . . . . . . *...+....1....+....2....+....3....+....4....+....5....+....6....+....7.. Constraint Description Primary Key Constraint Constraint . . . . . . . . . . . . . . : CST QSYS_CUSTOMER_ Type . . . . . . . . . . . . . . . . : TYPE *PRIMARY Key . . . . . . . . . . . . . . . . . : KEY CUSNBR Number of fields in key . . . . . . . : 1 Key length . . . . . . . . . . . . . : 5 Check Constraint Constraint . . . . . . . . . . . . . . : CST CUSTOT_LIMITED Type . . . . . . . . . . . . . . . . : TYPE *CHKCST Check pending . . . . . . . . . . . . : NO Constraint state . . . . . . . . . . : STATE ESTABLISHED *ENABLED Check constraint expression . . . . . . : CHKCST CUSTOT <= CUSCRD F3=Exit F12=Cancel F19=Left F20=Right F24=More keys Work with Physical File Constraints Type options, press Enter. 2=Change 4=Remove 6=Display records in check pending Opt Constraint File Library Type State QSYS_CUSTO > CUSTOMER ORDAPPLIB *PRIKEY CUSTOT_LIM > CUSTOMER ORDAPPLIB *CHKCST EST/ENB Parameters for options 2, 4, 6 or command ===> F3=Exit F4=Prompt F5=Refresh F9=Retrieve F12=Cancel F15=Sort F16=Repeat position to F17=Position to F22=Display constraint name Chapter 4. 
Check constraint 75
ALTER TABLE ORDAPPLIB/CUSTOMER 2
ADD CONSTRAINT CUSTOMER_TYPE CHECK(CUSTYP IN ('01', '02', '03', '04', '05', '08', '10'))
ALTER TABLE ORDAPPLIB/EMPLOYEE 3
ADD CONSTRAINT SALARY_RANGE CHECK(EMPSAL BETWEEN 1000 AND 300000)
ALTER TABLE ORDAPPLIB/EMPLOYEE_TRANSAC 4
ADD CONSTRAINT HOURS_LABORED CHECK(ORDINARY_HOURS + EXTRA_HOURS < 168)
Explanation:
1 This check constraint in the STOCK file checks that every product has a price greater than 0 and, at the same time, that the available quantity of a product is greater than or equal to 0.
2 This check constraint in the CUSTOMER file checks that each customer is associated with one of the enumerated types.
3 This check constraint in the EMPLOYEE file checks that the salary of an employee is in the range of $1,000 to $300,000.
4 This check constraint in the EMPLOYEE_TRANSAC file checks that an employee cannot work more than 168 hours in a week. Note the calculation involving two fields of the same row.
4.4 General considerations
This section highlights some considerations that you must take into account when you define CHECK constraints. Let's start with the condition clause of the check constraint. The condition clause of a check constraint can contain any expression or functions allowed on an SQL WHERE clause, with the following exceptions:
• You cannot reference columns of a different table.
• You cannot reference rows of the same table, which means you cannot use the following column functions:
– SUM
– AVERAGE
– MIN
– MAX
– COUNT
• Subqueries are not allowed.
• Host variables are not allowed.
• Parameter markers are not allowed.
• The following special registers cannot be used:
– CURRENT TIMEZONE
– CURRENT SERVER
– USER
The condition clause of a check constraint can reference more than one column of the same record of the file. DB2 UDB for iSeries does not prevent conflicting constraints from being defined. Suppose you just created the CUSTOMER file and then you define the following two CHECK constraints before you enter the first record:
ALTER TABLE ORDAPPLIB/CUSTOMER ADD CONSTRAINT IMPAIR_TYPE CHECK(CUSTYP IN ('01', '03', '05', '07', '09'))
ALTER TABLE ORDAPPLIB/CUSTOMER ADD CONSTRAINT PAIR_TYPE CHECK(CUSTYP IN ('02', '04', '06', '08', '10'))
In the preceding example, the two constraints that are defined prevent the insertion of any record into the CUSTOMER file. If one check condition is valid, the other one is not valid. Let's try to insert the following record into the CUSTOMER file:
INSERT INTO ORDAPPLIB/CUSTOMER (CUSNBR, CUSTYP) VALUES('00001', '01')
The message shown in Figure 4-8 is displayed.
Figure 4-8 Additional message for SQL0545
If we change the CUSTYP value to “02”, the other constraint is violated. Other considerations of which you should be aware are:
• The constraint name has to be unique across all constraint types that exist in the file's library.
• A table or file has a limit of 300 combined constraints (referential constraints, primary, unique, and check constraints).
• Only single member files are supported.
 Additional Message Information
 Message ID . . . . . . : SQL0545       Severity . . . . . . . : 30
 Message type . . . . . : Diagnostic
 Message . . . . : INSERT or UPDATE not allowed by CHECK constraint.
 Cause . . . . . : The value being inserted or updated does not meet the criteria of CHECK constraint PAIR_TYPE. The operation is not allowed.
 Recovery . . . : Change the values being inserted or updated so that CHECK constraint is met.
Otherwise, drop the CHECK constraint PAIR_TYPE Press Enter to continue. F3=Exit F6=Print F9=Display message details F10=Display messages in job log F12=Cancel F21=Select assistance leve Important: It is the developer’s responsibility to ensure that check constraints are not mutually exclusive. Chapter 4. Check constraint 77  When you add a check constraint, DB2 UDB for iSeries makes an exclusive lock on the table for the verification of the condition clause. 4.5 Check constraint integration into applications Before check constraint support was implemented in DB2 UDB for iSeries, check constraint validations had to be performed by the application program. Now you can let DB2 UDB for iSeries ensure your data integrity both through the referential integrity and check constraint definitions. Using the check constraint definitions may improve your application's performance. The domain checks are much more efficient and quicker when performed at the operating system level rather than by an application code. However, once a programmer has defined check constraints and referential constraints to the DBMS, the existing integrity checks should be removed from the application program. Otherwise, the application performance will degrade since the same checking is being performed twice (at the application level and at the system level). 4.5.1 Check constraint I/O messages The enforcement of the check constraint definitions is done when:  An insert is being done to the table with check constraints.  An update is being done to the table with check constraints.  A delete is being done on a parent table that has a referential integrity constraint defined with their dependent tables and a SET DEFAULT or SET NULL is specified. A new message has been defined to handle the error occurring during a check constraint enforcement. Instead of coding domain checks in the application programs, coding is needed for handling check constraint error conditions. The text of the message is shown in Figure 4-9. Note: The verification for adding a check constraint to large files can take some time. In our test environment, it took about 10 minutes to verify a 5 million row table on a 50S machine. 78 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 4-9 Detailed message for CPF502F 4.5.2 Check constraint application messages To handle these messages in SQL procedures or in embedded SQL statements, new SQL codes have been provided for this purpose. In Figure 4-10, you can see the new messages added to the SQL run time. Figure 4-10 SQL messages for the check constraint Display Formatted Message Text System: SY Message ID . . . . . . . . . : CPF502F Message file . . . . . . . . : QCPFMSG Library . . . . . . . . . : QSYS Message . . . . : Check constraint violation on member CUSTOMER. Cause . . . . . : The operation being performed on member CUSTOMER file CUSTOMER in library ORDAPPLIB failed. Constraint PAIR_TYPE prevents record number 2 from being inserted or updated because the field value conflicts with the check constraint. If the record number is zero, then the error occurred on a insert operation. The reason code is 01. The reason codes and their meanings are as follows: 01 - Violation due to insert or update operation. 02 - Violation caused by a referential constraint. Recovery . . . : Either specify a different file, change the file, or change the program. Then try your request again. Possible choices for replying to message . . . . . . . . . . . . . . . : C -- The request is canceled. 
Press Enter to continue. F3=Exit F11=Display unformatted message text F12=Cancel Display Message Descriptions System: SYS Message file: QSQLMSG Library: QSYS Position to . . . . . . . Message ID Type options, press Enter. 5=Display details 6=Print Op Message ID Severity Message Text SQL0543 30 Constraint &1 conflicts with SET NULL or SET Default SQL0544 30 CHECK constraint &1 cannot be added. 5 SQL0545 30 INSERT or UPDATE not allowed by CHECK constraint. SQL0546 30 CHECK condition of constraint &1 not valid. SQL0551 30 Not authorized to object &1 in &2 type *&3. SQL0552 30 Not authorized to &1. SQL0557 30 Privilege not valid for table or view &1 in &2 SQL0569 10 Not all requested privileges revoked from object SQL0570 10 Not all requested privileges to object &1 in &2 SQL0573 30 Table &1 in &2 does not have a matching parent F3=Exit F5=Refresh F12=Cancel Chapter 4. Check constraint 79 The SQL message, SQL0545, is the most important for the application programmers. The detailed description is shown in Figure 4-11. Figure 4-11 Detailed message for SQL0545 SQL0545 has an SQLSTATE of “23513”, which is useful for a condition or handler declarations in SQL procedures. Refer to Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503, for a detailed discussion on SQL procedures. Now let's see how the errors are reported in ILE programs:  In ILE RPG, you can check the new status 1022, which handles the CPF502F message.  In ILE COBOL, the file status “9W” handles the check constraint violation.  ILE C maps these messages to the existing error codes.  OPM programs map these messages to the existing generic error codes. 4.6 Check constraint management This section discusses the management considerations of the check constraints. The managing part of a check constraint is the same as a referential integrity constraint. The commands to manage the constraints are exactly the same as for referential integrity. These commands are:  Change Physical File Constraint (CHGPFCST)  Display Check Pending Constraint (DSPCPCST)  Work with Physical File Constraint (WRKPFCST)  Edit Check Pending Constraint (EDTCPCST)  Remove Physical File Constraint (RMVPFCST) For a complete description of these commands, refer to 3.8, “Referential integrity constraint management” on page 52. Display Formatted Message Text System: SYSTEM1 Message ID . . . . . . . . . : SQL0545 Message file . . . . . . . . : QSQLMSG Library . . . . . . . . . : QSYS Message . . . . : INSERT or UPDATE not allowed by CHECK constraint. Cause . . . . . : The value being inserted or updated does not meet the criteria of CHECK constraint &1. The operation is not allowed. Recovery . . . : Change the values being inserted or updated so that the CHECK constraint is met. Otherwise, drop the CHECK constraint &1. Press Enter to continue. F3=Exit F11=Display unformatted message text F12=Cancel 80 Advanced Functions and Administration on DB2 Universal Database for iSeries 4.6.1 Check constraint states A check constraint is the same as a referential constraint in terms of its possible states. The four states of a check constraint are:  Defined and enabled  Defined and disabled  Established and enabled  Established and disabled Each term is explained in the following list: Defined The constraint definition has been added to the file, but not all the pieces of the file are there for enforcement. For example, the file's member does not exist. 
Established The constraint definition has been added to the file, and all the pieces of the file are there for enforcement. Enabled The check constraint is enforced if the constraint is also established. If the constraint is defined, the file member does not exist for enforcement. Disabled The constraint definition is not enforced regardless of whether the constraint is established or defined. Use the WRKPFCST command to see the constraints defined for the CUSTOMER file. Figure 4-12 Physical file constraints There are two check constraint definitions for this file. One is established and enabled, and the other one is established and disabled. At the same time, the constraint that is disabled has some records in a check pending condition. If you want to see the records that are causing the check condition, type option 6 next to the constraint with the check pending status. Figure 4-13 shows the job log of the job in which the ADDPFCST command was executed and caused the check pending condition. Work with Physical File Constraints Type options, press Enter. 2=Change 4=Remove 6=Display records in check pending Check Opt Constraint File Library Type State Pending QSYS_CUSTO > CUSTOMER ORDAPPLIB *PRIKEY CUSTOT_LIM > CUSTOMER ORDAPPLIB *CHKCST EST/ENB NO CUSCRD_VS_ > CUSTOMER ORDAPPLIB *CHKCST EST/DSB YES Parameters for options 2, 4, 6 or command ===> F3=Exit F4=Prompt F5=Refresh F9=Retrieve F12=Cancel F15=Sort F16=Repeat position to F17=Position to F22=Display constraint name Chapter 4. Check constraint 81 Figure 4-13 Check constraint messages 4.6.2 Save and restore considerations When a table is saved, all of its check constraints are saved. For restores, if the check constraints are on the save media and the saved file does not exist, the check constraint is added to the file. However, if the file exists in the system, any check constraint on the SAVE media is ignored. If you are restoring data and the data results in a check constraint violation, the data is restored and the constraint goes into a check pending condition. The state of the constraint stays the same. When you run the CRTDUPOBJ command from a file with check constraints, the check constraints are propagated from the original file to the new file. Both the constraint state and the check pending status are replicated from the original to the destination file. Display All Messages System: RC Job . . : QPADEV0004 User . . : HERNANDO Number . . . : 00 CHECK constraint CUSTOT_LIMIT_CUSCRD cannot be added. 3 > ADDPFCST FILE(ORDAPPLIB/CUSTOMER) TYPE(*CHKCST) CST(CUSCRD_VS_CUSTOT ST('CUSCRD < CUSTOT') Field values are not valid for check constraint. 1 Field values are not valid for check constraint. Field values are not valid for check constraint. Field values are not valid for check constraint. Field values are not valid for check constraint. Field values are not valid for check constraint. Field values are not valid for check constraint. Field values are not valid for check constraint. Constraint is in check pending. 2 Constraint was added. 3 1 constraint(s) added to file CUSTOMER but constraint(s) in error. Press Enter to continue. F3=Exit F5=Refresh F12=Cancel F17=Top F18=Bottom Notes: 1 The condition is checked for every single record in the file. In this case, there are several records that do not meet the condition. 2 Since there are records that violate the condition, a check pending status is set for the constraint. 3 Since the native interface has been used, the constraint is added to the file. 
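For example, a hedged sketch of duplicating the CUSTOMER file into a hypothetical test library, carrying its data and its check constraints along:
CRTDUPOBJ OBJ(CUSTOMER) FROMLIB(ORDAPPLIB) OBJTYPE(*FILE) TOLIB(ORDAPPTST) DATA(*YES)
As noted above, the duplicated file keeps the same constraint states and check pending status as the original.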
82 Advanced Functions and Administration on DB2 Universal Database for iSeries 4.7 Tips and techniques We complete this chapter with some tips for those of you who are responsible for moving the business rules to the database. This applies to both check constraints and referential integrity constraints. Start identifying and isolating the application code that is responsible for the referential integrity and check constraint checks. These pieces of code that are duplicated in several different programs should be rewritten in ILE procedures that can be bound and reused by several application programs. These procedures can then be defined as trigger programs to make the concerning checks. At the same time, start to clean up your data. This can be done by queries that highlight records that are violating the constraints, or you can natively define the constraints and use the WRKPFCST command to see the records that are in check pending. At this step, carefully schedule the time for these queries so you do not interrupt the normal operation of the business. Once your data is clean, it is time to enable all the constraints and remove the trigger programs that are no longer needed. Before you decide on your check constraint naming convention, consider this tip: It might make it easier to turn system constraint errors into meaningful user feedback by defining your constraint name to be the message number that you really want displayed to the end user. When you define the check constraints, a question may arise: Is it better to have one large check constraint or multiple check constraints? When there is more than one check constraint for a file, the system implicitly ANDs the result of each constraint during the enforcement phase. From the performance point of view, one large constraint performs slightly better than multiple check constraints because the implicit AND processing is eliminated. On the other hand, it is easier to manage and identify multiple but simpler check constraints. It is easier to identify problems when the system detects violations of the constraint. It is up to the application programmer or the DBA to decide which approach is better. Let's look at an example: ALTER TABLE ORDAPPLIB/STOCK ADD CONSTRAINT PRODUCT_PRICE_MIN CHECK(PRODUCT_PRICE > 0) ALTER TABLE ORDAPPLIB/STOCK ADD CONSTRAINT PRODUCT_AVAIL_MIN CHECK(PRODUCT_AVAILABLE_QTY >= 0) ALTER TABLE ORDAPPLIB/STOCK ADD CONSTRAINT PRODUCT_MIN_STOCK CHECK(PRODUCT_MIN_STOCK_QTY >= 0) These three check constraints can be combined in one constraint as shown in the following example: ALTER TABLE ORDAPPLIB/STOCK ADD CONSTRAINT STOCK_CONSTRAINTS CHECK(PRODUCT_PRICE > 0 AND PRODUCT_AVAILABLE_QTY >= 0 AND PRODUCT_MIN_STOCK_QTY >= 0) Note: Keep in mind that, if you have multiple check constraints that are violated on an insert operation, only a single error message is returned. The system stops enforcement and signals the error on the first check constraint violation it finds. © Copyright IBM Corp. 1994, 1997, 2000, 2001 83 Chapter 5. 
DRDA and two-phase commitment control This chapter presents:  DRDA evolution from DRDA-1 to DRDA-2  DRDA-2 connection management  Two-phase commitment control  SQL support for DRDA-2  Coexistence between DRDA-1 and DRDA-2  Recovering from failures  Application design considerations  A DRDA-2 program example  DRDA over TCP/IP  DB2 Connect setup over TCP/IP 5 84 Advanced Functions and Administration on DB2 Universal Database for iSeries 5.1 Introduction to DRDA The Distributed Relational Database Architecture (DRDA) represents IBM’s proposal in the arena of distributed database access. This architecture defines the rules, the protocols, and the semantics for writing programs implementing distributed data access. All the platforms participating in this architecture must comply with these rules and definitions. This chapter does not discuss, in detail, every component of DRDA. The purpose is to provide you with a brief outlook on DRDA evolution and to describe the implementation of DRDA in the DB2 UDB for iSeries environment. 5.1.1 DRDA architecture Distributed Relational Database Architecture allows you to access data in a distributed relational database environment by using SQL statements in your applications. The architecture has been designed to allow distributed data access for systems in like and unlike operating environments. This means that your applications can access data residing on homogeneous or heterogeneous platforms. DRDA is based on these IBM and non-IBM architectures:  SNA Logical Unit Type 6.2 (LU 6.2)  TCP/IP Socket Interface  Distributed Data Management (DDM) architecture  Formatted Data Object Content Architecture (FD:OCA)  Character Data Representation Architecture (CDRA) On the iSeries server, DRDA is part of DB2 UDB for iSeries, which is part of the OS/400 operating system. 5.1.2 SQL as a common DRDA database access language SQL has become the most common data access language for relational databases in the industry. SQL was chosen as part of DRDA because of its high degree of standardization and portability. In a distributed environment, where you want to access data at remote locations, the SQL requests are routed to the remote systems and they are executed remotely. Prior to sending the remote SQL request, a DRDA application must establish a connection with the remote relational database where the data is located. This is the purpose of the CONNECT SQL statement provided by DRDA. 5.1.3 Application requester and application server In a distributed relational database environment, the system running the application and sending the SQL requests across the network is called an application requester (AR). Any remote system that executes SQL requests coming from the application requester is also known as an application server (AS). Some platforms can participate in a distributed database environment as both an application requester and an application server. The diagram in Figure 5-1 shows the current application requester and application server capabilities of different database management systems. Chapter 5. DRDA and two-phase commitment control 85 Figure 5-1 Current support for application requester (AR) and application server (AS) Note: Currently, the DB2 Universal Database and DB2 Connect offer different levels of DRDA implementation depending on the OS platform. The support level equivalent to that of OS/400 is available for AIX, Windows NT, HP-UX, and OS/2. Consult the appropriate documentation for the latest additions. 
5.1.4 Unit of work
Unit of work (UoW), unit of recovery (UR), and logical transaction are different ways to refer to the same concept. The DRDA terminology prefers the term unit of work. A unit of work is a sequence of database requests that carries out a particular task, such as in a banking application when you transfer money from your savings account to your checking account. This task is logically independent and should be treated atomically, which means that either all of its components are executed or none of them are. You do not want your savings balance to be updated without your checking balance being updated too. A unit of work is generally terminated by a commit operation if the entire task completes successfully. For more information about UoW, refer to Distributed Database Programming, SC41-5702.
DRDA defines the following levels of service regarding UoW:
- Level 0, Remote Request (RR):
– One request within one UoW to one DBMS. Remember that DB2 UDB for iSeries provides one DBMS. DB2 for OS/390 or DB2 Universal Database can provide multiple DBMSs on the same system.
– Remote request was available before DRDA, thanks to DDM support.
- Level 1, Remote Unit of Work (RUW):
– One or more SQL requests within one UoW to a single DBMS.
– Switching to a different location is possible, but a new UoW must be started and the previous one must be completed.
– Remote Unit of Work is supported by both the SNA and TCP/IP implementations of DRDA.
- Level 2, Distributed Unit of Work (DUW):
– Many SQL requests within one UoW to several DBMSs.
– Two-phase commit is required.
– A single request may reference objects residing on the same DBMS.
– The Distributed Unit of Work is currently supported only by the SNA implementation of DRDA.
- Level 3, Distributed Request (DR):
– In addition to the services provided by Distributed UoW, DR allows a single SQL request to include references to multiple DBMSs, such as joining tables stored at different locations.
– This is an architected level and will be available in the future.
The diagram in Figure 5-2 may be helpful in understanding the levels of DRDA.
Figure 5-2 Architected service levels of DRDA (the figure maps the levels to OS/400 releases: Remote Unit of Work available since V2R1.1, Distributed Unit of Work since V3R1, Distributed Request in the future)
5.1.5 Openness
Many non-IBM relational database providers (for example, Informix, Oracle, Sybase, XDB Systems, and others) implement different levels of DRDA support in their products. DRDA offers the ability to access and exchange data in like and unlike system environments, thereby contributing to the openness of IBM platforms in regard to interoperability.
5.2 Comparing DRDA-1 and DRDA-2
The difference between DRDA-1 and DRDA-2 from an application point of view is illustrated in Figure 5-3. DRDA-2 introduces:
- Two-phase commit protocol to keep multiple databases in synchronization
- Synchronization Point Manager (SPM) to manage the two-phase commit
- A new connection management
- New SQL statements to manage multiple connections
DRDA-1 cannot maintain multiple connections in one unit of work. To connect to a different application server, the application must be in a connectable state, which is achieved by ending the unit of work with a COMMIT or ROLLBACK statement.
DRDA-2 can connect to multiple servers without losing the existing connections. A single unit of work can span multiple servers. Keep in mind that a single SQL statement still cannot address more than one server at a time. For example, it is still not possible to join two files residing on different systems.
Figure 5-3 Remote Unit of Work (DRDA-1) versus Distributed Unit of Work (DRDA-2)
Figure 5-3 shows three units of work. The arrows pointing to the left indicate the only possible way to access data in a DRDA-1 application. The arrows pointing to the right show the new flexibility of a DRDA-2 application accessing multiple systems in the same UoW. Let's consider, for example, the Rochester system on the right-hand side. It issues three requests: Request 2, Request 4, and Request 6. Each of these requests belongs to a different unit of work. The Rochester system on the left-hand side also issues three requests (Request 3, Request 4, and Request 5), each targeting a different application server within one UoW.
5.3 DRDA-2 connection management
Connection management refers to the set of mechanisms by which you can direct your database requests in a distributed database network. DRDA-2 has enhanced connection management, which allows an application program to keep the existing connections alive and perform I/O operations on multiple relational databases within the same unit of work. Currently, this architected level of DRDA is available only over SNA. It will also be available in future releases over the TCP/IP implementation of DRDA.
There are also some changes to the way the CONNECT statement behaves in DRDA-2 compared with the DRDA-1 CONNECT behavior. In DRDA-1, the current connection is destroyed when a new CONNECT statement is issued. In DRDA-2, another CONNECT statement does not destroy the existing connections. A new one is created instead and becomes the current connection. Also, if you issue a CONNECT statement toward an existing connection and your program uses DRDA-2 connection management, you receive a negative SQLCODE and the current connection does not change. In a DRDA-1 program, this operation is legitimate and does not return an error.
5.3.1 Connection management methods
As we previously indicated, DB2 UDB for iSeries allows you to use both the DRDA-1 and the DRDA-2 connection management methods. When you create your SQL program or module, you specify which connection method you want to use:
- DRDA-1 connection management is set on the CRTSQLxxx command by specifying RDBCNNMTH(*RUW).
- DRDA-2 connection management is specified on the CRTSQLxxx command by specifying RDBCNNMTH(*DUW), which is also the default for the creation commands.
Because the connection method changes the semantics of the CONNECT statement, you must be aware of this parameter when you are recompiling existing applications, because they can behave in a different way if compiled with the *DUW option.
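As a brief sketch of how the method is selected at precompile time (the object and source member names here are only placeholders; the command form mirrors the CRTSQLCI example shown later in this chapter):
CRTSQLCI OBJ(ORDAPPLIB/ORDRPT) SRCMBR(ORDRPT) RDB(ROCHESTER)
         OBJTYPE(*PGM) RDBCNNMTH(*RUW)     /* DRDA-1 (RUW) connection rules   */
CRTSQLCI OBJ(ORDAPPLIB/ORDENT) SRCMBR(ORDENT) RDB(ROCHESTER)
         OBJTYPE(*PGM) RDBCNNMTH(*DUW)     /* DRDA-2 (DUW) rules, the default */
Because RDB(ROCHESTER) is specified, each program also connects implicitly to that database at start time, using the rules of the method named in RDBCNNMTH.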
For more details, see 5.6, “DRDA-1 and DRDA-2 coexistence” on page 95.
DB2 UDB for iSeries also allows you to specify that an implicit connection must take place when the program is started. This is the purpose of the RDB parameter on the precompiler commands. This implicit CONNECT is of the type specified in the RDBCNNMTH parameter. If an RDB name is specified, the connection to this remote database is established automatically at program start time. Therefore, the first re-connection statement in the program has to be SET CONNECTION. If a CONNECT is initiated to this application server by the program logic, the SQL0842 message (SQLCODE = -842, “Connection to relational database xxx already exists”) is sent to the application. Check for this SQLCODE explicitly after every CONNECT statement. In general, this SQLCODE can be ignored by the application. Remember that a CONNECT statement followed by SQLCODE = -842 does not change the current connection.
5.3.2 Connection states
DRDA-2 introduces new connection states. A connection may be either held or released, and either current or dormant. For clarification, see Figure 5-4.
Figure 5-4 Connection states
Figure 5-4 shows a general picture of the architecture. The application goes into an UNCONNECTED state when the current connection is destroyed. The application also ends in an unconnected state if all the connections are released and a commit is performed. The CONNECT and RELEASE statements allow the application to change a connection state from held to released:
- Released state: Means that a disconnect will occur at the next successful commit operation (a rollback has no effect on connections). Therefore, a released state can be thought of as a pending disconnect.
- Held state: Means that a connection will not be lost at the next commit operation. A connection in the released state cannot be put back into a held state. This means that a connection may remain in a released state across unit of work boundaries if a ROLLBACK is issued.
Regardless of whether a connection is in a held or released state, a connection can also be in a current or dormant state:
- Current state: Means that the connection is the one used for the SQL statements that are executed.
- Dormant state: Means that the connection is suspended. While in this state, no SQL statements can use this connection.
Nevertheless, SQL statements can always be executed against the current connection. If you want a dormant connection to become the current connection, use the SET CONNECTION SQL statement. The existing current connection then becomes dormant. In fact, there can be only one connection in the current state at a time. All other connections are in a dormant state. You cannot use the CONNECT statement to make a dormant connection current in a DB2 UDB for iSeries application. The semantics of the CONNECT statement are different in DB2 UDB for iSeries and DB2 for OS/390, where a CONNECT to an existing connection equates to a SET CONNECTION statement.
When a connection goes to the dormant state, all the open cursors, locks, and prepared statements are preserved.
When this connection becomes current again in the same unit of work, all locks, cursors, and prepared statements are restored to their previous values. In a network where systems at different levels coexist, you may have connections to DRDA-2 servers and to DRDA-1 servers at the same time. Connection to DRDA-1 servers must be dropped by using the DISCONNECT statement. Once disconnected, an application must connect to the database again before it can direct SQL statements to it. 5.4 Two-phase commitment control Synchronizing multiple databases requires additional effort compared to the process of keeping data consistent on a single system. Because multiple physical locations are involved, the synchronization process is split into two phases to ensure data consistency across multiple locations. The database managers involved in the distributed unit of work must make sure that either all of them commit their changes or roll all the changes back consistently. The protocol by which multiple database managers can keep their data in sync is called two-phase commitment control. In an application using two-phase commit, the COMMIT statement generates a rather complex sequence of operations that allows the various agents in the network to keep their data in a consistent state. Also, a two-phase commit protects your applications against network or system failures that may occur during the transaction. In these cases, the database managers involved in the unit of work automatically roll back their changes. As mentioned already, current DRDA-2 implementation is based on LU 6.2 architecture. When an LU 6.2 conversation supports a two-phase commitment control data flow, we say that it is a protected conversation. Some new verbs have been added to the APPC protocol to support protected conversations. You have direct access to this support on the iSeries server by using ICF files or CPI-C functions in your applications. Any LU 6.2 conversation not capable of a two-phase commitment control flow is called unprotected conversation. DRDA-1 supports only unprotected conversations. 5.4.1 Synchronization Point Manager (SPM) To control the two-phase commit flow, DB2 UDB for iSeries implements a component called Synchronization Point Manager. The SPM also controls rollback among the various protected resources. Either all changes are committed or they are rolled back. With distributed updates, sync point managers on different systems cooperate to ensure that resources reach a consistent state. The example in Figure 5-5 shows the type of information that flows between the application requesters and application servers to commit work on protected conversations. Chapter 5. DRDA and two-phase commitment control 91 Figure 5-5 Technical view of two-phase commit flow Each application requester and each application server have a sync point manager attached. DBMS participating in a Distributed Unit of Work has to cooperate to ensure the synchronization. Phase one consists of this process: 1. The application requester issues a COMMIT. All participating application servers and the application requester must be synchronized. The requester must wait now until it receives the OK from its SPM. 2. The SPM of the application requester sends a prepare for commit request to the SPMs of the servers. 3. All the SPMs at the server systems initiate the process of logging all the database changes and reaching the sync point. 4. The servers send a completion message to their SPMs. 5. 
The SPM requests a commit from the application servers' SPMs.
For phase two, proceed with the following actions:
1. Once the SPMs have received the responses and logged them, they return a committed message.
2. The server SPMs send Forget to the application requester SPM and OK to their application servers. Everything has now been synchronized.
3. The application requester receives the OK from its SPM and can continue.
Figure 5-6 shows the application view of the two-phase commit flow.
Figure 5-6 Application view of two-phase commit flow (the coordinator and the participating databases each journal their changes and keep their locks during phase 1, then commit, journal, and release the locks during phase 2)
Note: The more database management systems (DBMS) that are involved in one Distributed Unit of Work, the more messages have to flow. This requires more resources and causes additional communication line traffic.
5.5 DB2 UDB for iSeries SQL support for connection management
With DRDA-2, some SQL connection statements were added or have changed. They are listed here in alphabetical order. For more details, refer to Distributed Database Programming, SC41-5702.
CONNECT (Type 1)
The CONNECT (Type 1) statement connects an activation group within an application process to the identified application server, using the rules for the Remote Unit of Work. The term activation group refers to a substructure of a job that contains the resources necessary to run programs. For more details on activation groups, see ILE Concepts, SC41-5606.
If your application runs multiple activation groups, each connection is private to the activation group that issued it. The connection terminates if the activation group ends. This termination occurs whether the application is using the DRDA-1 or DRDA-2 connection methods.
A program compiled with the RDBCNNMTH parameter of the CRTSQLpgm command set to *RUW runs with the CONNECT Type 1 connection method. Consecutive CONNECT statements can be executed successfully because CONNECT does not remove the activation group from the connectable state. A connect to the application server to which the activation group is currently connected is executed like any other CONNECT statement. CONNECT cannot execute successfully when it is preceded by any SQL statement other than CONNECT, COMMIT, or ROLLBACK. To avoid an error, execute a COMMIT or ROLLBACK operation before the CONNECT.
CONNECT (Type 2)
The CONNECT (Type 2) statement connects an activation group within an application process to the identified application server using the rules for the Distributed Unit of Work. This server is then the current server for the process. A program runs with this connection management method if it is compiled with the RDBCNNMTH parameter of the CRTSQLpgm command set to *DUW.
DISCONNECT The DISCONNECT statement destroys one or more connections for unprotected connections (connections without two-phase commit). See 5.4, “Two-phase commitment control” on page 90. You cannot issue a DISCONNECT statement toward a protected conversation or toward any connection that sent SQL statements during the current unit of work. RELEASE The RELEASE statement places one or more connections in the released state. This statement is allowed only for protected conversations. If the statement is successful, each identified connection is placed in the released state and is, therefore, destroyed during the execution of the next COMMIT operation. Keep in mind that a ROLLBACK will not end the connection. If the current connection is in the released state when a commit operation is executed, destroying that connection places the activation group in the unconnected state. In this case, the next SQL statement to this application server must be CONNECT. If SET CONNECTION is used to this application server, an error (SQL0843, “Connection to relational database xxx does not exist.”) is encountered. SET CONNECTION is only possible to an application server other than the previously released one. Note: CONNECT (Type 2) cannot be issued a second time to the same server as long as the connection is still alive. Use SET CONNECTION to activate another server. Note: Creating and maintaining active connections requires some effort on behalf of the systems involved in the database network. This is why your applications should drop active connections if they are not going to be reused. 94 Advanced Functions and Administration on DB2 Universal Database for iSeries SET CONNECTION SET CONNECTION activates an already connected server so that all SQL statements, from now on, are directed to this server until another SET CONNECTION is issued, a CONNECT to a new server is executed, or the connection is ended. The SET CONNECTION statement brings the state of the connection from dormant to current. After the activation group is reconnected to the server, it finds that its environment is in the same status as when it left this connection. The connection reflects its last use by the activation group with regard to the status of locks, cursors, and prepared statements. 5.5.1 Example of an application flow using DRDA-2 In Figure 5-7, you can find a high-level description of a DRDA-2 application flow. Notice that the environment includes the coexistence of both the DRDA-2 and DRDA-1 systems. Figure 5-7 Application flow example using the new connection management The systems located in Rochester and Seoul support DRDA-2, where the system in Zurich is running at DRDA-1. The AR in Rochester connects to Seoul, does some work, connects back to the local database without COMMIT, and reconnects to Seoul through the new SET CONNECTION statement. Then Rochester connects to Zurich, Seoul is released, and Rochester reconnects to the local database. Before we finish our unit of work, we release and disconnect Zurich. After finishing the unit of work, no more remote connections are active because we released Seoul and disconnected Zurich. Finally, we are connected to our local database, Rochester. 
The statement flow on the Rochester application requester shown in Figure 5-7 is, in order:
CONNECT TO SEOUL ... SELECT ...
CONNECT TO ROCHESTER ... ENTER ...
SET CONNECTION SEOUL ... UPDATE ...
CONNECT TO ZURICH ... SELECT ...
RELEASE SEOUL
SET CONNECTION ROCHESTER ... DELETE
COMMIT
DISCONNECT ZURICH
In the figure, Rochester is a DRDA-2 iSeries acting as the application requester, Zurich is a DRDA-1 iSeries, and Seoul is a DRDA-2 iSeries; these iSeries servers could also be any other "DRDA" platform.
5.6 DRDA-1 and DRDA-2 coexistence
As Figure 5-7 shows, DRDA-2 and DRDA-1 systems can coexist in the same network, and a DRDA-2 application can access both types of application servers during its execution. Since DRDA-1 application servers do not support a protected conversation, some limitations may apply as to which systems can be accessed in update mode. An application requester determines, at initial connect time, whether a DRDA-1 application server can be updated. A DRDA-2 application requester connection to a DRDA-1 application server can be used to perform updates when:
- The program was compiled with an isolation level other than *NONE.
- There are no other connections, or they are all DRDA-1 read-only connections.
Note: COMMIT(*NONE) on DB2 UDB for iSeries means that no transaction isolation is done and no logs are written. It can only be used in a DB2 UDB for iSeries-like environment.
At connect time, the DB2 UDB for iSeries application requester chooses whether a sync point manager is used and, thus, whether the application server can be updated. Depending on this decision, different DRDA and commitment control protocols are used. Table 5-1 shows how the different flows can be mixed together.
Table 5-1 Mixing DRDA levels

                          Application server
  Application requester   DRDA-1      DRDA-2 1PC    DRDA-2 2PC
  DRDA-1                  DRDA-1      DRDA-1        DRDA-1
  DRDA-2 1PC              DRDA-1      DRDA-2 1PC    DRDA-2 1PC
  DRDA-2 2PC              DRDA-1      DRDA-2 1PC    DRDA-2 2PC

  Notes: 1PC = Single-phase commit; 2PC = Two-phase commit

Table 5-1 indicates:
- Application requester is at DRDA-1: All application servers use the DRDA-1 flow. This implies a single-phase commit and unprotected conversations.
- Application requester is at DRDA-2 supporting a single-phase commit (1PC): When the application server is at DRDA-1, the DRDA-1 protocol is used. All others use a DRDA-2 flow with single-phase commit.
- Application requester is at DRDA-2 with a two-phase commit (2PC): When the application server is at DRDA-1, DRDA-1 flows are used. When the application server is at DRDA-2 with single-phase commit capability, the DRDA-2 single-phase commit flow is used. When the application server is at DRDA-2 with two-phase commit capability, the DRDA-2 two-phase commit flow is used.
In a heterogeneous environment, the protocol used depends on the application requester according to Table 5-1.
5.7 Recovery from failure
As we mentioned earlier, in most cases the recovery is totally automatic. When the systems detect a failure, the current transaction is automatically rolled back on all the systems. Still, there is a narrow window in the two-phase commit cycle where a network failure or a system failure may leave the transaction in a pending state because the application requester cannot determine which action to take. This window is located right before the last step of the two-phase commit process, when the application server may already have committed a transaction but, for some reason, cannot send the final acknowledgment. When the transaction hangs, all the locks are preserved and the application receives an I/O error.
This section describes how the system or the users can recover after a network or a system failure in a two-phase commit environment. 5.7.1 General considerations To control the synchronization over multiple systems, DB2 UDB for iSeries uses a Logical Unit of Work ID. This identifier is the same on all systems involved, whether they are application requesters or application servers with like or unlike platforms. On DB2 UDB for iSeries, the unit of work ID looks similar to the following example: APPNET.ROCHESTER.X'F2DEB3D611CA'.00001 Note: This ID is composed of four parts, where:  APPNET is the APPN Net-ID  ROCHESTER is the application requester system name  X'F2DE.... is related to the job, running a protected conversation  000... relates to the program call within the job On the iSeries server, this identifier should actually be called activation group ID because the number remains the same over the life of an activation group. The program or activation group can start a large number of units of work. Ending the program and calling it again changes the last part of the identification number, which then looks similar to this example: APPNET.ROCHESTER.X'F2DEB3D611CA'.00003 If you start a new job on the iSeries server and run the same application, the identifier will change its third component: APPNET.ROCHESTER.X'F2E1B36111CB'.00001 Note: A new job changes the last two parts of the identification number. 5.7.2 Automatic recovery DB2 UDB for iSeries with DRDA-2 and two-phase commitment control provides a comprehensive recovery mechanism after system, network, or job failures. Automatic recovery was tested with programs from the Order Entry Application example described in Chapter 2, “Using the advanced functions: An Order Entry application” on page 11, particularly with the Insert Order Detail program documented in Appendix A, “Order Entry application: Detailed flow” on page 329. This program (INSDET) calls a stored Chapter 5. DRDA and two-phase commitment control 97 procedure (STORID) on a remote iSeries server. The stored procedure updates a STOCK table on the remote system. Then, the calling program inserts an order detail record in a ORDERDTL table on the local system. After doing this, the Distributed Unit of Work (DUW) is completed. Note: ROCHESTER is the local system (AR). ZURICH is the remote system (AS). In this test scenario, the stored procedure program on the remote system was abruptly terminated, cancelling the job before the database changes on both systems were committed. In this case, the remote system (application server) rolled back the one database change automatically and provided information in the job log. At the application requester, information provided in the program ended, but not before rolling back the local database change, which was a record insert. The rollback operation is needed since the calling program received SQL error return code -918, which corresponds to message SQL0918. The details are shown in Figure 5-8. Figure 5-8 Message SQL0918 The job log of the remote system (ZURICH) reported the following information: ............... CPI9152 Information Target DDM job started by source system. CPI3E01 Information Local relational database accessed by ROCHESTER. CPC1125 Completion Job ../ITSCID06/ROCHESTER was ended by user ITSCID03. CPD83DD Diagnostic Conversation terminated; reason 02. 02 -- The conversation was issued a Deallocate Type (Abend) to force the remote location to roll back. 
CPF4059 Diagnostic System abnormally ended the transaction with device ROCHESTER. CPI8369 Information 1 pending changes rolled back; reason 01. 01 -- The commitment definition is in a state of Reset. CPF83E4 Diagnostic Commitment control ended with resources not committed. ............... 5.7.3 Manual recovery The Work with Commitment Definition (WRKCMTDFN) command allows users to manage commitment definitions for any job on the system. This command becomes particularly useful when a system or line failure causes transactions to hang while waiting for synchronization. A commitment definition reports information about a job commitment control status after commitment control has been started with either the Start Commitment Control (STRCMTCTL) command or by a program containing embedded SQL commitment statements. Display Formatted Message Text System: ROCHESTER Message ID . . . . . . . . . : SQL0918 Message file . . . . . . . . : QSQLMSG Library . . . . . . . . . : QSYS Message . . . . : ROLLBACK is required. Cause . . . . . : The activation group requires a ROLLBACK to be performed prior to running any other SQL statements. Recovery . . . : Issue a ROLLBACK CL command or an SQL ROLLBACK statement and then continue. 98 Advanced Functions and Administration on DB2 Universal Database for iSeries Using Work with Commitment Definitions This command provides detailed information about the commitment control status of an activation group. The main display may look similar to the example in Figure 5-9, where only one active commitment definition is shown. Figure 5-9 Work with Commitment Definition command display If more activation groups are involved, more commitment definitions are listed. When you choose option 5 (Display status), three more displays with further details of the commitment definition shown in Figure 5-10 through Figure 5-12 appear. Figure 5-10 Display Commitment Definition Status (Part 1 of 3) Work with Commitment Definitions System: ROCHESTER Type options, press Enter. 5=Display status 12=Work with job 14=Forced commit 16=Forced rollback ... Commitment Resync In Opt Definition Job User Number Progress 5 *DFTACTGRP P23KXC48E ITSCID06 004590 NO Bottom 3=Exit F5=Refresh F9=Command line F11=Display logical unit of work 12=Cancel F16=Sort by logical unit of work ID F24=More keys Display Commitment Definition Status ROCHESTER 05/25/01 23:03:42 Job: P23KXC48E User: ITSCID06 Number: 004590 Commitment definition . . . . . . : *DFTACTGRP Activation group . . . . . . . . : 2 Logical Unit of Work ID . . . . . : APPNET.ROCHESTER.X'F32D995711EE'.00003 Job active . . . . . . . . . . . : YES Server job . . . . . . . . . . . : Resource location . . . . . . . . : REMOTE Default lock level . . . . . . . : *CHG Role . . . . . . . . . . . . . . : State . . . . . . . . . . . . . . : RESET Date/time stamp . . . . . . . . : Resync in progress . . . . . . . : NO Number of commits . . . . . . . . : 2 Number of rollbacks . . . . . . . : 0 More... Press Enter to continue. F3=Exit F5=Refresh F6=Display resource status F9=Command line F12=Cancel Chapter 5. DRDA and two-phase commitment control 99 Figure 5-11 Display Commitment Definition Status (Part 2 of 3) Figure 5-12 Display Commitment Definition Status (Part 3 of 3) Press F6 to look more into the details of a commitment definition. A window is displayed about the status of the single resources that are protected by commitment control (Figure 5-13). 
Display Commitment Definition Status ROCHESTER 05/25/01 23:03:42 Job: P23KXC48E User: ITSCID06 Number: 004590 Commitment definition . . . . . . : *DFTACTGRP Activation group . . . . . . . . : 2 Heuristic operation . . . . . . . : Default journal . . . . . . . . . : Library . . . . . . . . . . . . : Notify object . . . . . . . . . . : *NONE Library . . . . . . . . . . . . : Object type . . . . . . . . . . : Member . . . . . . . . . . . . : More... F3=Exit F5=Refresh F6=Display resource status F9=Command line F12=Cancel Display Commitment Definition Status ROCHESTER 05/25/01 23:03:42 Job: P23KXC48E User: ITSCID06 Number: 004590 Commitment definition . . . . . . : *DFTACTGRP Activation group . . . . . . . . : 2 Commitment options: Wait for outcome . . . . . . . : WAIT Action if problems . . . . . . : ROLLBACK Vote read-only permitted . . . : NO Action if End . . . . . . . . . : WAIT Bottom F3=Exit F5=Refresh F6=Display resource status F9=Command line F12=Cancel 100 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 5-13 Display Resource Status display When you select option 1 (Record level), the system displays the local files whose records are involved in the commitment definition (Figure 5-14). Figure 5-14 Display Record Level Status display Press F11 (Display status) to view more details. In addition to the number of database changes committed, rolled back, or still pending, the display shows the lock level, the status, the journal, and commit cycle identifier for the file. Display Commitment Definition Status ROCHESTER 05/25/01 23:03:42 Job: P23KXC48E User: ITSCID06 Number: 004590 Commitment definition . . . . . . : *DFTACTGRP Activation group . . . . . . . . : 2 ,...................................................................... : Display Resource Status : : : : Type option, press Enter. : : 1=Select : : : : Opt Resource : : 1 Record level : Object level : : Conversation : : Remote file : : Remote RDB : : API : : : : Bottom : : F5=Refresh F12=Cancel : : : :.....................................................................: Display Record Level Status System: ROCHESTER Job: P23KXC48E User: ITSCID06 Number: 004590 Commitment definition . . . . . . . . : *DFTACTGRP -------------Changes-------------- File Library Member Commit Rollback Pending ORDERDTL ORDENTL ORDERDTL 0 0 1 Bottom Press Enter to continue. F3=Exit F5=Refresh F6=Display resource status F9=Command line F11=Display status F12=Cancel F16=Job menu Chapter 5. DRDA and two-phase commitment control 101 Showing an example of manual recovery is not an easy task, since there is really little chance that the transaction will be interrupted when it is in an undecided state. The two-phase commitment control critical window is very narrow. In these rare situations, the WRKCMTDFN command provides a way to complete the transaction and release all the locks. It is up to the user to determine whether to use a commit or a rollback and force the transaction boundary by using either of the following methods on the display shown in Figure 5-9 on page 98:  Option 14 (Forced commit)  Option 16 (Forced rollback) The manual recovery process may violate the alignment of the various databases in the network. Avoid this procedure if the automatic resynchronization is still possible by restoring the communication among the systems. 
Force the end of the transaction if the environment where the transaction was running before the failure cannot be restored, such as in the case of data loss or other serious system or network outages. 5.8 Application design considerations This section describes how application developers should design their programs to fully exploit the flexibility provided by DRDA-2 in a distributed environment. 5.8.1 Moving from DRDA-1 to DRDA-2 The essential advantage of the Distributed Unit of Work (DRDA-2) over Remote Unit of Work (DRDA-1) is represented by the ability to access different locations within the same transaction, allowing much more flexibility to database and application design. The flexibility offered by DRDA-2 introduces more complexity in regard to handling the connections within your applications. Multiple connections may be active at the same time, and you may need to determine whether your application is already connected to a specific location. To obtain this information, check the SQLCODE after issuing a CONNECT statement directed to that particular location. If you receive SQLCODE = -842, this means that the connection is already active and that you may need to perform a SET CONNECTION to establish that location as the current connection. If you receive SQLCODE = 0, the connection has just been activated and becomes the current connection. Performance in a DRDA-2 environment The higher the number is of the connections concurrently active, the higher the impact is on the application and system performance. Design your applications by trying to find the right balance between keeping your connections active, so that you do not need to restart them when you need them, and releasing the idle connections to reduce the system overhead. The behavior of the initial connection depends on the programming model used by your application:  OPM programs: In a DRDA-1 program, each initial call of a program implicitly connects to the database specified in the RDB parameter of the CRTSQLxxx command. When the program terminates its execution, the connection is destroyed. If the same program is called several times within a job, the implicit connection is established each time. In a DRDA-1 program, you can count on this behavior and avoid coding the initial connection. 102 Advanced Functions and Administration on DB2 Universal Database for iSeries  ILE programs: If ILE programs are created using the default parameters, the initial connection to the location specified in the RDB parameter will occur once in the life of an activation group. The connection will last as long as the activation group exists. In general, this behavior depends on the value of the CLOSQLCSR parameter, which defaults to *ENDACTGRP. If your program runs in the default activation group or in a named activation group, you may need to check for existing connections. 5.9 DRDA-2 program examples This section gives three examples of programs using DRDA-2 connection management and two-phase commitment control. The programs are taken from the Order Entry scenario. 5.9.1 Order Entry main program The main program of our application only has the purpose of establishing the connections to the local and remote databases and of calling the various subprograms. The design choice of establishing all the necessary connections at the beginning allows the developers of the subprograms to rely on the existing connections. In the subprograms, there are only SET CONNECTION statements. 
The following code listing shows a COBOL version of the main program: IDENTIFICATION DIVISION. PROGRAM-ID. T4249MAIN. * * This is the main program of the order entry application. * The program establishes all the connections, so that the * various sub-programs will need to issue only SET CONNECTION * statements. At the end of the cycle, this program will * release all the connections and commit all the changes. * ENVIRONMENT DIVISION. * DATA DIVISION. * WORKING-STORAGE SECTION. * * The error flag parameter is used by the various sub-programs * to communicate a failure. * No Errors: ERRFLG = 0 * Failure : ERRFLG = 1 * 01 ERR-FLG PIC X(1). 01 TOTAMT PIC S9(11) PACKED-DECIMAL. 01 CUSNBR PIC X(5). 01 ORDNBR PIC X(5). * EXEC SQL INCLUDE SQLCA END-EXEC. PROCEDURE DIVISION. * EXEC SQL CONNECT TO RCHASM02 END-EXEC. * * Establish connections and check for successfull connect * IF SQLCODE NOT = 0 AND SQLCODE NOT = -842 THEN Chapter 5. DRDA and two-phase commitment control 103 DISPLAY "Error connecting to RCHASM02" END-IF. EXEC SQL CONNECT TO RCHASM03 END-EXEC IF SQLCODE NOT = 0 AND SQLCODE NOT = -842 THEN DISPLAY "Error connecting to RCHASM03" END-IF. * Calling the restart procedure, that checks for * incomplete orders and deletes them. CALL "T4249RSTR" USING ERR-FLG. IF ERR-FLG = 0 THEN * Calling the insert order header program CALL "T4249CINS" USING CUSNBR, ORDNBR, ERR-FLG IF ERR-FLG = 0 THEN * Calling the insert detail rows program CALL "T4249RIDT" USING CUSNBR, ORDNBR, ERR-FLG IF ERR-FLG = 0 THEN * Calling the finalize order program CALL "T4249FNLO" USING CUSNBR, ORDNBR, TOTAMT, ERR-FLG IF ERR-FLG = 0 THEN STOP RUN END-IF END-IF END-IF END-IF. * In case of errors, perform a ROLLBACK EXEC SQL ROLLBACK END-EXEC. STOP RUN. 5.9.2 Deleting an order This program may be invoked either at the beginning of the application execution (if some incomplete orders are found for the user) or at the end of it (if the user requests a cancellation of the order). The program scans the order detail rows and, for each item, it updates the quantity in the stock file at the remote site. At the end, the program deletes the order header. This operation causes all of the detail rows to go away as well because of the CASCADE rule that we implemented. If no errors are encountered in the process, the program commits the entire transaction. The following code listing shows a COBOL implementation of this procedure: IDENTIFICATION DIVISION. PROGRAM-ID. T4249CORD. * * This program scans all the details referring to the input * order number; it updates the available quantity of each * detail in the remote STOCK file. * ENVIRONMENT DIVISION. * DATA DIVISION. * WORKING-STORAGE SECTION. * 01 H-ORDQTY PIC S9(5) PACKED-DECIMAL. 01 H-PRDQTA PIC S9(5) PACKED-DECIMAL. 104 Advanced Functions and Administration on DB2 Universal Database for iSeries 01 H-PRDNBR PIC X(5). 01 H-ORHNBR PIC X(5). * EXEC SQL INCLUDE SQLCA END-EXEC. * EXEC SQL DECLARE DETAIL CURSOR FOR SELECT PRDNBR ,ORDQTY FROM ORDENTL/ORDERDTL WHERE ORHNBR = :h-ORHNBR END-EXEC. * LINKAGE SECTION. * 01 WK-ORHNBR PIC X(5). 01 ERR-FLG PIC X(1). * PROCEDURE DIVISION USING WK-ORHNBR ERR-FLG. * MOVE WK-ORHNBR TO H-ORHNBR. MOVE "0" TO ERR-FLG. * EXEC SQL SET CONNECTION RCHASM03 END-EXEC. * IF SQLCODE = 0 THEN EXEC SQL OPEN DETAIL END-EXEC ELSE MOVE "1" TO ERR-FLG END-IF. 
* * Read each detail's ordered quantity and update STOCK * PERFORM UNTIL SQLCODE NOT = 0 OR ERR-FLG NOT = 0 * EXEC SQL FETCH DETAIL INTO :h-PRDNBR ,:h-ORDQTY END-EXEC * IF SQLCODE = 0 THEN * EXEC SQL SET CONNECTION RCHASM02 END-EXEC * IF SQLCODE = 0 THEN * EXEC SQL UPDATE ORDENTR/STOCK SET PRDQTA = PRDQTA + :h-ORDQTY WHERE PRDNBR = :h-PRDNBR END-EXEC IF SQLCODE NOT = 0 THEN MOVE "1" TO ERR-FLG END-IF * ELSE MOVE "1" TO ERR-FLG END-IF * EXEC SQL SET CONNECTION RCHASM03 END-EXEC * IF SQLCODE NOT = 0 THEN Chapter 5. DRDA and two-phase commitment control 105 MOVE "1" TO ERR-FLG END-IF * END-IF END-PERFORM. * IF SQLCODE < 0 AND ERR-FLG = 0 THEN MOVE "1" TO ERR-FLG END-IF. * IF ERR-FLG = 0 THEN EXEC SQL DELETE FROM ORDENTL/ORDERHDR WHERE ORHNBR = :h-ORHNBR END-EXEC * IF SQLCODE NOT = 0 THEN MOVE "1" TO ERR-FLG END-IF END-IF. * IF ERR-FLG = "0" THEN EXEC SQL COMMIT END-EXEC ELSE EXEC SQL ROLLBACK END-EXEC END-IF. * GOBACK. 5.9.3 Inserting the detail rows The following example is an excerpt of the Insert Detail program. This fragment of code shows only the statements that are relevant to a DRDA-2 connection and two-phase commitment control. The full implementation of this program (INSDET) can be found in Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503, since this program activates a remote stored procedure. This program inserts the detail order item in the ORDERDTL file at the local database and updates the quantity in the inventory by calling a remote stored procedure, which accesses the STOCK file at the remote system: This program excerpt (from program INSDET) ==================== shows the vital statements for -- DRDA-2 connection management and -- two-phase commitment control ... ................... ................... ROCHESTER is local database system ZURICH is remote database system ................... ................... ................... * **************************************************************** * The following Two-Phase Commit for local and remote system * * is only executed, if the conditions are met ... * * The first COMMIT after one DUW therefore is done only after * * the ordered quantity has been deducted from STOCK file on * * the remote system and the first order record has been * 106 Advanced Functions and Administration on DB2 Universal Database for iSeries * inserted correctly in the ORDERDTL file on the local system.* * Every item record for an order is committed, because * * of releasing the record lock on STOCK file. * * Note: SQL COMMIT in the program starts commitment control * * for the activation group automatically: * **************************************************************** * C/EXEC SQL C+ COMMIT C/END-EXEC ................... ................... ................... * **************************************************************** * -- Connection to the REMOTE database: -- * * At start of the program DRDA connection mgmt. establishes * * connection automatically to the remote sys. as according to * * the relational database specified in the compil. parameter * * in command CRTSQLxxx RDB(....). Therefore this program * * is connected to remote database ZURICH already. * * For further remote re-connections SET CONNECTION is used: * **************************************************************** * ................... ................... C/EXEC SQL C+ SET CONNECTION ZURICH After 1.connect C/END-EXEC remote ................... ................... ................... 
* **************************************************************** * The CALL of the stored procedure (at the remote system) * * is prepared and executed. * * It updates the STOCK file, and searches for alternatives, * * if necessary. * **************************************************************** Code for stored procedure, see chapter "Stored Procedures". ................... ................... ................... ................... **************************************************************** * -- Connection to the LOCAL database: -- * * At this point, connection to the local database is estab- * * lished. For the first time in the execution of the program * * the CONNECT statement has to be executed. * * The connection to the local database then goes to dormant * * state, after connecting to the remote DB (above) again. * * For further local re-connections SET CONNECTION is used: * **************************************************************** * C *IN51 IFEQ '0' 1st connect C/EXEC SQL -- local C+ CONNECT TO ROCHESTER C/END-EXEC C MOVE '1' *IN51 After 1.connect C ELSE local Chapter 5. DRDA and two-phase commitment control 107 C/EXEC SQL C+ SET CONNECTION ROCHESTER C/END-EXEC C END * **************************************************************** * An order detail record is inserted in the local database, * * if referential integrity rules are not violated, i.e. * * the primary key of ORDERDTL file must be unique, and/or a * * corresponding order number must exist in the ORDERHDR * * parent file. Otherwise an SQL error message is sent from * * database management: * **************************************************************** * C/EXEC SQL C+ INSERT INTO ORDENTL/ORDERDTL (ORHNBR, PRDNBR, ORDQTY, ORDTOT) C+ VALUES(:ORDNBR, :DPRDNR, :DQUANT, :DITTOT) C/END-EXEC ................... ................... ................... C SQLCOD IFEQ -530 RI Constraint ................... ................... * **************************************************************** * If ORDERHDR parent file does not have corresponding * * order number (RI rule violated), * * update of order quantity in STOCK file on remote system * * is rolled back by two-phase commitment control management: * **************************************************************** * C/EXEC SQL C+ ROLLBACK C/END-EXEC ................... ................... ................... C GOTO BEGIN ................... ................... ................... ................... * **************************************************************** * If PF3 is pressed, order entry has finished. All * * connections are released in order to save on resources: * **************************************************************** * C/EXEC SQL C+ RELEASE ALL C/END-EXEC * **************************************************************** * The following COMMIT statement activates previous RELEASE: * **************************************************************** * C/EXEC SQL C+ COMMIT 108 Advanced Functions and Administration on DB2 Universal Database for iSeries C/END-EXEC ................... ................... ................... 5.10 DRDA over TCP/IP So far, we have dealt with either DRDA over SNA or with the DRDA implementation on the iSeries server in general. This section discusses the iSeries server implementation of DRDA over TCP/IP. The requirement for DRDA over TCP/IP stems from the explosive growth in usage of this protocol and the fact that many large accounts are already running TCP/IP or are moving all new applications to TCP/IP. 
The support for DRDA over TCP/IP on the iSeries server has been made available with OS/400 version V4R2M0. The implementation of DRDA over TCP/IP up to V4R5 supports DRDA level 1. This satisfies the UNIX or Windows NT client and DataPropagator needs. It is in V5R1 that the implementation of DRDA over TCP/IP supports DRDA level 2. The DRDA application server is based on multiple connection-oriented server jobs running in the QSYSWRK subsystem. A DRDA background program (listener) listens for TCP connect requests on well-known DRDA port 446. The DRDA server jobs are defined by prestart job entries. Once the application requester connects to the listener at the AS, the listener issues a request to wake up a prestarted server job. The listener then passes the socket descriptor to the server job, and any further communication occurs directly between the client application and server job. If you use the SNA implementation of DRDA, you need to configure the controller that governs the communication between the local and the remote system. Then, you need to refer to this controller in the device description parameter of the ADDRDBDIRE command. If you use the TCP/IP implementation of DRDA, you can refer to the IP address of the remote server on the Remote Location Name parameter of the ADDRDBDIRE command, and specify the port if it is other than the default DRDA port of 446. The iSeries server always uses the default port. This eliminates all of the complexity of configuring the communications between the two servers. 5.10.1 Configuring DRDA over TCP/IP This section covers the configuration process for DRDA over TCP/IP between two iSeries servers. The system located in Rochester takes the role of the application server, and the system in Zurich accesses the database located on the Rochester machine as an application requester. The configuration process for this simple scenario consists of these phases:  Setting up the application server: This phase involves the following configuration activities: a. Configuring TCP/IP on the AS system b. Setting the attributes for the DDM server job c. Starting the DDM server job  Setting up the application requester: This phase involves the following configuration activities: a. Configuring TCP/IP on the AR system b. Adding the AS system to the Relational Database Directory c. Defining the user profile under which the connect to the AS is being done Chapter 5. DRDA and two-phase commitment control 109 Configuring TCP/IP on the application server First you have to make sure that there is an appropriate host table entry for the local ROCHESTER machine. At this point, you may also want to add a host name for the AR machine located in Zurich. An alternative approach is to specify a remote domain name server in your TCP/IP configuration for automatic host name resolution. In our example, we outline the steps required for defining the TCP/IP host table entry: 1. At the CL command line, enter the GO CFGTCP command. 2. The TCP/IP configuration menu is shown. Here, select option 10. 3. The Work with TCP/IP Host Table Entries display is shown. Check that there is a valid entry for the local host. 4. Choose the ADD option, and enter the Internet address of the remote AR in the column named Internet Address (Figure 5-15). Figure 5-15 Working with the host table entries 5. This invokes the Add TCP/IP Host Table Entry (ADDTCPHTE) command. On this command, you can define the name and the alias names of the remote host (Figure 5-16). 
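If you prefer to skip the prompt displays, the same host table entry can also be added with a single unprompted command. This is only a sketch using the addresses from this example scenario:
ADDTCPHTE INTNETADR('10.10.10.2') HOSTNAME((ZURICH))
          TEXT('Host table entry for AS/400 system at Zurich')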
Figure 5-16 Adding the host table entries Setting the attributes for the DDM server job Before you start the server job on the AS system, you can change the job's attributes by using the Change DDM TCP/IP Attributes (CHGDDMTCPA) command. There are two attributes that can be changed with this command:  AUTOSTART: Specifies whether to automatically start the DDM server when TCP/IP is started. This parameter takes effect the next time the STRTCP command is run. Work with TCP/IP Host Table Entries System: ROCHESTER Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 7=Rename Internet Host Opt Address Name 1 10.10.10.2 _ 10.10.10.1 ROCHESTER _ 127.0.0.1 LOOPBACK LOCALHOST Add TCP/IP Host Table Entry (ADDTCPHTE) Type choices, press Enter. Internet address . . . . . . . . > '10.10.10.2' Host names: Name . . . . . . . . . . . . . ZURICH_______________________________ _________________________________________________________________________ _________________________________________________________________________ ___________________________________________________ + for more values _ Text 'description' . . . . . . . HOST TABLE ENTRY FOR AS/400 SYSTEM AT ZURICH______________ 110 Advanced Functions and Administration on DB2 Universal Database for iSeries  PWDRQD: Specifies whether client systems are required to have a password in addition to a user ID on incoming connection requests to this system as a server. This parameter takes effect on the next DRDA or DDM connect request over TCP/IP. Now, follow these steps: 1. On the CL command line, enter the CHGDDMTCPA command, and press PF4 so you can see the current settings. 2. Change the DDM server job attributes to the values shown in Figure 5-17. Figure 5-17 Changing the DDM server job attributes Starting the DDM server job Use the STRTCPSVR SERVER(*DDM) command to start the DDM server job. Now you can find a new job, QRWTLSTN, running in the QSYSWRK subsystem. This is the listener job waiting for connect requests on port 446. Configuring TCP/IP on the application requester Configuring TCP/IP on the application requester involves exactly the same steps as for the application server. Refer to “Configuring TCP/IP on the application server” on page 109. Adding the AS system to the relational database directory Probably the most important step in DRDA over TCP/IP configuration is adding the relational database entry for the remote database to which you want to connect. The relational database entry defines the location of the remote server and the method of connection. 1. Use the ADDRDBDIRE command to add the RDB entry for the application server located in Rochester (Figure 5-18). Change DDM TCP/IP Attributes (CHGDDMTCPA) Type choices, press Enter. Autostart server . . . . . . . . AUTOSTART *YES Password required . . . . . . . PWDRQD *YES Note: The value of the PWDRQD attribute has some implications for the Change Relational Database Directory Entry (CHGRDBDIRE) and Remove Relational Database Directory Entry (RMVRDBDIRE) commands. A bit in the *LOCAL RDB directory entry is used to store if a password is required to access this iSeries server by an AR. An inquiry message CPA3E01 is issued if the local entry is changed to a non-local entry or if the *LOCAL entry is deleted. The following text is associated with this message: Removing the *LOCAL directory entry may cause loss of configuration data. (C G) We strongly recommend that you record the current setting before you proceed. Chapter 5. 
DRDA and two-phase commitment control 111 Figure 5-18 Adding the relational database entry 2. On the Relational Database parameter of the ADDRDBDIRE command, specify the name of the database on the remote server. 3. On the Remote Location parameter of the ADDRDBDIRE command, specify the Internet address of the remote application server. If you already specified the Internet address of the remote server in TCP/IP host table entry, you can use the host name in place of the Internet address. In our example, we specified ROCHESTER rather than 10.10.10.1. This allows you to have some flexibility if, for some reason, you change the Internet address of the remote server. 4. On the Type parameter of the ADDRDBDIRE command, specify the value *IP. This signifies to the local system that you are using the TCP/IP implementation of DRDA to connect to the remote server. Note that *SNA is the default setting for this parameter. 5. On the PORT parameter, specify the default value *DRDA. Some servers, such as DB2 Universal Database, use a different port. You need to find out which port to specify from the documentation for the specific server product and set that port number. Defining a user profile for DRDA over the TCP/IP connection You can use the Add Server Authentication Entry (ADDSVRAUTE) command to add authentication information for a given user under which the connect is being done. The user ID and password are associated with the user profile and remote application server. This information flows to the AS each time the AR issues a connect request. 1. Make sure that you have *SECADM special authority, as well as *OBJMGT and *USE authorities, to the user profile to which the server authentication entry is being added. 2. Check whether the retain server security data (QRETSVRSEC) system value is set to 1. If the value is 0 (do not retain data), the password is not saved in the entry. 3. Type the ADDSVRAUTE command and press F4. 4. Add the authentication entry shown in Figure 5-19 and Figure 5-20. Add RDB Directory Entry (ADDRDBDIRE) Type choices, press Enter. Relational database . . . . . . ROCHESTER__________ Remote location: Name or address . . . . . . . 10.10.10.1__________________________ _________________________________________________________________________ _________________________________________________________________________ _________________________________________________ Type . . . . . . . . . . . . . *IP_ *SNA, *IP Text . . . . . . . . . . . . . . RDB ENTRY FOR THE AS/400 SYSTEM IN RO CHESTER 112 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 5-19 Adding the authentication entry Figure 5-20 Adding the authentication entry Note: Make sure the server name is in uppercase. Along with ADDSVRAUTE command, there are two additional commands available:  Change Server Authentication Entry (CHGSVRAUTE): Allows you to change the user ID and password for an authentication entry added by the ADDSVRAUTE command.  Remove Server Authentication Entry (RMVSVRAUTE): Allows you to remove authentication entries added by the ADDSVRAUTE command. 5.10.2 Examples of using DRDA over TCP/IP The discussion in the previous section contains details about the method of configuring DRDA over TCP/IP. This section discusses how to actually access the data in the remote server. It examines two different scenarios:  Using Interactive SQL  Using C programming Add Server Auth Entry (ADDSVRAUTE) Type choices, press Enter. User profile . . . . . . . . . . > JAREK Name, *CURRENT Server . 
. . . . . . . . . . . . > ROCHESTER____________________________ ________________________________________________________________________ ________________________________________________________________________ User ID . . . . . . . . . . . . *USRPRF______________________________ ________________________________________________________________________ ____________________________________________________________________... More... Add Server Auth Entry (ADDSVRAUTE) Type choices, press Enter. User password . . . . . . . . . PASSWORD_____________________________ ________________________________________________________________________ ________________________________________________________________________ ____________________________________________________________________... Bottom Chapter 5. DRDA and two-phase commitment control 113 Interactive SQL example Probably the easiest way to take advantage of a DRDA connection to a remote database is to use Interactive SQL. The following simple SQL session documents all major points to remember while running DRDA over TCP/IP: 1. Start Interactive SQL with the Start SQL (STRSQL) command. Make sure that the commitment control level you are running at is at least *CHG. At the SQL prompt, press the F13 key, and then select option 1. Change the Commitment Control Attribute to *CHG. Return to the SQL session by pressing F3. Now you are ready to test the SQL statements shown in Figure 5-21. Figure 5-21 Using DRDA over TCP/IP with SQL The following list explains the SQL statements that are numbered in Figure 5-21: 1 When you start your Interactive SQL session, you are connected, by default, to your local database. 2 The initial connection to the local system is protected by two-phase commit protocols. If a subsequent connection is made to a system that has only RUW capability, that connection is read-only. Therefore, you cannot perform any committable transactions, including automatically creating an SQL package for the Interactive SQL program, if the connection is to a non-iSeries server and this is the first time the connection is attempted. The solution to this is to drop the connection to the local database before you connect to the remote server. You may use the SQL statement RELEASE ALL to accomplish this task. When you execute this command, the resources held by any database in the system are released and any pending transaction is rolled back. Enter SQL Statements Type SQL statement, press Enter. Session was saved and started again. STRSQL parameters were ignored. Current connection is to relational database ZURICH.1 > release all 2 RELEASE of all relational databases completed. > commit 3 Commit completed. > connect to rochester Current connection is to relational database ROCHESTER. 4 > select * from ordapplib/customer SELECT statement run complete. > update ordapplib/customer set cuscrd = cuscrd * 1.1 5 where cusnbr = '99995' 1 rows updated in CUSTOMER in ORDAPPLIB. > select * from ordapplib/customer SELECT statement run complete. > commit 6 Commit completed. > call caseproc ('99995',0) 7 CALL statement complete. > select * from ordapplib/customer SELECT statement run complete. > commit Commit completed. ===> Bottom 114 Advanced Functions and Administration on DB2 Universal Database for iSeries 3 The COMMIT statement is required to move the connection from a released to an unconnected state. See the discussion in 5.3.2, “Connection states” on page 88, for more information. 
4 After you establish the connection to the remote AS, make sure that the connection type is 1. Place the cursor on the connection message, and press F1. A message display with the information shown in Figure 5-22 appears. Figure 5-22 Connection type for DRDA over TCP/IP 5 Because this is the first *RUW connection, committable updates can be performed. 6 Changes to the remote database should be committed before the connection is dropped. 7 One of the most powerful features of the DRDA architecture is the ability to run remote stored procedures. The stored procedure in this step performs exactly the same action as the update statement in 5. For a detailed discussion on DB2 UDB for iSeries stored procedure implementation, refer to Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503. 2. To finish the SQL session, press F3. You may want to save your current session by using option 1. ILE C example The way you code your application program, which accesses remote AS with DRDA over TCP/IP support, is similar to the procedure described for the Interactive SQL session. The following example code highlights the most important considerations: 1. Compile your program with an isolation level of *CHG or above. If you have a connection to the remote AS at compile time, you may create an appropriate SQL package on the target system. The following Create C-embedded SQL (CRTSQLCI) command creates the program object on the local system and the SQL package at the remote server: CRTSQLCI OBJ(ORDAPPLIB/SQLCNCT) SRCMBR(SQLCNCT) RDB(ROCHESTER) OBJTYPE(*PGM) OUTPUT(*PRINT) RDBCNNMTH(*RUW) 2. The ILE C code is listed here: Additional Message Information Message ID . . . . . : SQL7971 Severity . . . . . . . : 00 Message Type . . . . : Information Message . . . : Current connection is to relational database ROCHESTER. Cause . . . . : The product identification is QSQ04020, the server class name is QAS, and the user ID is JAREK. The connection method used is *DUW The connection type is 1. A list of the connection types follows: -- Type 1 indicates that committable updates can be performed and either the connection uses an unprotected conversation, is a connection to an application requester driver program using *RUW connection method, or is local connection using *RUW connection method. -- Type 2 indicates that the conversation is unprotected and no committable updates can be performed. -- Type 3 indicates that the conversation is protected and it is unknown if committable updates can be performed. -- Type 4 indicates that the conversation is unprotected and it is unknown More... Chapter 5. 
DRDA and two-phase commitment control 115

#include <stdio.h>                    /* printf(), gets(), scanf()          */
#include <stdlib.h>                   /* exit()                             */
#include <string.h>
#include <decimal.h>                  /* packed decimal type decimal(11,2)  */

EXEC SQL BEGIN DECLARE SECTION;
  char Chr_CusNbr[ 5 ];
  char Chr_CusNam[ 20 ];
  char Chr_CusTel[ 15 ];
  char Chr_CusFax[ 15 ];
  char Chr_CusAdr[ 20 ];
  char Chr_CusCty[ 20 ];
  char Chr_CusZip[ 5 ];
  decimal( 11,2 ) Nmpd_CusCrd;
  decimal( 11,2 ) Nmpd_CusTot;
EXEC SQL END DECLARE SECTION;

EXEC SQL INCLUDE SQLCA;

void main()
{
  char Chr_Commit;

  /* Prompt for the attributes of the customer to be inserted */
  printf( "Please enter the value for Customer Number :\n" );
  gets( Chr_CusNbr );
  printf( "Please enter the value for Customer Name :\n" );
  gets( Chr_CusNam );
  printf( "Please enter the value for Customer Tel :\n" );
  gets( Chr_CusTel );
  printf( "Please enter the value for Customer Fax :\n" );
  gets( Chr_CusFax );
  printf( "Please enter the value for Customer Address :\n" );
  gets( Chr_CusAdr );
  printf( "Please enter the value for Customer City :\n" );
  gets( Chr_CusCty );
  printf( "Please enter the value for Customer Zip :\n" );
  gets( Chr_CusZip );
  printf( "Please enter the value for Customer Credit Limit :\n" );
  scanf( "%D(11,2)", &Nmpd_CusCrd );

  EXEC SQL release all;                                            /* 1 */
  if ( sqlca.sqlcode != 0 )
  {
    printf( "Error occurred in the release of databases\n" );
    printf( "The SQLCODE is %d\n", sqlca.sqlcode );
    printf( "The Error Message :\n" );
    printf( "%s\n", sqlca.sqlerrmc );
    exit( -1 );
  }
  printf( "Released all Databases...\n" );

  EXEC SQL commit;                                                 /* 2 */
  if ( sqlca.sqlcode != 0 )
  {
    printf( "Error occurred in commit of the release of databases\n" );
    printf( "The SQLCODE is %d\n", sqlca.sqlcode );
    printf( "The Error Message :\n" );
    printf( "%s\n", sqlca.sqlerrmc );
    exit( -1 );
  }

  EXEC SQL connect to ROCHESTER;                                   /* 3 */
  if ( sqlca.sqlcode != 0 )
  {
    printf( "Error occurred in Connecting to Database\n" );
    printf( "The SQLCODE is %d\n", sqlca.sqlcode );
    printf( "The Error Message :\n" );
    printf( "%s\n", sqlca.sqlerrmc );
    exit( -1 );
  }
  printf( "Successfully connected to ROCHESTER..\n" );

  EXEC SQL commit;
  if ( sqlca.sqlcode != 0 )
  {
    printf( "Error occurred in commit of connection\n" );
    printf( "The SQLCODE is %d\n", sqlca.sqlcode );
    printf( "The Error Message :\n" );
    printf( "%s\n", sqlca.sqlerrmc );
    exit( -1 );
  }
  printf( "Committed the Connection...\n" );

  /* 4 */
  EXEC SQL call ordapplib/inscst( :Chr_CusNbr, :Chr_CusNam, :Chr_CusTel,
                                  :Chr_CusFax, :Chr_CusAdr, :Chr_CusCty,
                                  :Chr_CusZip, :Nmpd_CusCrd );
  if ( sqlca.sqlcode != 0 )
  {
    printf( "Error occurred in calling stored procedure\n" );
    printf( "The SQLCODE is %d\n", sqlca.sqlcode );
    printf( "The Error Message :\n" );
    printf( "%s\n", sqlca.sqlerrmc );
    EXEC SQL rollback;
    if ( sqlca.sqlcode != 0 )
    {
      printf( "Error occurred in rollback\n" );
      printf( "The SQLCODE is %d\n", sqlca.sqlcode );
      printf( "The Error Message :\n" );
      printf( "%s\n", sqlca.sqlerrmc );
      exit( -1 );
    }
    printf( "Rollback Complete...\n" );
  }
  else
  {
    EXEC SQL commit;
    if ( sqlca.sqlcode != 0 )
    {
      printf( "Error occurred in commit\n" );
      printf( "The SQLCODE is %d\n", sqlca.sqlcode );
      printf( "The Error Message :\n" );
      printf( "%s\n", sqlca.sqlerrmc );
      exit( -1 );
    }
    printf( "Commit Complete...\n" );
  }
  exit( 0 );
}

5.10.3 Troubleshooting DRDA over TCP/IP
DRDA over TCP/IP tends to work reliably until something in the environment changes. When a problem does occur, isolate it methodically. First, ask yourself these simple questions:  Is the server job running on the application server? Make sure that QRWTLSTN is running in the QSYSWRK subsystem.
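For example, a quick check from an AS/400 command line might look like the following sketch; the commands themselves appear elsewhere in this book, and the job and subsystem names are the defaults described here:
   WRKACTJOB SBS(QSYSWRK) JOB(QRWTLSTN)      Verify that the DRDA listener job is active
   STRTCPSVR SERVER(*DDM)                    Start the listener if it is not running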
Start the NETSTAT command, and select option 3 to check whether the listener job listens on the well-known port 446.  Are you authorized to use the connection? Does the server require a password along with the user ID on the connection request? If a password is needed, add your profile by using the ADDSVRAUTE command to the server authorization entry list.  Does your connection permit committable updates? Use Interactive SQL to check the connection type to the remote application server. Refer to “Interactive SQL example” on page 113, for a detailed discussion on this subject. If you went over this simple checklist and still encounter problems with your DRDA over TCP/IP connection, it is time to take a more systematic approach: 1. On the AS system, find the prestart job that is servicing your requests. When you start the listener job with the STRTCPSVR command, one or more prestart jobs are started in the QSYSWRK subsystem. The name of this prestart job is QRWTSRVR, and the user profile under which the job runs initially is QUSER. When your request to start a connection is accepted by the prestart job, it swaps QUSER to your user profile. The easiest way to identify the fully-qualified name for the prestart job servicing your requests is to look into the history log. There should be a log entry pertaining to your user ID (Figure 5-23). Notes:  Disconnect from the local database since it is protected by a two-phase commit. If you have a connection to a *DUW capable database, all subsequent connections to *RUW capable databases are read-only.  The COMMIT statement is needed to change the local database status from released to unconnected.  Connect to the remote AS using DRDA over TCP/IP. The connection method is *RUW, and committable updates are permitted.  Call the remote stored procedure. This procedure inserts a new record into the customer file. 118 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 5-23 Identifying a server job servicing your profile An alternative method is to use the WRKACTJOB command. 2. Once you identify your prestart job, you can start a service job with the STRSRVJOB command (Figure 5-24). Figure 5-24 Starting a service job 3. Enter the STRDBG command, and press F4. Change the Update production Files parameter to *YES (Figure 5-25). Figure 5-25 Starting debug for the prestart job 4. If the connection to the AS is still active, you can check the AR-AS interaction by looking at the job log of the prestarted job that is servicing your requests. Use the following command on the AS to display the job log: DSPJOBLOG JOB(034509/QUSER/QRWTSRVR) Display History Log Contents Job 034557/QUSER/QRWTSRVR started on 12/17/01 at 11:44:41 in subsystem QSYSWRK Job 034558/QUSER/QRWTSRVR started on 12/17/01 at 11:44:41 in subsystem QSYSWRK DDM job 034509/QUSER/QRWTSRVR servicing user JAREK on 12/17/01 at 11:44:42. Start Service Job (STRSRVJOB) Type choices, press Enter. Job name . . . . . . . . . . . . QRWTSRVR Name User . . . . . . . . . . . . . QUSER Name Number . . . . . . . . . . . . 034509 000000-999999 Start Debug (STRDBG) Type choices, press Enter. Program . . . . . . . . . . . . PGM *NONE Library . . . . . . . . . . . + for more values Default program . . . . . . . . DFTPGM *PGM Maximum trace statements . . . . MAXTRC 200 Trace full . . . . . . . . . . . TRCFULL *STOPTRC Update production files . . . . UPDPROD *YES Chapter 5. 
DRDA and two-phase commitment control 119 Figure 5-26 Job log of the prestart job Looking at the job log entries (Figure 5-26), you can now see that you were trying to call the INSCST stored procedure with an incorrect number of parameters. 5. Close your connection to the AS system. Since the prestart job was being serviced, the job log associated with this job is saved in a spooled file. This spooled file is stored with your user ID. Note: The job log is also saved when the system detects that a serious error occurred in processing the request that ended the connection. 6. Use the Work with Spooled File (WRKSPLF) command to display the content of the spooled file (Figure 5-27). Figure 5-27 Spooled file content 7. Stop debugging with the End Debug (ENDDBG) command, and stop the service job with the End Server Job (ENDSRVJOB) command. Display All Messages System: ROCHESTER Job . . : QRWTSRVR User . . : QUSER Number . . . : 034509 Job 034557/QUSER/QRWTSRVR started on 12/17/01 at 11:44:41 in subsystem QSYSWRK in QSYS. Job entered system on 12/17/01 at 11:44:41. Target job assigned to handle DDM connection started by source system ove TCP/IP. ACGDTA for 034557/QUSER/QRWTSRVR not journaled; reason 1. Local relational database accessed by ZURICH. Number of parameters on CALL not valid for procedure INSCST in ORDAPPLIB. 5769SS1 V4R2M0 980228 Display Job Log ROCHESTER 12/17/01 14:28:56 Page 1 Job name . . . . . . . . . . : QRWTSRVR User . . . . . . : QUSER Number . . . . . . . . . . . : 034558 Job description . . . . . . : QUSER Library . . . . . : QGPL MSGID TYPE SEV DATE TIME FROM PGM LIBRARY INST TO PGM LIBRARY INST CPF1124 Information 00 12/17/01 11:44:41 QWTPIIPP QSYS 0599 *EXT *N Message . . . . : Job 034558/QUSER/QRWTSRVR started on 12/17/01 at 11:44:41 in subsystem QSYSWRK in QSYS SQL0440 Diagnostic 30 12/17/01 14:26:56 QSQXCUTE QSYS 1B9A QSQXCUTE QSYS 1B9A Message . . . . : Number of parameters on CALL not valid for procedure INSCST in ORDAPPLIB. Cause . . . . . : The number of parameters specified on a CALL statement is not the same as the number of parameters declared for procedure INSCST in ORDAPPLIB. Recovery . . . : Specify the same number of parameters on the CALL as on the procedure definition. Try the request again. 120 Advanced Functions and Administration on DB2 Universal Database for iSeries 5.11 DB2 Connect access to an iSeries server via TCP/IP Since OS/400 V4R2, it is possible to connect to an iSeries server from DB2 Connect on a Windows machine using TCP/IP. The following example employs OS/400 V4R5 and the DB2 UDB for Windows NT Version 7.1. 5.11.1 On the iSeries server The following process lists the necessary steps to be performed on the iSeries server: 1. Verify that the TCP/IP stack is working correctly. To do this, obtain the IP address of the iSeries server or hostname, and ping the iSeries server from the DB2 Connect machine. To find the IP address, go to the Configure TCP/IP menu. Enter CFGTCP, and choose Work with TCP/IP interface. The IP address should be displayed as shown in Figure 5-28. Figure 5-28 Work with TCP/IP Interfaces 2. To find the hostname, go to the Configure TCP/IP menu, and choose Work with TCP/IP host table entries. You should find the hostname that has been assigned to the IP address. 3. You need a relational database (RDB) name for the iSeries server. If it has already been created, you can display it by using the DSPRDBDIRE command. The RDB with a location of *LOCAL is the one you need, as shown in Figure 5-29. 
If it has not been created, use the ADDRDBDIRE command to add the RDB entry. For example, the following command would add an RDB entry named DALLASDB: ADDRDBDIRE RDB(DALLASDB) RMTLOCNAME(*LOCAL) Work with TCP/IP Interfaces System: AS23 Type options, press Enter. 1=Add 2=Change 4=Remove 5=Display 9=Start 10=End Internet Subnet Line Line Opt Address Mask Description Type 10.10.10.10 255.255.255.0 TRNLINE *TRLAN Bottom F3=Exit F5=Refresh F6=Print list F11=Display interface status F12=Cancel F17=Top F18=Bottom Chapter 5. DRDA and two-phase commitment control 121 Figure 5-29 Display Relational Database Directory Entries 4. You must create a collection called NULLID. The reason for this is that the utilities shipped with DB2 Connect and DB2 UDB store their packages in the NULLID collection. Since it does not exist by default in the iSeries server, you must create it using the following command: CRTLIB LIB(NULLID) 5. Products that support DRDA automatically perform any necessary code page conversions at the receiving system. For this to happen, both systems need a translation table from their code page to the partner code page. The default Coded Character Set Identifier (CCSID) on the iSeries server is 65535. Since DB2 Connect does not have a translation table for this code page, you need to change the individual user profiles to contain a page. You need to change the individual user profiles to contain a CCSID that can be converted properly by DB2 Connect. For US English, this is 037. For other languages, see DB2 Connect Personal Edition Quick Beginning, GC09-2967. The following command changes the CCSID for an individual user profile to 037: CHGUSRPRF userid CCSID(037) 6. Verify that you are using the default port 446 for DRDA service. To do this, go to the Configure TCP/IP menu (CFGTCP), select Configure Related Tables, and then select Work with service table entries. Verify that the DRDA service is set for port 446, as shown in Figure 5-30. Display Relational Database Directory Entries Position to . . . . . . Type options, press Enter. 5=Display details 6=Print details Relational Remote Option Database Location Text AS23 *LOCAL DB entry for local AS23 AS24 10.10.10.20 RBD Entry for AS24 Bottom F3=Exit F5=Refresh F6=Print list F12=Cancel (C) COPYRIGHT IBM CORP. 1980, 2000. 122 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 5-30 Work with Service Table Entries 7. The Distributed Data Management (DDM) job must be started for DRDA to work. If you want the DDM job to be automatically started whenever TCP/IP is started, you can change the attributes of the DDM job using the CHGDDMTCPA command and set the Autostart server parameter to *YES. 8. If you choose not to autostart the server, issue the following command to start the DDM server job: STRTCPSVR(*DDM) 9. Make sure you have user IDs defined on the iSeries server for the users that will be connecting. 5.11.2 On the workstation The following steps are required on DB2 UDB: 1. Launch Client Configuration Assistant (db2cca from the command prompt). 2. Click the Add button to add a new data source. 3. On the Source tab, choose Manual configuration, and click Next. 4. On the Protocol tab, choose TCP/IP for protocol, and select the item The database physical residence on a host or AS/400. Then, select the option Connect directly to the server, as shown in Figure 5-31. Click Next. Work with Service Table Entries System: AS23 Type options, press Enter. 
1=Add 4=Remove 5=Display Opt Service Port Protocol drda 446 udp echo 7 tcp echo 7 udp exec 512 tcp finger 79 tcp finger 79 udp ftp-control 21 tcp ftp-control 21 udp ftp-data 20 tcp ftp-data 20 udp gopher 70 tcp More... Parameters for options 1 and 4 or command ===> F3=Exit F4=Prompt F5=Refresh F6=Print list F9=Retrieve F12=Cancel F17=Top F18=Bottom Chapter 5. DRDA and two-phase commitment control 123 Figure 5-31 Protocol tab of the workstation configuration 5. On the TCP/IP tab, fill in the host name of the iSeries server. The port number should be 446. Click Next. 6. On the Database tab, fill in the relational database name, and click Next. 7. If you plan to use the ODBC applications, click the ODBC tab and select Register this database for ODBC as a system data source. 8. Click Finish. 9. Click Test Connection to verify that the connection works. You are prompted for an iSeries server user ID and password, as shown in Figure 5-32. Figure 5-32 Prompt for iSeries server user ID and password 10.Enter your user ID and password, and then click OK. If the connection test passed, a successful message box appears, as shown in Figure 5-33. 124 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 5-33 Message box for successful connection 5.11.3 Consideration Once you complete the configuration in the previous section, you should be able to access iSeries server data. However, you may notice that there are some areas that work differently in the iSeries server and other platforms. These areas are discussed in the following sections. Administrative interface In the current implementation, you cannot perform administrative functions for the iSeries server database through the UDB control center. The best tool for the iSeries server database is Operations Navigator. The differences in the administrative interface are due to the variation of administrative requirements and the operation of the underlying operating system. Several administrative functions are not available by DB2 Universal Database for iSeries because the database manager and operating system automatically handle the tasks. For example, DB2 Universal Database for iSeries doesn't provide a RUNSTATS utility for optimizer statistics because its database manager keeps these statistics current at all times. Likewise, there is no concept of table spaces in DB2 Universal Database for iSeries. DB2 Universal Database for iSeries does not support the notion of independent, isolated databases on the iSeries server. Instead, DB2 Universal Database for iSeries is implemented as a single system-wide database. Journaling in DB2 Universal Database for iSeries DB2 Universal Database for iSeries is so reliable that database administrators may not journal all their tables. However, if you are connecting to an iSeries server database through DB2 Connect, tables must be journaled before the database can be accessed for update. Otherwise, you only have read-only access to the table. If you attempt to update the table without journaling, you would see an error message such as this example: --------------------------- Command entered ---------------------------- insert into result values ('Insert','Insert from UDB/NT command center'); ------------------------------------------------------------------------ DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL7008N REXX variable "RESULT " contains inconsistent data. 
SQLSTATE=55019 © Copyright IBM Corp. 1994, 1997, 2000, 2001 125 Chapter 6. DB2 Import and Export utilities This chapter covers the following topics:  Explains the CPYFRMIMPF command (Import utility)  Explains the CPYTOIMPF command (Export utility)  Explains how to use the Import and Export utilities from DB2 UDB V7.2 6 126 Advanced Functions and Administration on DB2 Universal Database for iSeries 6.1 Introduction A data loader utility enables the loading of data exported from other database servers into DB2 UDB for iSeries. Two commands in OS/400 are available for users to import (load), and export (unload) data to and from the iSeries server:  Copy From Import File (CPYFRMIMPF): Loads the imported data into the DB2 UDB for iSeries table  Copy To Import File (CPYTOIMPF): Prepares the DB2 UDB for iSeries table data for export from the iSeries server 6.2 DB2 UDB for iSeries Import utility Database tables from heterogenous databases, such as Oracle, Sybase, Microsoft SQL Server, etc., can be ported to DB2 UDB for iSeries tables using this utility. 6.2.1 CPYFRMIMPF The Copy From Import File (CPYFRMIMPF) command is used to load data to DB2 UDB for iSeries after the file is copied to a source file (FROMFILE) and then into a DB2 UDB for iSeries table (TOFILE). This is shown in Figure 6-1. Figure 6-1 Data load flow The following steps summarize a data load from a database table: 1. Create an import file for the data that will be copied to DB2 UDB for iSeries. The format for this data can be in delimited format or fixed format. 2. Send the data to the import file (typically with FTP or Client Access). 3. Create a DB2 UDB for iSeries externally described database file(table) or DDM file that contains the resulting data (target file) of the import file. 4. Create the Field Definition File if a *FIXED data format is used. File from external database Import file created on iSeries server (FROMFILE) Field Definition File (optional) Final DB2 UDB for iSeries (TOFILE) TCP/IP CA/400 ODBC CPYFRMIMPF Chapter 6. DB2 Import and Export utilities 127 5. Use the CPYFRMIMPF command to copy (translate or parse the records) from the import file to the target file. The source file (FROMFILE) The source file (FROMFILE) can be any one of the following file types:  Stream file  DDM file  Tape file  Source physical file  Distributed physical file  Program described physical file  Single format logical file  Externally described physical file with one field (of non-numeric data type) Note: If an externally described physical file has one field, the data type must be CHARACTER, IGC OPEN, IGC EITHER, IGC ONLY, GRAPHIC, or variable length. The file can be copied or imported to the iSeries server using several methods, including:  TCP/IP file transfer (text transfer)  CA/400 support (file transfer, ODBC)  Copy From Tape (CPYFRMTAP) command Sending the data into the import file causes the necessary ASCII to EBCDIC data conversions to occur. The target file (TOFILE) The source file is copied to the database target file, also referred to as the TOFILE. The target file can be any one of the following file types:  Source file  DDM file  Distributed physical file  Program described file  Externally described physical file Data format The data contained in the imported file can be in either the delimiter format or the fixed format:  Character delimited: A delimiter format import file has a series of characters (delimiters) that are used to define where fields begin and end. 
The parameters of the command defines what characters are used for delimiters.  Fixed format: A fixed format import file uses the user-defined Field Definition File (FDF) that defines the format of the import file. The Field Definition File is used to define where fields begin, end, and are null. The record format of import file (DTAFMT) parameter determines if the source file is delimited (*DLM) or fixed (*FIXED). Field definition file The field definition file to describe fixed formatted files must use the format shown in Table 6-1. 128 Advanced Functions and Administration on DB2 Universal Database for iSeries Table 6-1 Field definition format In reference to Table 6-1, note the following statements:  Field name is the name of the field in the TOFILE. FDF is case sensitive.  The Starting position is the byte in the FROMFILE from where the data is copied.  The Ending position is the byte in the FROMFILE from where the data is copied.  The Null character position is the byte in the FROMFILE that indicates if the field is null. A value of “Y” means the field is null. A value of “N” means the field is not null. If this value is “0”, no null character is provided.  *END is the indicator for the end of the field definition file and must be included. Delimited format import file The import file’s data is interpreted by the following characters and data types for a delimited format import file:  Blanks – All leading and trailing blanks are discarded for character fields unless enclosed by string delimiters. – A field of all blanks is interpreted as a null field for character data. – Blanks cannot be embedded within a numeric field. – A blank cannot be selected as a delimiter.  Null fields A null field is defined as: – Two adjacent field delimiters (no data in between) – A field delimiter followed by a record delimiter (no data in between), an empty string – A field of all blanks If the field is null, the following statement is true: If the output field is not nullable and the import is a null field, the record is not copied, and an error is signaled.  Delimiters – A delimiter cannot be a blank. – A string delimiter cannot be the same as a field delimiter, record delimiter, date separator, or time separator. – A string delimiter can enclose all non-numeric fields (character, date, time, and so on). The string delimiter character should not be contained within the character string. – A field and record delimiter can be the same character. – The defaults for delimiters are as follows: Field name Starting position Ending position Null character position Field1 1 12 13 Field2 14 24 0 Field3 25 55 56 *END Chapter 6. DB2 Import and Export utilities 129 • String: Double quote (") • Field: Comma (,) • Decimal point: Period (.) • Record: End of record (*EOR) – If the data type of the from parameter is CHARACTER, OPEN, EITHER, or ONLY, all double-byte data must be contained within string delimiters or shift characters (for OPEN, EITHER, or ONLY data types).  Numeric field – Numeric fields can be imported in decimal or exponential form. – Data to the right of the decimal point may be truncated depending on the output data format. – Decimal points are either a period or a comma (command option). – Signed numeric fields are supported, + or -.  Character or Varcharacter fields – Fields too large to fit in the output fields are truncated (right), and a diagnostic message is sent. – An empty string is defined as two string delimiters with no data between them. 
– For a character to be recognized as a starting string delimiter, it must be the first non-blank character in the field. For example, 'abc' with ' as the delimiter is the same as abc for input. – Data after an ending string delimiter and before a field or record delimiter is discarded.  IGC or VarIGC fields – The data from the FROMFILE is copied to the TOFILE, and if any invalid data is received, a mapping error is generated. – Data located between the Shift Out and Shift In characters is treated as double byte data and is not parsed for delimiters. The Shift characters, in this case, become “string delimiters”.  Graphic, VarGraphic fields The data from the FROMFILE is copied to the TOFILE.  CCSIDs (coded character set identifiers) – The data from the FROMFILE is read into a buffer by the CCSID of the FROMFILE. The data in the buffer is checked and written to the TOFILE. The CCSID of the open TOFILE is set to the value of the FROMFILE, unless a TOFILE CCSID is used. If a TOFILE CCSID is used, the data is converted to that CCSID. If the FROMFILE is a tape file, and the FROMCCSID(*FILE) parameter is specified, the job CCSID is used, or the FROMFILE CCSID is requested by the user. – The character data (delimiters) passed in on the command is converted to the CCSID of the FROMFILE. This allows the character data of the FROMFILE and command parameters to be compatible.  Date field – All date formats supported by the iSeries server can be imported, including: *ISO, *USA, *EUR, *JIS, *MDY, *DMY,*YMD, *JUL, and *YYMD. – A date field can be copied to a timestamp field. 130 Advanced Functions and Administration on DB2 Universal Database for iSeries  Time field – All time formats supported by the iSeries server can be imported, including: *ISO, *USA, *EUR, *JIS, and *HMS. – A time field can be copied to a timestamp field.  Date and time separators All valid separators are supported for date and time fields.  Timestamp field Timestamp import fields must be 26 bytes. The import ensures that periods exist in the time portion and a dash exists between the date and time portions of the timestamp.  Number of fields mismatch If the FROMFILE or TOFILE do not have the same number of fields, the data is either truncated to the smaller TOFILE size, or the extra TOFILE fields receives a null value. If the fields are not null capable, an error message is issued. 6.2.2 Data load example (file definition file) A source file, IMPF_TEST, was created using the Create Physical File (CRTPF) command specifying a record length of 258 bytes. The customer data was then transferred to the iSeries server source file using FTP. A sample of the data is shown in Figure 6-2. Figure 6-2 Customer data sample A target file, or TOFILE, was created using Data Definition Specification (DDS) called CUST_IMPF. At this time, the CPYFRMIMPF command can be used to format the delimited file as shown in Figure 6-3. Display Physical File Member File . . . . . . : IMPF_TEST Library . . . . : TPSTAR Member . . . . . : IMPF_TEST Record . . . . . : 1 Control . . . . . Column . . . . . : 1 Find . . . . . . . *...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+... 
1 ,"Customer#000000001 ",15 Monroe Ave, Chicago IL 60601 2 ,"Customer#000000002 ","Stewartville MN 55976 3 ,"Customer#000000003 ","389 Dexter Pl, Fargo ND 4 ,"Customer#000000004 ","10 N Main, Wausau WI 5 ,"Customer#000000005 ","Bailey Bldg, Bedford Falls 6 ,"Customer#000000006 ","101 Superior St, Duluth MN 7 ,"Customer#000000007 ","1921 N 5th St, St Louis MO 8 ,"Customer#000000008 ","32891 Park Ave, New York NY 9 ,"Customer#000000009 ","1032 S Broadway, Littleton CO 10 ,"Customer#000000010 ","5672 Cobb Pkwy, Bldg 3, Atlanta GA 11 ,"Customer#000000011 ","8192 River Rd, Aurora IL 12 ,"Customer#000000012 ","County Rd 9, Pine Island MN 13 ,"Customer#000000013 ","25th and Main, Appleton WI 14 ,"Customer#000000014 ","2342 Center St, Earlville IL 15 ,"Customer#000000015 ","444 Michigan Ave, Chicago Chapter 6. DB2 Import and Export utilities 131 Figure 6-3 CPYFRMIMPF example A fixed format using a field definition file (FDF) can also be used to convert the data. The example in Figure 6-4 of the FDF, CUST.FDF, was created in the Screen Edit Utility (SEU) as a TEXT file. Figure 6-4 Field definition file The CPYFRMIMPF command for the *FIXED format is shown in Figure 6-5 and Figure 6-6. Copy From Import File (CPYFRMIMPF) Type choices, press Enter. From stream file . . . . . . . . From file: File . . . . . . . . . . . . . IMPF_TEST Name Library . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Member . . . . . . . . . . . . *FIRST Name, *FIRST To data base file: File . . . . . . . . . . . . . CUST_IMPF Name Library . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Member . . . . . . . . . . . . *FIRST Name, *FIRST Replace or add records . . . . . *ADD *ADD, *REPLACE, *UPDADD Stream file record length . . . *TOFILE Number, *TOFILE From CCSID . . . . . . . . . . . *FILE 1-65533, *FILE Record delimiter . . . . . . . . *EOR Character value, *ALL... Record format of import file . . *DLM *DLM, *FIXED String delimiter . . . . . . . . '"' Character value, *NONE More... LEVEL 0 SCREEN CHECK YOUR LEVEL This is screen.Columns . . . : 1 71 Browse V2KEA45/QTXTSRC SEU==> CUST.FDF FMT ** ...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 *************** Beginning of data ************************************* 001.00 CUSTKEY 1 12 0 002.00 CUSTOMER 14 40 0 003.00 ADDRESS 42 83 0 004.00 PHONE 85 101 0 005.00 MKTSEGMENT 103 114 0 006.00 COUNTRY 116 142 0 007.00 CONTINENT 144 170 0 008.00 REGION 172 198 0 009.00 TERRITORY 200 226 0 010.00 SALES00001 228 254 0 011.00 DUMMYKEY 256 258 0 012.00 *END ****************** End of data **************************************** 132 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 6-5 CPYFRMIMPF Command (Part 1 of 3) Figure 6-6 CPYFRMIMPF command (Part 2 of 3) Copy From Import File (CPYFRMIMPF) Type choices, press Enter. From stream file . . . . . . . . From file: File . . . . . . . . . . . . . > IMPF_TEST Name Library . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Member . . . . . . . . . . . . *FIRST Name, *FIRST To data base file: File . . . . . . . . . . . . . > CUST_IMPF Name Library . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Member . . . . . . . . . . . . *FIRST Name, *FIRST Replace or add records . . . . . *ADD *ADD, *REPLACE, *UPDADD Stream file record length . . . *TOFILE Number, *TOFILE From CCSID . . . . . . . . . . . *FILE 1-65533, *FILE Record delimiter . . . . . . . . > *ALL Character value, *ALL... Record format of import file . . > *FIXED *DLM, *FIXED String delimiter . . . . . . . . '"' Character value, *NONE More... 
Copy From Import File (CPYFRMIMPF) Type choices, press Enter. Remove leading blanks . . . . . *LEADING *LEADING, *NONE Field delimiter . . . . . . . . ',' Character value, *TAB Field definition file: File . . . . . . . . . . . . . > QTXTSRC Name Library . . . . . . . . . . > TPSTAR Name, *LIBL, *CURLIB Member . . . . . . . . . . . . > CUST_FDF Name, *FIRST Decimal point . . . . . . . . . *PERIOD *PERIOD, *COMMA Date format . . . . . . . . . . *ISO *ISO, *USA, *EUR, *JIS... Date separator . . . . . . . . . '/' /, -, ., ,, *BLANK Time format . . . . . . . . . . *ISO *ISO, *USA, *EUR, *JIS, *HMS Time separator . . . . . . . . . ':' :, ., *BLANK Copy from record number: Copy from record number . . . *FIRST Number, *FIRST Number of records to copy . . *END Number, *END Errors allowed . . . . . . . . . *NOMAX Number, *NOMAX More... Chapter 6. DB2 Import and Export utilities 133 Figure 6-7 CPYFRMIMPF Command (Part 3 of 3) If a field in the source file is not included in the target file, omit the field in the FDF file. Enhancements have been made to the CPYFRMIMPF Import utility by adding the following parameters:  Remove leading blanks (RMVBLANK) – If *LEADING is specified along with STRDLM(*NONE), then DB2 UDB for iSeries strips leading blanks from a character string before placing the resulting string in the specified character column. – With *NONE, all leading blanks are included in the result string that is copied into the specified target character column.  Replace null values (RPLNULLVAL) – When *FLDFT is specified, if the data being imported (for example, blanks in a numeric field) causes DB2 UDB for iSeries to place a null value in a target column that does not allow nulls, then DB2 UDB for iSeries will assign the default value to the target column instead. – When the default value *NO is specified, no replacement of null values is performed. 6.2.3 Data load example (Data Definition Language) The Data Definition Language (DDL) source file STAFF.ddl and the Database extract file STAFFA.csv in comma-separated variable (CSV) format reside on the source system. The database extract file has to be exported to DB2 UDB for iSeries on the target iSeries server AS23. A TCP/IP connection exists between the two systems. Copy From Import File (CPYFRMIMPF) Type choices, press Enter. Error record file: File . . . . . . . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . Name, *LIBL, *CURLIB Member . . . . . . . . . . . . Name, *FIRST Replace or add records . . . . . *ADD *ADD, *REPLACE Replace null values . . . . . . *NO *NO, *FLDDFT Bottom F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys 134 Advanced Functions and Administration on DB2 Universal Database for iSeries To transfer these two files to the iSeries server, follow these steps: 1. FTP the ddl file and csv file to the iSeries server as shown here: C:\>ftp as23 1 Connected to AS23. 220-QTCP at rchasm23.rchland.ibm.com. 220 Connection will close if idle more than 5 minutes. User (AS23:(none)): vijay 2 331 Enter password. Password: 3 230 VIJAY logged on. ftp> put c:\vijay\staff.ddl vijay/sqlsrc.staff 4 200 PORT subcommand request successful. 150 Sending file to member STAFF in file SQLSRC in library VIJAY. 250 File transfer completed successfully. ftp: 317 bytes sent in 0.00Seconds 317000.00Kbytes/sec. ftp> put c:\vijay\staffa.csv vijay/staffa.staffa 5 200 PORT subcommand request successful. 150 Sending file to member STAFFA in file STAFFA in library VIJAY. 250 File transfer completed successfully. 
ftp: 2205 bytes sent in 0.01Seconds 220.50Kbytes/sec. ftp>quit 6 Figure 6-8 shows the DDL source imported to create the table STAFFI on the iSeries server. Notes: 1 From a command line, type FTP to the iSeries server AS23. 2 Enter your user ID and press Enter. 3 Type your password and press Enter. 4 Type the PUT sub-command to copy the staff.ddl file in the vijay directory to member STAFF in source physical file SQLSRC in the library VIJAY. Note the use of the forward slash (/) and the period (.) in the target file name (library/file.member) format. 5 Type the PUT sub-command to copy the extracted database file staffa.csv in the vijay directory to the single field physical file member STAFFA in the physical file STAFFA in the library VIJAY. Note the use of the forward slash (/) and the period (.) in the target file name (library/file.member) format. 6 Type QUIT and press Enter to exit the FTP session. Chapter 6. DB2 Import and Export utilities 135 Figure 6-8 Imported DDL for the STAFFA table Figure 6-9 shows the STAFFA data file imported in the CSV data format to the VIJAY library. Columns . . . : 1 71 Browse VIJAY/SQLSRC SEU==> STAFF FMT ** ...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 *************** Beginning of data ************************************* 0001.00 0002.00 CREATE TABLE VIJAY.STAFFI ( 0003.00 ID SMALLINT NOT NULL , 0004.00 NAME VARCHAR(9) CCSID 37 DEFAULT NULL , 0005.00 DEPT SMALLINT DEFAULT NULL , 0006.00 JOB CHAR(5) CCSID 37 DEFAULT NULL , 0007.00 "YEARS" SMALLINT DEFAULT NULL , 0008.00 SALARY DECIMAL(7, 2) DEFAULT NULL , 0009.00 COMM DECIMAL(7, 2) DEFAULT NULL 0010.00 ); 0011.00 ****************** End of data **************************************** F3=Exit F5=Refresh F9=Retrieve F10=Cursor F11=Toggle F12=Cancel F16=Repeat find F24=More keys (C) COPYRIGHT IBM CORP. 1981, 2000. 136 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 6-9 Imported STAFFA.csv data file 2. Use the Run SQL Statement (RUNSQLSTM) command from the iSeries server command line to use the DDL source to create the STAFFI table in library VIJAY: RUNSQLSTM SRCFILE(VIJAY/SQLSRC) SRCMBR(STAFF) COMMIT(*NONE) NAMING(*SQL) This command has the *SQL naming convention specified as used in the DDL in Figure 6-8. Also the COMMIT parameter is specified with the value *NONE as because just want to create the table in library VIJAY and do not plan to use commitment control. 3. The last step imports the STAFFA file in the CSV data format using the CPYFRMIMPF command. It also includes populating the STAFFI table in the library VIJAY. Type the following command from a command line and press Enter to accept the defaults for string a delimiter and field separator: CPYFRMIMPF FROMFILE(VIJAY/STAFFA) TOFILE(VIJAY/STAFFI) MBROPT(*REPLACE) Member STAFFI file STAFFI in VIJAY cleared. 35 records copied from member STAFFA. Use the RUNQRY command to look at the data in table STAFFI: RUNQRY *N VIJAY/STAFFI *...+....1....+....2....+....3....+....4....+....5....+....6. 
10 ,"Sanders ",20 ,"Mgr ",7 ,18357.50 ,300.00 20 ,"Pernal ",20 ,"Sales",8 ,18171.25 ,1112.45 30 ,"Marenghi ",38 ,"Mgr ",5 ,17506.75 ,500.00 40 ,"O'Brien ",38 ,"Sales",6 ,18006.00 ,846.55 50 ,"Hanes ",15 ,"Mgr ",10 ,20659.80 ,.00 60 ,"Quigley ",38 ,"Sales",0 ,16808.30 ,650.25 70 ,"Rothman ",15 ,"Sales",7 ,16502.83 ,1152.00 80 ,"James ",20 ,"Clerk",0 ,13504.60 ,128.20 90 ,"Koonitz ",42 ,"Sales",6 ,18001.75 ,1386.70 100 ,"Plotz ",42 ,"Mgr ",7 ,18352.80 ,.00 110 ,"Ngan ",15 ,"Clerk",5 ,12508.20 ,206.60 120 ,"Naughton ",38 ,"Clerk",0 ,12954.75 ,180.00 130 ,"Yamaguchi",42 ,"Clerk",6 ,10505.90 ,75.60 140 ,"Fraye ",51 ,"Mgr ",6 ,21150.00 ,.00 150 ,"Williams ",51 ,"Sales",6 ,19456.50 ,637.65 160 ,"Molinare ",10 ,"Mgr ",7 ,22959.20 ,.00 170 ,"Kermisch ",15 ,"Clerk",4 ,12258.50 ,110.10 180 ,"Abrahams ",38 ,"Clerk",3 ,12009.75 ,236.50 190 ,"Sneider ",20 ,"Clerk",8 ,14252.75 ,126.50 200 ,"Scoutten ",42 ,"Clerk",0 ,11508.60 ,84.20 210 ,"Lu ",10 ,"Mgr ",10 ,20010.00 ,.00 220 ,"Smith ",51 ,"Sales",7 ,17654.50 ,992.80 230 ,"Lundquist",51 ,"Clerk",3 ,13369.80 ,189.65 240 ,"Daniels ",10 ,"Mgr ",5 ,19260.25 ,.00 250 ,"Wheeler ",51 ,"Clerk",6 ,14460.00 ,513.30 260 ,"Jones ",10 ,"Mgr ",12 ,21234.00 ,.00 270 ,"Lea ",66 ,"Mgr ",9 ,18555.50 ,.00 280 ,"Wilson ",66 ,"Sales",9 ,18674.50 ,811.50 290 ,"Quill ",84 ,"Mgr ",10 ,19818.00 ,.00 300 ,"Davis ",84 ,"Sales",5 ,15454.50 ,806.10 310 ,"Graham ",66 ,"Sales",13 ,21000.00 ,200.30 320 ,"Gonzales ",66 ,"Sales",4 ,16858.20 ,844.00 330 ,"Burke ",66 ,"Clerk",1 ,10988.00 ,55.50 340 ,"Edwards ",84 ,"Sales",7 ,17844.00 ,1285.00 350 ,"Gafney ",84 ,"Clerk",5 ,13030.50 ,188.00 Chapter 6. DB2 Import and Export utilities 137 This command produces the report shown in Figure 6-10. Figure 6-10 RUNQRY report for the STAFFI table 6.2.4 Parallel data loader When using the CPYFRMIMPF command, you can take advantage of loading data into DB2 Universal Database for iSeries in parallel when the DB2 UDB for iSeries Symmetric Multiprocessing (SMP) licensed feature of OS/400 is installed and activated on the iSeries server to activate parallelism. DB2 UDB for iSeries uses multiple tasks to load the large file. The command breaks the file into blocks and submits the blocks in parallel; the entire file is processed at the same time. The number of tasks used during the copy is determined by DEGREE(*NBRTASKS) on the Change Query Attributes (CHGQRYA) command. Table 6-2, based on the results of performance testing by the Teraplex Center using a 12-processor iSeries server, shows the advantages of parallel processing. The test used an import file containing 350 million rows with 100 GB of data. Table 6-2 Parallel data load Display Report Report width . . . . . : 64 Position to line . . . . . Shift to column . . . . . . Line ....+....1....+....2....+....3....+....4....+....5....+....6.... 
ID NAME DEPT JOB YEARS SALARY COMM 000001 10 Sanders 20 Mgr 7 18,357.50 300.00 000002 20 Pernal 20 Sales 8 18,171.25 1,112.45 000003 30 Marenghi 38 Mgr 5 17,506.75 500.00 000004 40 O'Brien 38 Sales 6 18,006.00 846.55 000005 50 Hanes 15 Mgr 10 20,659.80 .00 000006 60 Quigley 38 Sales 0 16,808.30 650.25 000007 70 Rothman 15 Sales 7 16,502.83 1,152.00 000008 80 James 20 Clerk 0 13,504.60 128.20 000009 90 Koonitz 42 Sales 6 18,001.75 1,386.70 000010 100 Plotz 42 Mgr 7 18,352.80 .00 000011 110 Ngan 15 Clerk 5 12,508.20 206.60 000012 120 Naughton 38 Clerk 0 12,954.75 180.00 000013 130 Yamaguchi 42 Clerk 6 10,505.90 75.60 000014 140 Fraye 51 Mgr 6 21,150.00 .00 000015 150 Williams 51 Sales 6 19,456.50 637.65 000016 160 Molinare 10 Mgr 7 22,959.20 .00 More... F3=Exit F12=Cancel F19=Left F20=Right F21=Split Load time Degree of parallel processing 47+ hours 1 4+ hours 12 138 Advanced Functions and Administration on DB2 Universal Database for iSeries 6.3 DB2 UDB for iSeries Export utility DB2 UDB for iSeries tables can be exported into a flat file with the CPYTOIMPF CL command (single threaded only). 6.3.1 CPYTOIMPF The Copy To Import File (CPYTOIMPF) command is used to export data from a DB2 UDB for iSeries table to either a source physical file or a stream file. The command copies an externally defined file to an import file; the term import file is used to describe a file created for the purpose of copying data between heterogenous databases. The import file (TOSTMF or TOFILE parameter of the command) can be sent to an external system using FTP or Client Access. The export data flow is shown in Figure 6-11. Figure 6-11 Data export flow The following steps summarize a data load from a database file: 1. Create an import file (TOFILE) for the data that will be copied to the external system. The format for this data can be in delimited format or fixed format. 2. Use the CPYTOIMPF command to copy (translate or parse the records) from the source DB2 UDB for iSeries file (FROMFILE) to the import file (TOFILE). 3. Send the data in the import file (typically with FTP or Client Access) to the external system. The source file (FROMFILE) The source file (FROMFILE) can be any one of the following file types:  Source physical file  Distributed physical file  Single format logical file  Externally described physical file with one field (of non-numeric data type) Note: If an externally described physical file has one field, the data type must be CHARACTER, IGC OPEN, IGC EITHER, IGC ONLY, GRAPHIC, or variable length. CPYTOIMPF Stream file or source file FTP or Client Access DB2 UDB for iSeries table External system Chapter 6. DB2 Import and Export utilities 139 The file can be copied or exported from the iSeries server using several methods, including:  TCP/IP file transfer (text transfer)  CA/400 support (file transfer, ODBC)  Copy To Tape (CPYTOTAP) command Sending the data into the import file causes the necessary EBCDIC to ASCII data conversions to occur. The target file (TOFILE) The source file (FROMFILE) is copied to the import file, also referred to as the TOFILE or TOSTMF. 
The import file can be any one of the following file types:  Parameter TOFILE – Source physical file – If the file is not a source file, the file can have only one field; the field of the file cannot be a numeric data type – Program described physical file – Externally described physical file that can have only one field; the field of the file cannot be a numeric data type  Parameter TOSTMF Specifies the path name of the output stream file to which the source file is copied. Data format (DTAFMT) The data can be copied to the TOFILE as either delimiter format or fixed format:  Delimited format (*DLM): A series of characters as delimiters to define where strings, fields, and records begin and end. – Delimiters cannot be blank. – A period cannot be a string delimiter. – A string delimiter cannot be the same as a field or record delimiter. – The defaults for delimiters are: • String: Double quote (“) • Field: Comma (,) • Record: End of record (*EOR)  Fixed format (*FIXED): Each field of the file is copied without delimiters – The NULLS parameter can only have the value *YES if DTAFMT(*FIXED) is used. This places either a “Y” or “N” after field data indicating if the field is null or not null. – The NULLS parameter can also have the default value *NO. This does not place a “Y” or “N” after field data. – A field definition file is not needed. Additional function has been added to the following parameters of the CPYTOIMPF command:  Stream file code page (STMCODPAG) Allows you to specify the code page of the target stream file. In the past, you would use another tool or command to first create the stream file with the desired code page to override the default behavior of the command. 140 Advanced Functions and Administration on DB2 Universal Database for iSeries  Replace or add records (MBROPT) If the CPYTOIMPF command is given an empty database table to copy, then DB2 UDB for iSeries now clears the target stream file when MBROPT(*REPLACE) is specified. 6.3.2 Creating the import file (TOFILE) This section shows the creation of the TOFILE using the *FIXED and *DLM data formats. The DB2 UDB for iSeries STAFF table in the library VIJAY is exported to another database server along with the DDL source for the file from the member STAFF in the SQLSRC source physical file in the library VIJAY: 1. Use the CRTPF command to create a single field physical file with a record length of 72 bytes: CRTPF FILE(VIJAY/PF72) RCDLEN(72) MAXMBRS(*NOMAX) 2. Use the RMVM command to remove the PF72 member from the PF72 file: RMVM FILE(VIJAY/PF72) MBR(PF72) More members are added to the file PF72 as we use the Export utility to create the import file (TOFILE). 3. Use the CPYTOIMPF command to copy the STAFF file and add the STAFFNLN member to the file PF72; the command specifies the DTAFMT(*FIXED) and NULLS(*NO) parameters as shown here: CPYTOIMPF FROMFILE(VIJAY/STAFF) TOFILE(VIJAY/PF72 STAFFNLN) MBROPT(*REPLACE) DTAFMT(*FIXED) NULLIND(*NO) Figure 6-12 shows a partial list of the resulting member STAFFNLN. Figure 6-12 TOFILE with DTAFMT(*FIXED) NULLS(*NO) Display Physical File Member File . . . . . . : PF72 Library . . . . : VIJAY Member . . . . . : STAFFNLN Record . . . . . : 1 Control . . . . . Column . . . . . : 1 Find . . . . . . . *...+....1....+....2....+....3....+....4....+....5....+....6....+....7.. 
10 Sanders 20 Mgr 7 18357.50 300.00 20 Pernal 20 Sales8 18171.25 1412.45 30 Marenghi 38 Mgr 5 17506.75 500.00 40 O'Brien 38 Sales6 18006.00 846.55 50 Hanes 15 Mgr 10 20659.80 0.0 60 Quigley 38 Sales0 16808.30 650.25 70 Rothman 15 Sales7 16502.83 1152.00 80 James 20 Clerk0 13504.60 128.20 90 Koonitz 42 Sales6 18001.75 1386.70 100 Plotz 42 Mgr 7 18352.80 0.0 110 Ngan 15 Clerk5 12508.20 206.60 120 Naughton 38 Clerk0 12954.75 180.00 130 Yamaguchi42 Clerk6 10505.90 75.60 140 Fraye 51 Mgr 6 21150.00 0.0 150 Williams 51 Sales6 19456.50 637.65 More... F3=Exit F12=Cancel F19=Left F20=Right F24=More keys Chapter 6. DB2 Import and Export utilities 141 4. Use the CPYTOIMPF command to copy the STAFF file and add the STAFFNLY member to the file PF72; the command specifies the DTAFMT(*FIXED) and NULLS(*YES) parameters as shown here: CPYTOIMPF FROMFILE(VIJAY/STAFF) TOFILE(VIJAY/PF72 STAFFNLY) MBROPT(*REPLACE) DTAFMT(*FIXED) NULLIND(*YES) Figure 6-13 shows a partial list of the resulting member STAFFNLY; notice the “Y” after each field that has a null value and “N” after each field that does not have a null value. Figure 6-13 TOFILE with DTAFMT(*FIXED) NULLS(*YES) 5. Use the CPYTOIMPF command to copy the STAFF file and add the STAFFDLM member to the file PF72; the command specifies the DTAFMT(*DLM) parameter and uses the default delimiter values as follows: CPYTOIMPF FROMFILE(VIJAY/STAFF) TOFILE(VIJAY/PF72 STAFFDLM) MBROPT(*REPLA CE) Figure 6-14 shows a partial list of the resulting STAFFDLM member; this member shows the data ready for export in the most common CSV format that is used to port data between heterogenous databases. Display Physical File Member File . . . . . . : PF72 Library . . . . : VIJAY Member . . . . . : STAFFNLY Record . . . . . : 1 Control . . . . . Column . . . . . : 1 Find . . . . . . . *...+....1....+....2....+....3....+....4....+....5....+....6....+....7.. 10 NSanders N20 NMgr N7 N18357.50 N300.00 N 20 NPernal N20 NSalesN8 N18171.25 N1412.45 N 30 NMarenghi N38 NMgr N5 N17506.75 N500.00 N 40 NO'Brien N38 NSalesN6 N18006.00 N846.55 N 50 NHanes N15 NMgr N10 N20659.80 N0.0 Y 60 NQuigley N38 NSalesN0 Y16808.30 N650.25 N 70 NRothman N15 NSalesN7 N16502.83 N1152.00 N 80 NJames N20 NClerkN0 Y13504.60 N128.20 N 90 NKoonitz N42 NSalesN6 N18001.75 N1386.70 N 100 NPlotz N42 NMgr N7 N18352.80 N0.0 Y 110 NNgan N15 NClerkN5 N12508.20 N206.60 N 120 NNaughton N38 NClerkN0 Y12954.75 N180.00 N 130 NYamaguchiN42 NClerkN6 N10505.90 N75.60 N 140 NFraye N51 NMgr N6 N21150.00 N0.0 Y 150 NWilliams N51 NSalesN6 N19456.50 N637.65 N More... F3=Exit F12=Cancel F19=Left F20=Right F24=More keys 142 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 6-14 TOFILE with DTAFMT(*DLM) 6. Use the CPYTOIMPF command to copy the DDL source from the STAFF member in the SQLSRC source physical file in the VIJAY library and add the STAFFDDL member to the file PF72; the command specifies the DTAFMT(*FIXED) parameter as shown here: CPYTOIMPF FROMFILE(VIJAY/SQLSRC STAFF) TOFILE(VIJAY/PF72 STAFFDDL) MBROPT(*REPLACE) DTAFMT(*FIXED) Figure 6-15 shows a partial list of the resulting member STAFFDLM. Display Physical File Member File . . . . . . : PF72 Library . . . . : VIJAY Member . . . . . : STAFFDLM Record . . . . . : 1 Control . . . . . Column . . . . . : 1 Find . . . . . . . *...+....1....+....2....+....3....+....4....+....5....+....6....+....7.. 
10 ,"Sanders ",20 ,"Mgr ",7 ,18357.50 ,300.00 20 ,"Pernal ",20 ,"Sales",8 ,18171.25 ,1412.45 30 ,"Marenghi ",38 ,"Mgr ",5 ,17506.75 ,500.00 40 ,"O'Brien ",38 ,"Sales",6 ,18006.00 ,846.55 50 ,"Hanes ",15 ,"Mgr ",10 ,20659.80 ,, 60 ,"Quigley ",38 ,"Sales",,16808.30 ,650.25 70 ,"Rothman ",15 ,"Sales",7 ,16502.83 ,1152.00 80 ,"James ",20 ,"Clerk",,13504.60 ,128.20 90 ,"Koonitz ",42 ,"Sales",6 ,18001.75 ,1386.70 100 ,"Plotz ",42 ,"Mgr ",7 ,18352.80 ,, 110 ,"Ngan ",15 ,"Clerk",5 ,12508.20 ,206.60 120 ,"Naughton ",38 ,"Clerk",,12954.75 ,180.00 130 ,"Yamaguchi",42 ,"Clerk",6 ,10505.90 ,75.60 140 ,"Fraye ",51 ,"Mgr ",6 ,21150.00 ,, 150 ,"Williams ",51 ,"Sales",6 ,19456.50 ,637.65 More... F3=Exit F12=Cancel F19=Left F20=Right F24=More keys Chapter 6. DB2 Import and Export utilities 143 Figure 6-15 TOFILE for DDL source 6.3.3 Exporting the TOFILE The members STAFFDDL and STAFFDLM in the import file PF72 in the VIJAY library are exported to the external system. A TCP/IP connection exists between the iSeries server and the external database server. FTP is used for the data transfer between the two database servers. The FTP dialogue from the external system is shown here: C:\>ftp as23 1 Connected to AS23. 220-QTCP at rchasm23.rchland.ibm.com. 220 Connection will close if idle more than 5 minutes. User (AS23:(none)): vijay 2 331 Enter password. Password: 3 230 VIJAY logged on. ftp> get vijay/pf72.staffddl c:\vijay\staff.ddl 4 200 PORT subcommand request successful. 150 Retrieving member STAFFDDL in file PF72 in library VIJAY. 250 File transfer completed successfully. ftp: 305 bytes received in 0.00Seconds 305000.00Kbytes/sec. ftp> get vijay/pf72.staffdlm c:\vijay\staff.csv 5 200 PORT subcommand request successful. 150 Retrieving member STAFFDLM in file PF72 in library VIJAY. 250 File transfer completed successfully. ftp: 2136 bytes received in 0.00Seconds 2136000.00Kbytes/sec. ftp> quit 6 Display Physical File Member File . . . . . . : PF72 Library . . . . : VIJAY Member . . . . . : STAFFDDL Record . . . . . : 1 Control . . . . . Column . . . . . : 1 Find . . . . . . . *...+....1....+....2....+....3....+....4....+....5....+....6....+....7.. CREATE TABLE VIJAY.STAFFI ( ID SMALLINT NOT NULL , NAME VARCHAR(9) CCSID 37 DEFAULT NULL , DEPT SMALLINT DEFAULT NULL , JOB CHAR(5) CCSID 37 DEFAULT NULL , "YEARS" SMALLINT DEFAULT NULL , SALARY DECIMAL(7, 2) DEFAULT NULL , COMM DECIMAL(7, 2) DEFAULT NULL ); ****** END OF DATA ****** Bottom F3=Exit F12=Cancel F19=Left F20=Right F24=More keys 144 Advanced Functions and Administration on DB2 Universal Database for iSeries 6.3.4 Creating the import file (STMF) This section shows the creation of the STMF import file in the integrated file system (IFS) on the iSeries server using the *FIXED and *DLM data formats. The DB2 UDB for iSeries file STAFF in library VIJAY are exported to another database server along with the DDL source for the file from the member STAFF in source physical file SQLSRC in library VIJAY. 1. Use the make directory command to create the vijay directory in the integrated file system: md vijay 2. Use the CPYTOIMPF command to copy the STAFF file and add the staffnln.txt stream file to the vijay directory; the command specifies the DTAFMT(*FIXED) and NULLS(*NO) parameters as shown here: CPYTOIMPF FROMFILE(VIJAY/STAFF) TOSTMF('/vijay/staffnln.txt') MBROPT(*REPLACE) RCDDLM(*LF) DTAFMT(*FIXED) Figure 6-16 shows a partial list of the resulting staffnln.txt stream file. Notes: 1 From a command line, type FTP to the iSeries server AS23. 
2 Enter your user ID and press Enter. 3 Type your password and press Enter. 4 Type the GET sub-command to copy the STAFFDDL member in the file PF72 in the VIJAY library to the staff.ddl file in the vijay directory. Note the use of the forward slash (/) and the period (.) in the target file name (library/file.member) format. 5 Type the GET sub-command to copy the STAFFDLM member in the PF72 file in the VIJAY library to the staff.csv file in the vijay directory. Note the use of the forward slash (/) and the period (.) in the target file name (library/file.member) format. 6 Type QUIT and press Enter to exit the FTP session to iSeries server AS23. Chapter 6. DB2 Import and Export utilities 145 Figure 6-16 STMF DTAFMT(*FIXED) NULLS(*NO) 3. Use the CPYTOIMPF command to copy the STAFF file and add the staffnly.txt stream file to the vijay directory; the command specifies the DTAFMT(*FIXED) and NULLS(*YES) parameters as shown here: CPYTOIMPF FROMFILE(VIJAY/STAFF) TOSTMF('/vijay/staffnly.txt') MBROPT(*REPLACE) RCDDLM(*LF) DTAFMT(*FIXED) NULLS(*YES) Figure 6-17 shows a partial list of the resulting staffnly.txt stream file. Browse : /vijay/staffnln.txt Record : 1 of 36 by 14 Column : 1 63 by 79 Control : ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.... ************Beginning of data************** 10 Sanders 20 Mgr 7 18357.50 300.00 20 Pernal 20 Sales8 18171.25 1412.45 30 Marenghi 38 Mgr 5 17506.75 500.00 40 O'Brien 38 Sales6 18006.00 846.55 50 Hanes 15 Mgr 10 20659.80 0.0 60 Quigley 38 Sales0 16808.30 650.25 70 Rothman 15 Sales7 16502.83 1152.00 80 James 20 Clerk0 13504.60 128.20 90 Koonitz 42 Sales6 18001.75 1386.70 100 Plotz 42 Mgr 7 18352.80 0.0 110 Ngan 15 Clerk5 12508.20 206.60 120 Naughton 38 Clerk0 12954.75 180.00 130 Yamaguchi42 Clerk6 10505.90 75.60 140 Fraye 51 Mgr 6 21150.00 0.0 F3=Exit F10=Display Hex F12=Exit F15=Services F16=Repeat find F19=Left F20=Right (C) COPYRIGHT IBM CORP. 1980, 2000. 146 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 6-17 STMF DTAFMT(*FIXED) NULLS(*YES) 4. Use the CPYTOIMPF command to copy the STAFF file and add the staffdlm.csv stream file to the vijay directory; the command specifies the DTAFMT(*DLM) parameter and uses the default delimiters as shown here: CPYTOIMPF FROMFILE(VIJAY/STAFF) TOSTMF('/vijay/staffdlm.csv') MBROPT(*REPLACE) RCDDLM(*LF) Figure 6-18 shows a partial list of the resulting staffdlm.csv stream file. This stream file shows the data ready for export in the most common CSV format that is used to port data between heterogenous databases. Browse : /vijay/staffnly.txt Record : 1 of 36 by 14 Column : 1 72 by 79 Control : ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.... ************Beginning of data************** 10 NSanders N20 NMgr N7 N18357.50 N300.00 N 20 NPernal N20 NSalesN8 N18171.25 N1412.45 N 30 NMarenghi N38 NMgr N5 N17506.75 N500.00 N 40 NO'Brien N38 NSalesN6 N18006.00 N846.55 N 50 NHanes N15 NMgr N10 N20659.80 N0.0 Y 60 NQuigley N38 NSalesN0 Y16808.30 N650.25 N 70 NRothman N15 NSalesN7 N16502.83 N1152.00 N 80 NJames N20 NClerkN0 Y13504.60 N128.20 N 90 NKoonitz N42 NSalesN6 N18001.75 N1386.70 N 100 NPlotz N42 NMgr N7 N18352.80 N0.0 Y 110 NNgan N15 NClerkN5 N12508.20 N206.60 N 120 NNaughton N38 NClerkN0 Y12954.75 N180.00 N 130 NYamaguchiN42 NClerkN6 N10505.90 N75.60 N 140 NFraye N51 NMgr N6 N21150.00 N0.0 Y F3=Exit F10=Display Hex F12=Exit F15=Services F16=Repeat find F19=Left F20=Right (C) COPYRIGHT IBM CORP. 1980, 2000. Chapter 6. 
DB2 Import and Export utilities 147 Figure 6-18 STMF DTAFMT(*DLM) 5. Use the CPYTOIMPF command to copy the DDL source from the STAFF member in the SQLSRC source physical file in the VIJAY library and add the staff.ddl stream file to the vijay directory; the command specifies the DTAFMT(*FIXED) parameter as shown here: CPYTOIMPF FROMFILE(VIJAY/SQLSRC STAFF) TOSTMF('/vijay/staff.ddl') MBROPT(*REPLACE) RCDDLM(*LF) DTAFMT(*FIXED) Figure 6-19 shows a list of the resulting staff.ddl stream file. Browse : /vijay/staffdlm.csv Record : 1 of 36 by 14 Column : 1 59 by 79 Control : ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.... ************Beginning of data************** 10 ,"Sanders ",20 ,"Mgr ",7 ,18357.50 ,300.00 20 ,"Pernal ",20 ,"Sales",8 ,18171.25 ,1412.45 30 ,"Marenghi ",38 ,"Mgr ",5 ,17506.75 ,500.00 40 ,"O'Brien ",38 ,"Sales",6 ,18006.00 ,846.55 50 ,"Hanes ",15 ,"Mgr ",10 ,20659.80 ,, 60 ,"Quigley ",38 ,"Sales",,16808.30 ,650.25 70 ,"Rothman ",15 ,"Sales",7 ,16502.83 ,1152.00 80 ,"James ",20 ,"Clerk",,13504.60 ,128.20 90 ,"Koonitz ",42 ,"Sales",6 ,18001.75 ,1386.70 100 ,"Plotz ",42 ,"Mgr ",7 ,18352.80 ,, 110 ,"Ngan ",15 ,"Clerk",5 ,12508.20 ,206.60 120 ,"Naughton ",38 ,"Clerk",,12954.75 ,180.00 130 ,"Yamaguchi",42 ,"Clerk",6 ,10505.90 ,75.60 140 ,"Fraye ",51 ,"Mgr ",6 ,21150.00 ,, F3=Exit F10=Display Hex F12=Exit F15=Services F16=Repeat find F19=Left F20=Right (C) COPYRIGHT IBM CORP. 1980, 2000. 148 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 6-19 STMF for DDL source 6.3.5 Exporting the STMF The stream files staff.ddl and staffdlm.csv in the vijay directory in the iSeries server Integrated File System are exported to the external system. A TCP/IP connection exists between the iSeries server and the external database server. FTP is used for the data transfer between the two database servers. The FTP dialogue from the external system is shown here: C:\>ftp as23 1 Connected to AS23. 220-QTCP at rchasm23.rchland.ibm.com. 220 Connection will close if idle more than 5 minutes. User (AS23:(none)): vijay 2 331 Enter password. Password: 3 230 VIJAY logged on. ftp> get /vijay/staff.ddl c:\vijay\staff.ddl 4 200 PORT subcommand request successful. 150-NAMEFMT set to 1. 150 Retrieving file /vijay/staff.ddl 250 File transfer completed successfully. ftp: 294 bytes received in 0.00Seconds 294000.00Kbytes/sec. ftp> get /vijay/staffdlm.csv c:\vijay\staff.csv 5 200 PORT subcommand request successful. 150 Retrieving file /vijay/staffdlm.csv 250 File transfer completed successfully. ftp: 1987 bytes received in 0.03Seconds 66.23Kbytes/sec. ftp> quit 6 Browse : /vijay/staff.ddl Record : 1 of 11 by 14 Column : 1 59 by 79 Control : ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.... ************Beginning of data************** CREATE TABLE VIJAY.STAFFI ( ID SMALLINT NOT NULL , NAME VARCHAR(9) CCSID 37 DEFAULT NULL , DEPT SMALLINT DEFAULT NULL , JOB CHAR(5) CCSID 37 DEFAULT NULL , "YEARS" SMALLINT DEFAULT NULL , SALARY DECIMAL(7, 2) DEFAULT NULL , COMM DECIMAL(7, 2) DEFAULT NULL ); ************End of Data******************** F3=Exit F10=Display Hex F12=Exit F15=Services F16=Repeat find F19=Left F20=Right (C) COPYRIGHT IBM CORP. 1980, 2000. Chapter 6. DB2 Import and Export utilities 149 6.4 Moving data from DB2 UDB 7.2 to DB2 UDB for iSeries This section illustrates two approaches for moving data, among many other valid options. 
The first approach is based exclusively on the Import and Export utilities in a very direct way. The second approach combines the Export utility with the CPYFRMIMPF CL command for better performance.

6.4.1 First approach: Using the Export and Import utilities
The first approach is a very direct way to move data and is also familiar to users who already work with DB2 UDB V7.2:
1. Before starting, define the target iSeries database in DB2 UDB V7.2.
2. Use the DB2 UDB Export utility to export data from DB2 UDB V7.2 to an integrated exchange file (IXF).
3. Use the DB2 UDB V7.2 Command Center or another DB2 UDB V7.2 SQL interface to connect to DB2 Universal Database for iSeries.
4. Use the Import utility to import the IXF file into a predefined target table.
The Export utility can be used to export DB2 UDB V7.2 information into an operating system file in one of the following formats:
Integrated exchange file (IXF): A data format explicitly designed for exchanging relational data. IXF files are the preferred file format for exchanging information between DB2 UDB V7.2 databases.
Delimited ASCII file (DEL): A very popular family of flat files for exchanging information, in which text fields are enclosed in quotation marks, fields are separated by commas, and the decimal separator is a period. These delimiter characters can be changed if needed. Delimited ASCII files are also very popular for exchanging information among personal productivity tools, such as spreadsheets, where they are known as CSV files.
Worksheet format file (WSF): Widely used for exchanging information between spreadsheets and other tools, including relational databases.
To use the Export utility, you can use the interactive Control Center interface or a DB2 UDB V7.2 SQL interface such as Command Center or DB2CMD.
Exporting relational data using the Control Center
From the Control Center, expand the object tree until you find the Tables or Views folder. Then right-click the table or view you want in the contents pane. Select Export from the pop-up menu, as shown in Figure 6-20.
Notes:
1 From a command line, type FTP to the iSeries server AS23.
2 Enter your user ID and press Enter.
3 Type your password and press Enter.
4 Type the GET sub-command to copy the staff.ddl stream file in the vijay directory in the integrated file system to the staff.ddl file in the vijay directory.
5 Type the GET sub-command to copy the staffdlm.csv stream file in the vijay directory in the integrated file system to the staff.csv file in the vijay directory.
6 Type QUIT and press Enter to exit the FTP session to the iSeries server AS23.
Figure 6-20 Starting Control Center's Export notebook
This takes you to the Export notebook. Follow the instructions for providing the necessary information, such as the output file, output file format, message file, and SELECT statement, as shown in Figure 6-21.
Figure 6-21 Export notebook
You can choose between running the Export command immediately by clicking the OK button or reviewing the export command by clicking the Show Command button, as shown in Figure 6-22.
Figure 6-22 Export command as generated by the Control Center
For general information about the Control Center, see the online help facility within the Control Center.
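Because Figure 6-22 is a screen capture, the generated command itself is not reproduced in this text. As an illustration only, a command built by the Export notebook for the STAFF table might look similar to the following sketch; the file names, message file, and schema qualifier are assumptions rather than values taken from the figure:

EXPORT TO C:\TEMP\STAFF.IXF OF IXF MESSAGES C:\TEMP\STAFF.MSG SELECT * FROM USERID.STAFF

The optional MESSAGES clause names a file where the Export utility writes any warning and error messages produced during the export.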
Exporting relational information using the Export command You can also use the Export command from an SQL interface, such as DB2CMD, which is a very practical approach when you need to export data periodically, because it lets you create a batch file. An example of the Export command issued through the CLP is shown here: db2 export to staff.ixf of ixf select * from userid.staff Exporting relational information using Export API You can also use the provided export application programming interface (API), sqluexpr. For general information about creating applications containing DB2 administrative APIs, see DB2 UDB Application Development Guide V6, SC09-2845. Importing an IXF file into DB2 UDB for iSeries From a DB2 UDB V7.2 SQL interface, you connect to the target DB2 UDB for iSeries database using the following command: CONNECT TO RCHASM23 USER DLEMA USING MYPASS Then you use the following Import command to import data into the target table: IMPORT FROM SOURCE_IXF_FILE.IXF OF IXF INSERT INTO DLEMA.DESTTABLE Here SOURCE_IXF_FILE.IXF is the IXF file located in the workstation where the SQL interface is being used. The Import utility in DB2 UDB for iSeries V5R1 is limited to importing IXF files into existing tables. All in a batch As a convenient way to move multiple tables in a periodic way, you can create a batch file as shown in the following example: -- CONNECTION TO DB2 UDB 7.2 SOURCE DATABASE CONNECT TO NWS_DWH USER MYUSER USING MYPASWRD; -- EXPORTING DATA INTO IXF FILES Important: A destination table with compatible columns must exist previously, and it must be journaled. If it is not, you will receive an SQL error -7008. 152 Advanced Functions and Administration on DB2 Universal Database for iSeries EXPORT TO C:\TEMP\ASS.IXF OF IXF SELECT * FROM DB2ADMIN.ASS WHERE END_DT IS NULL; EXPORT TO C:\TEMP\ACT.IXF OF IXF SELECT * FROM DB2ADMIN.ACT WHERE ACT_TS > ‘2001-01-01 00:00:00.000000’; EXPORT TO C:\TEMP\USR.IXF OF IXF SELECT USR_IP_ID, NM, DEPT_OU_ID, CC_OU_ID, BLD_ID FROM DB2ADMIN.USR; -- AND MANY MORE... -- -- CONNECTION TO DB2 UDB FOR ISERIES CONNECT TO RCHASM23 USER AS400USR USING AS400PWD; -- IMPORTING IXF DATA INTO ISERIES SERVER IMPORT FROM C:\TEMP\ASS.IXF OF IXF INSERT INTO DLEMA.ASS; IMPORT FROM C:\TEMP\ACT.IXF OF IXF INSERT INTO DLEMA.ACT; IMPORT FROM C:\TEMP\USR.IXF OF IXF INSERT INTO DLEMA.USR; You run this batch file with the following DB2 UDB V7.2 command: db2cmd -w -c db2 -f c:\temp\export_import.sql -z c:\temp\export_import.log -t Here c:\temp\export_import.sql is the batch file and c:\temp\export_import.log is a text file, where the informational, warning, and error conditions will be stored for review in case of failure. 6.4.2 Second approach: Using Export and CPYFRMIMPF The second approach is more appropriate for moving large data sets because it performs better: 1. Use the DB2 UDB Export utility for exporting data from DB2 UDB V7.2 to a DEL file. 2. Move the DEL file to the iSeries server. 3. Use the CPYFRMIMPF CL command to load the DEL file into the target table. Using Export utility for exporting data to a DEL file You can use the Export utility from the Control Center as shown in “Exporting relational data using the Control Center” on page 149, but export to a delimited ASCII file. 
You can also use an interactive or batch command interface for executing a sentence as shown in the following example: EXPORT TO C:\TEMP\FILE.DEL OF DEL SELECT * FROM SAMPLEDB02.STAFF The resulting delimited ASCII file can now be loaded into the iSeries server using FTP and then loaded into the target table using the CPYFRMIMPF command, as explained in 6.2, “DB2 UDB for iSeries Import utility” on page 126. 6.5 Moving data from DB2 UDB for iSeries into DB2 UDB 7.2 You can move data from DB2 UDB for iSeries into DB2 UDB V7.2 using the same two approaches shown in 6.4, “Moving data from DB2 UDB 7.2 to DB2 UDB for iSeries” on page 149. 6.5.1 Using the Import and Export utilities Using any SQL interface to DB2 UDB V7.2 such as Command Center or DB2CMD, you can connect to the source iSeries server database and use the Export command to export any table to an IXF file, as shown here: CONNECT TO RCHASM23 USER AS400USR USING AS400PWD; EXPORT TO C:\TEMP\IXF_FILE.IXF OF IXF SELECT * FROM SCHEMA.TABLE Chapter 6. DB2 Import and Export utilities 153 Here, c:\temp\ixf_file.ixf is the target file on the workstation in which you execute the command. After you export the iSeries server data into an IXF file, you can import into DB2 UDB V7.2 using the Import utility. In this case, you have some extra functionality, like importing into a new table as shown in the following example: IMPORT FROM C:\TEMP\IXF_FILE OF IXF CREATE INTO DB2ADMIN.TARGETTABLE Here, c:\temp\ixf_file is the source IXF file, and DB2ADMIN.TARGETTABLE is the destination table. You can find valuable options that enable you to import appending into an existing table, import replacing an existing table, or import updating an existing table. For a detailed discussion on the Import and Export utilities in DB2 UDB V7.2, refer to Data Movement Utilities Guide and Reference, which you can find on the Web at: http://www-4.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/document.d2w/ report?fn=db2v7dmdb2dm07.htm#HDREXPOVW 6.5.2 Using the CPYTOIMPF command and the Import utility Similarly, you can use the CPYTOIMPF command to create a delimited file (also known as a CSV file) and FTP it into a DB2 UDB V7.2 workstation as described in 6.3, “DB2 UDB for iSeries Export utility” on page 138. Then you can use the Import utility on the DB2 UDB V7.2 workstation as shown in the following example: IMPORT FROM C:\TEMP\DEF_FILE OF DEF INSERT INTO DB2ADMIN.TARGETTABLE IMPORT FROM C:\TEMP\DEF_FILE OF DEF CREATE INTO DB2ADMIN.TARGETTABLE2 IMPORT FROM C:\TEMP\DEF_FILE OF DEF INSERT_UPDATE INTO DB2ADMIN.TARGETTABLE2 154 Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 155 Part 3 Database administration Operations Navigator offers a Windows-like graphical interface to configure, monitor, and manage the OS/400 environment. This part gives you insight into the wide range of DB2 Universal Database for iSeries database administration functions available through the Operations Navigator graphical interface, which comes packaged with Client Access Express for Windows V5R1. This part of the book covers the following topics:  Database functions using Operations Navigator  The use of Database Navigator  Reverse engineering and Generate SQL  Visual Explain Part 3 156 Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 157 Chapter 7. 
Database administration This chapter discusses using the Database component of Operations Navigator to administer, view, and manipulate your databases. This chapter discusses:  Basic database operations  SQL scripts  Query attributes  SQL Performance Monitors 7 158 Advanced Functions and Administration on DB2 Universal Database for iSeries 7.1 Database overview The Database component of Operations Navigator provides a graphical interface for many DB2 Universal Database for iSeries database operations, including:  Creating and managing tables, views, indexes, SQL, and stored procedures  Creating and managing OS/400 journals (record changes to the database and other functions supporting journals)  Entering new, or modifying already created, SQL statements  Running and debugging previously created SQL statements (referred to as scripts)  Saving SQL statements for later use  Doing performance analysis of SQL statements  Capturing current SQL statements for any job running on the system The Database component of AS/400 Operations Navigator is not installed by default when choosing a Typical installation option of IBM AS/400 Client Access Express. If the Database component is not currently installed, you can run Selective Setup to install it as discussed in Managing OS/400 with Operations Navigator V5R1 Volume 1: Basic Functions, SG24-6226. With proper authorization to the database objects, the user of the database graphical interface has easy access to OS/400 server administration tools, has a clear overview of the entire database system, can perform remote database management, and receives assistance for complex tasks. For OS/400 V4R4, key enhancements to DB2 Universal Database for iSeries included an interface to the SQL-specific performance monitor and new Universal Database Object Relational Support functions, such as various types of binary large objects (LOBs), user defined data types (UDTs), user defined functions (UDFs), and DataLinks. OS/400 V4R5 delivered a mix of enhancements across a wide variety of DB2 UDB functions and interfaces including:  Distributed Relational Database Architecture (DRDA) password encryption to improve the security of Internet and intranet solutions.  The iSeries Data Loader utilities (Copy From Import File (CPYFRMIMPF) and Copy To Import File (CPYTOIMPF) CL commands) were enhanced and made easier to use in V4R5.  iSeries external stored procedure support was upgraded in V4R5 with the addition of Java as a supported language  Further Java database improvements were made to the iSeries SQLJ support. For example, the implementation was re-engineered to deliver performance similar to static, embedded SQL and extended dynamic SQL.  The iSeries SQL CLI was significantly enhanced to make it more compatible with the ODBC standard.  On the performance front, the database engine was enhanced to reduce the number of cases where SQL open data paths (ODPs) are non-reusable.  Operations Navigator was enhanced to improve the manageability of DB2 Universal Database for iSeries. The major additions include a Display Physical File Description (DSPFD) type of output for tables, views, and indexes and Visual Explain for graphical analysis of query implementations. Chapter 7. Database administration 159  Porting the database components of an application to the iSeries server was much improved in V4R5. Improvements to the SQL Stored Procedure language and the SQL Call Level Interface (CLI) top the list of portability enhancements. 
The SQL Stored Procedure language has been available since V4R2 and has been widely used to successfully port procedures written in proprietary languages such as Transact SQL and PL/SQL to DB2 Universal Database for iSeries.  Full autocommit support that improves the integrity transactions performed by ODBC- and JDBC-based client applications and lifts the restrictions that prevented stored procedures and triggers from making committable database changes. 7.1.1 New in V5R1 The new features in V5R1 include:  Database Navigator: A new component that gives a pictorial representation of a schema and DB objects and can generate SQL for those objects  Support for SQL Triggers creation from table properties  Generate SQL from existing DB objects (including DDS created)  RUN SQL script enhancements  The ability to print Visual Explain graphs  Database (SQL) V5R1 functions – Database Text Extenders (including Text Search Engine and XML Extenders) – DRDA 2 Phase Commit Over TCP/IP – DRDA Result Sets – RIGHT OUTER Join – Expressions in INSERT – SQL Triggers – Up to 300 triggers per table – Column triggers – Read only triggers – Longer than 80 character columns in C and C++ pre-compilers – RUNSQLSTM support in OS/400 V5R1 – Support for user-defined functions implemented in Java – Increased large object (LOB) size – now 2 GB up from 15 MB – Maximum total size for all large objects in a table row is now 3.5 GB, up from 1.5 MB  The maximum number of rows allowed in a table increased from 2.1 billion to 4.2 billion in V4R5. The maximum table size remains half of a terabyte (TB). In addition, you can reference more tables – up to 256 – on a single SQL statement. A journal receiver maximum size increased from 2 GB to 1 TB, which reduces the frequency of changing journal receivers. Similarly, the maximum number of journal sequence numbers increased from 2 billion to 10 billion to reduce the frequency of sequence-number resets. These remain unchanged at V5R1. Although OS/400 integrated DB2 Universal Database for iSeries support is one of the major strengths of iSeries servers, a complete description of this support is beyond the goal of this redbook. Good sources for details of DB2 Universal Database for iSeries capabilities are:  iSeries Information Center: http://www.iseries.ibm.com/infocenter Here you can select Database and File Systems->Database management. Under Database management, by selecting DB2 Universal Database for iSeries books online, you can find a list of publications that contain even more information. Most of these publications are listed here. 160 Advanced Functions and Administration on DB2 Universal Database for iSeries  Database Programming, SC41-5701 This book describes database capabilities that are primarily outside of SQL terminology. This includes physical files (correspond to SQL tables), logical files (correspond to SQL views), fields (correspond to SQL columns), records (correspond to SQL rows), file management, and file security.  
SQL Programming Guide, SC41-5611  SQL Reference, SC41-5612  DB2 UDB for AS/400 Database Performance and Query Optimization: http://submit.boulder.ibm.com/pubs/html/as400/bld/v5r1/ic2924/index.htm  Distributed Data Management, SC41-5307  Cross-Platform DB2 Stored Procedures: Building and Debugging, SG24-5485  DB2/400: Mastering Data Warehousing Functions, SG24-5184  DB2 UDB for AS/400 Object Relational Support, SG24-5409  DB2 Universal Database for iSeries home page: http://www.iseries.ibm.com/db2/db2main.htm Use this link to learn about the iSeries database, recent announcements, support information, and related products. This page features many useful links to database related issues and products (like Business Intelligence) and gives you access to a wealth of articles, white papers, coding examples, tips, and techniques.  Self-study lab exercise with sample OS/400 database, installation instructions, and lab instructions that can be downloaded from PartnerWorld for Developers iSeries Web site at: http://www.iseries.ibm.com/developer Select Education->Internet Based Offerings->DB2 UDB->Piloting DB2 UDB for iSeries with Operations Navigator in V5R1. Under OS/400, you can use SQL interfaces to access a database file or an SQL table since these terms refer to the same object, classified within OS/400 as a *FILE object type. You can use SQL interfaces to access the file regardless of whether the object was created with the OS/400 Create Physical File (CRTPF) command or the CREATE TABLE SQL statement. OS/400 also supports access to the physical file or table through a logical file (Create Logical File (CRTLF) command) or an SQL view (SQL CREATE VIEW). Table 7-1 shows the corresponding OS/400 term and SQL term for physical files or tables, records or rows, fields or columns, logical files or views, aliases, and indexes. Table 7-1 OS/400 term and SQL term cross reference OS/400 Create statement or term SQL Create statement OS/400 object type OS/400 object attribute SQL term CRTPF CREATE TABLE *FILE Physical File (PF) Table CRTLF CREATE VIEW *FILE Logical File (LF) View CRTDDMF CREATE ALIAS *FILE DDM File (DDMF) Alias CRTLF CREATE INDEX *FILE Logical File (LF) Index Field Column Record Row Chapter 7. Database administration 161 Note: A DDM File represents a Distributed Data Management File. This is the original OS/400 object on the local iSeries server used to provide a link to a file on a remote system. In the context of Table 7-1, an alias created by SQL has no remote system specification. To determine if the DDMF/alias has any remote system specification, you can use the Work with DDM File (WRKDDMF) command. Throughout the remainder of this chapter, the SQL terms table, row, and column are used more frequently than their corresponding OS/400 terms file, record, and field. In some cases, both corresponding terms, such as field or column, are used. 7.2 DB2 Universal Database for iSeries through Operations Navigator overview In the Operations Navigator window, click the + (plus) sign next to the Database function for the system to which you are attached to see the three major function areas as shown in the left pane and right pane in Figure 7-1. Figure 7-1 Database function list view There are several other ways to make the same three database function areas also appear in the right pane as shown. This chapter discusses some of these ways. 
However, Operations Navigator database capabilities are actually grouped under four functional branches:  Database  Libraries  Database Navigator  SQL Performance Monitors Note: OS/400 supports an object type of table (*TBL). This object type is for data translation. 162 Advanced Functions and Administration on DB2 Universal Database for iSeries The following sections summarize the capabilities under each of these four major database function groupings. Examples and tips on usage are given for selected sub-functions under each major function group to highlight Operations Navigator interfaces into the wide range of DB2 Universal Database for iSeries capabilities. These sections do not explain every action on every pull-down menu, but instead emphasize the actions that are most significant. Such actions as Explore, Open, Shortcuts, and Print options are very similar to these same actions described for Operations Navigator interface in Managing OS/400 with Operations Navigator V5R1 Volume 1: Basic Functions, SG24-6226. For some other database-specific actions or options, you must refer to Operations Navigator online help information. For the database functions described in the following sections, you need the appropriate authority to perform the functions. You can use the SQL GRANT and REVOKE statements to define authority to a table, view, procedure, user-defined functions, and user-defined types. For tables and views, these statements may also specify processing authority, such as SELECT (read), INSERT (write), DELETE, and UPDATE. SQL GRANT and REVOKE can also specify column-level authorities. The Operations Navigator Database interface supports table, view, index, procedure, column, etc. database-related object levels of authority through the Permissions action by right-clicking the database object name within a library. You can specify permissions for all OS/400 objects, including database-related objects through Operations Navigator File Systems interface. An alternative to column-level authority is to use an SQL CREATE VIEW to a table or a Create Logical File (CRTLF) command based on a file and specify only certain columns or fields. Then you specify authorities or permissions to the logical file or view. SQL CREATE VIEW or CRTLF can also specify compare values for columns or fields that limit the rows or records that can be seen by those authorized to the view or logical file. For additional details on the Object Relational Support items (functions and types), refer to DB2 UDB for AS/400 Object Relational Support, SG24-5409. For authority implications of using *SYS or *SQL naming convention when creating new DB objects with Operations Navigator, refer to document number 9510127 in the Support Line Knowledge Base at: http://as400service.ibm.com/supporthome.nsf/document/10000051 Chapter 7. Database administration 163 7.2.1 Database functions overview In the Operations Navigator window, right-click Database to access the pop-up menu shown in Figure 7-2. Figure 7-2 Database pop-up menu functions The possible actions are:  Explore: The right pane displays the three other major database function areas: – Libraries – Database Navigator – SQL Performance Monitors Important iSeries software requirements: Base OS/400 provides SQL “run time support”, not “program development for SQL support”. 
Run time support includes the following uses of SQL with no SQL software installation required:  All Open Database Connectivity (ODBC) support, which includes Operations Navigator functions and Run SQL Scripts jobs and client workstation jobs using Client Access ODBC support, such as a Visual Basic program  All Java Database Connectivity (JDBC) support, which includes client workstation Java applets and local iSeries Java servlets accessing JDBC  DB2 Universal Database for iSeries support from an already compiled (created) local iSeries program using embedded SQL in the RPG, COBOL, or C program  DB2 Universal Database for iSeries support from an already compiled (created) local iSeries program using the SQL Call Level Interface (CLI) in RPG, COBOL, C, or Java  Use of the RUNSQLSTM command To use DB2 Query Manager support or to compile (create) local iSeries programs using embedded SQL, such as iSeries RPG, COBOL, and C programs, you must have licensed program DB2 Query Manager and SQL Development Kit, 5722-ST1 (5769-ST1 for releases prior to V5R1M0). This is for program development support. 164 Advanced Functions and Administration on DB2 Universal Database for iSeries  Open: This is the same as choosing Explore, except that the contents of the selected file system are displayed in a separate window.  Change Query Attributes: This enables you to specify attributes for database queries and database file keyed access path (index) builds, rebuilds, and maintenance that are run in a job. Query attributes may be specified through the OS/400 Change Query Attributes (CHGQRYA) command. In Operations Navigator, Change Query Attributes provides a graphical interface to apply a superset (more than CHGQRYA provides) of query attributes as stored in a file. These attributes can be applied to one or more active jobs that can be selected from a list. OS/400 supplies a read-only version of the query attributes file–QAQQINI in library QSYS. Run SQL Scripts within Operations Navigator defaults to using the QAQQINI file in library QUSRSYS. You must copy the base QAQQINI file in QSYS into library QUSRSYS if you want Operations Navigator to use its values system wide. Or use the following CL command in the Run SQL Scripts window to default your job to another library where you previously copied the QAQQINI file: CL: CHGQRYA QRYOPTLIB (yourlib); If there is no QAQQINI file in QUSRSYS, internal defaults are used. You can use the Change Query Attribute graphical interface to easily make a copy of the default QAQQINI file in a library of your choice and to change the default values to what is most suitable for your job. We document this interface in 7.4, “Change Query Attributes” on page 217. Any changes to the attribute values are typically determined by an experienced query programmer. You can find the best explanation of how to use these attributes in DB2 UDB for iSeries Database Performance and Query Optimization, which you can find on the Web at: http://submit.boulder.ibm.com/pubs/html/as400/bld/v5r1/ic2924/index.htm In addition to CHGQRYA, you can specify a subset of the query attributes available under Operations Navigator through the OS/400 system values QQRYTIMLMT (time limit) and QQRYDEGREE (degree).  Current SQL for a Job: With this feature, you can select any active job on the iSeries server and display it through the automatically linked Run SQL Scripts option, the SQL statement, if any, currently being executed in the job. 
In addition to displaying the SQL statement, you can edit or rerun it; you can also display the job log for the selected job or end the job. This can also be used for database usage and performance analysis, linking into the Visual Explain tool documented in Chapter 10, “Visual Explain” on page 301.  Run SQL Scripts: This enables you to enter, edit, run, save, and debug SQL statements across tables within all libraries (includes SQL collections). You can run all supported SQL statements from this action. OS/400 provides a set of base SQL statements for all supported functions that you can select and insert into your SQL statements. You can enter a completely new SQL statement or modify an already available statement for your own unique queries. You can also run CL commands. You can save your own newly created or modified base statements for later use. You must have appropriate file or table and field or column authorities (permissions) to perform the functions at run time. Section 7.3, “Run SQL Scripts” on page 197, shows several examples of building and running SQL script. Restriction: You must have job control (*JOBCTL) special authority to use this function. Chapter 7. Database administration 165  Properties: This enables you to specify to refresh the current display every time a list is displayed or after a time interval is specified in minutes. There are several actions or functions available from the menu bar options for the Database, Libraries, Database Navigator, and SQL Performance Monitors function groupings. This chapter discusses a subset of all of these actions or functions. You must review the online help text for a description of the entire set of actions or functions. 7.2.2 Database library functions overview You can create, delete, and assign permissions (authority) to an OS/400 library under this group of functions. You can also display the list objects within a library and create, change, delete, or assign authorities (permissions) to an SQL table, view, alias, index or OS/400 journal, or OS/400 journal receiver listed within the library. Figure 7-3 shows an example display after expanding the Libraries function and then right-clicking Libraries to see the context menu. Figure 7-3 Database library actions In this example, you see the library names ITSCID63, PORTERL, QGPL, and SQLLIB. These libraries were currently specified in the Initial Library List (INLLIBL) parameter of the OS/400 job description object used by the Operations Navigator session to the iSeries server. The job description is associated with the OS/400 user profile you used to sign on under Operations Navigator when connecting to your iSeries server. By default, only the libraries in the user portion of your iSeries library list are included under the Database component, plus any other library you asked to add here in previous Operations Navigator working sessions. You can add more libraries. Simply click Select Libraries to Display in the pop-up window by either entering a library name or selecting from a list of library names on the system. Then, click the Add button as shown in Figure 7-4. 166 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 7-4 Database: Adding a library to your library list This change is retained across subsequent new Operations Navigator working sessions. 
This is done by maintaining a table on the host iSeries, QAUGDBLL in QUSRSYS, which lists all users for the Database portion of Operations Navigator and all the libraries that were chosen to work with using this interface. If you want to remove a library from this list, repeat the previous steps, but select Remove. 7.2.3 Creating an OS/400 library or collection There are several Operations Navigator higher level branches from which you can create an OS/400 library. This section discusses creating a library by selecting Database->Libraries and going to the New Library function as shown in Figure 7-5. Note: Any library added here may be used when you perform actions or functions under this Libraries function of Database. Any library added here is not automatically used by the actions or functions under other database sub-components such as Run SQL Scripts. Nor is it added to the user’s library list on the iSeries server. In other words, while the original list of libraries shown in Operations Navigator is built on the user’s library list, a change in the library list for this interface is not going to affect the user’s library list on the iSeries server. Chapter 7. Database administration 167 . Figure 7-5 Database: Creating a new library A library can contain any supported iSeries object type. However, under the Operations Navigator’s Database interface, you only work with objects related to database support. Under the New Library function, you can create a new library, or you can create an SQL collection. In OS/400, an SQL collection automatically builds a library, and within that library, it creates:  A journal  A journal receiver  A catalog  Optionally, a data dictionary A data dictionary is used for migrated System/36 application environments. When you select the Add to list of libraries displayed check box (circled in Figure 7-5), the newly created library is added to the user’s list of libraries in Operations Navigator working session. This has no affect on the iSeries library list. The library can be placed into the system auxiliary storage pool, ASP1 (default) or a user-defined ASP2 (up to 32). An ASP is a defined set of disk devices that contain only objects created into a library within that ASP. As shipped from IBM, ASP1 contains all disk devices. A user-defined ASP is typically used for a specific performance requirement to reduce disk arm movement or for a specific backup and recovery procedure. For more information about ASP support, journaling support, and overall backup and recovery, please refer to:  iSeries Information Center at: http://www.iseries.ibm.com/infocenter When you reach this site, select Backup, Recovery, and Availability.  Backup and Recovery, SC41-5304 All database-related objects, such as tables, views, journals, and other system objects, like programs, message queues, output queues, and so on, can be created, moved, or restored into any iSeries library. All iSeries tables or files can be created or moved into an SQL collection if the SQL collection does not contain a data dictionary. 168 Advanced Functions and Administration on DB2 Universal Database for iSeries An SQL collection can also contain catalog views that have descriptions and information for all tables, views, indexes, files, packages, and constraints created in the library. All tables created in the SQL collection automatically have journaling performed on them. 
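The New Library dialogue with the collection option performs the equivalent of the SQL CREATE COLLECTION statement. As a minimal sketch only (the collection name SALESLIB is an assumption, not one of the libraries used in this chapter), the same kind of collection can be created from the Run SQL Scripts window or any other SQL interface:

-- Creates library SALESLIB together with a journal, journal receiver, and catalog views
CREATE COLLECTION SALESLIB;

Tables subsequently created in SALESLIB are journaled automatically, exactly as described above for collections created through Operations Navigator.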
When referring to an SQL collection in iSeries documentation and screen panels, the collection name and the library name refer to the same object. 7.2.4 Library-based functions Refer to the display shown in Figure 7-6 to see an overview of the functions that are available when you select a specific library. These database functions include:  Assigning authorities (Permissions) to the library and objects within the library  Totaling the number of files and folders (directories) in the library and the storage size of all objects within the library (Properties)  Creating new tables (Table) in the library  Creating new views (View) and aliases (Alias) in the library A view is an object that permits access to a subset of all rows in a table and columns within a row. An alias is an object that allows SQL applications to reference a table or view by another name. In addition, aliases provide an easy way for SQL applications to access data in multiple-member DB2 UDB for iSeries files. In SQL standards, a table represents only one set of data (rows). OS/400 file support includes multiple members, which are sets of records or rows that contain the same field or column definitions, but different sets of data. For example, the MONTHS file can contain a set of rows for January data (member name JAN) and another set of rows for February data (member name FEB). At run time, a command parameter for Member Name (MBR) could specify JAN one time and FEB another time. Opening an SQL alias provides an equivalent function.  Creating new journals to be used with the tables, views, or aliases  Creating new SQL procedures  Creating new user defined functions (Function)  Creating new user defined types (Type) Chapter 7. Database administration 169 Figure 7-6 Database Library functions To bring up the Library functions, right-click a specific library (PFREXP in our example). In this example, we double-clicked PFREXP or selected the Open option in the pull-down menu (1) to see the database-related objects in this library in the right pane. By selecting New in the Library pull-down menu, the next level of objects to create (Table, View, and so forth) are shown in the menu (2). Before we explain more about creating these objects, we discuss the existing objects shown for library PFREXP. We created an alias CSTFILTST (3), which accesses the CSTFIL file, with the member name CST. CSTFIL is shown as a table (7), but was originally created with the OS/400 Create Physical File (CRTPF) command. SQL index object ITMFILIX is at 4. This object was created, based on the ITMFIL table. Section 7.2.5, “Object-based functions” on page 181, explains how to create an index. OS/400 files created through the Create Physical File (CRTPF) and Create Logical File (CRLF) OS/400 commands have access paths (indexes) if key fields are specified, but they are not visible as a separate object of type index. The BUPJRN journal is at 5. Each journal can have one or a pair (dual) of attached journal receivers (where actions on the table or data within the table are actually recorded). BUPJRA and BUPJRB, as the original dual journal receivers, are shown at 6. In our example, BUPJRA and BUPJRB already reached their maximum space for journal entry information. Through journal configuration parameters, a second set of dual journal receivers, BUPJRA1000 and BUPJRB1000, have been created by OS/400, with the system generated 1000 suffix. They are now the attached (receiving entries) journal receivers. 
A series of tables including CSTFIL, CSTMSTP, CSTMSTRP, and ITMFIL are shown at 7. The two views CSTMST and CSTMSTR are shown at 8. 3 3 4 5 6 7 8 1 2 170 Advanced Functions and Administration on DB2 Universal Database for iSeries Physical file and SQL TABLE differences The OS/400 Create Physical File (CRTPF) command and the SQL CREATE TABLE statement (implicitly used by the Operations Navigator New->Table dialogue) create an OS/400 object type of *FILE. There are CRTPF command OS/400 parameters that have no corresponding CREATE TABLE parameter. These parameters are part of every *FILE object within OS/400 and affect the operating environment when accessing the file or table. Therefore, when you use an SQL-based interface to create a table, OS/400 uses default values for these CRTPF-only parameters. These CRTPF-only parameters include:  Maximum members (MAXMBR parameter): OS/400 physical files can have multiple members (same record layout and field attributes, different sets of records or rows). All SQL tables default to a value of 1. This is also the default for CRTPF, but the user can specify a number or *NOMAX (no limit on the number of members).  Member size (SIZE parameter): OS/400 uses the number of records or rows value to implicitly allocate the initial amount of storage for the file or table. Other values in this parameter optionally specify how to allocate additional storage when the initial storage is exceeded. CRTPF defaults to 10000 records with up to an additional allocation of up 3000 records in 1000 record increments. A system operator message communicates each additional allocation. CREATE TABLE defaults to *NOMAX.  Reuse of deleted record or row storage (REUSEDLT and DLTPCT parameters): When a row or record is deleted, the storage previously occupied by the record or row remains as part of the total file or table storage allocation. DLTPCT is the percent of deleted records or rows compared to all active records or rows in the file or table. At file or table close time, if the number of deleted records or rows exceeds this percentage, a message is issued to the OS/400 History Log (viewed with the Display Log (DSPLOG) command). REUSEDLT specifies to OS/400 whether to insert a new record or row into a new physical storage space (REUSEDLT(*NO)) or into the storage of a previously deleted record or row (REUSEDLT(*YES). CRTPF defaults to DLTPCT(*NONE) and REUSEDLT(*NO). CREATE TABLE defaults to DLTPCT(*NONE) and REUSDLT(*YES). You can specify, change, and view the values for these and additional OS/400 parameters for a file or table by using the following OS/400 commands:  Create Physical File (CRTPF) command  Change Physical File (CHGPF) command  Display File Description (DSPFD) command Note: Regardless of the DLTPCT and REUSDLT parameter values for a file or table, you may have an application environment that you know or suspect may have files or tables with a large number of deleted records (for example, disk storage is increasing with no known increase in the number of new records). In this case, you should consider running the OS/400 Reorganize Physical File Member (RGZPFM) command or its equivalent Operations Navigator Database Reorganize function (see “Managing tables and views” on page 182) on a specific file or table. You can use the DLTPCT parameter message to assist you. 
Alternatively, you can periodically use the Display File Description (DSPFD) command with TYPE parameter specifying *MBRLIST to see both the number of records or rows in the file or table and the number of deleted records in each member of the file or table. Chapter 7. Database administration 171 For more information on these and other file attributes, refer to Database Programming, SC41-5701, and CL Reference, SC41-5722. You can view the above mentioned settings and other file or table parameters, such as database constraints and triggers, in Operations Navigator by right-clicking the table and selecting the menu options Table Description and Properties. Create Table example To create a new table (or file) on the iSeries server with the traditional interface, you can use DDS or the CREATE TABLE SQL statement. In both cases, you need the appropriate skill, whether it is programming with DDS or SQL knowledge. Follow these steps in Operations Navigator to create a new table: 1. Click Database->Libraries. Then, right-click the library PORTERL in which you want to create the new object. You are presented with a list of choices, as shown in Figure 7-7. Figure 7-7 Create Table example (Part 1 of 3) 2. Select New->Table to access the panel where you specify the table name and description for the new table (see Figure 7-8). Figure 7-8 Create Table example (Part 2 of 3) 3. Click OK and you see the panel where you can specify the columns for the new table (see Figure 7-9). Click the Insert button (1) to insert a new column, and specify the column 172 Advanced Functions and Administration on DB2 Universal Database for iSeries name, type, length, and an optional description. You can also specify a short name (up to 10 characters), column heading (up to three lines of 20 characters each), must contain a value (not null), default value, CCSID and a length to allocate (for VARCHAR and VARGRAPHIC datatypes). Figure 7-9 Create Table example (Part 3 of 3) 4. Use the pull-down list in the Type column (2) to choose the data type for the column. The content of this list depends on the version and release of OS/400 installed on your iSeries server. Since V4R4 DB2 UDB for iSeries added support for BLOB, CLOB, DBCLOB, and datalink data types, these values only appear in the list if your iSeries server is running V4R4 or a later release. 5. When finished with inserting columns, click OK to create the table or select any other item you may need to work on (constraints, indexes, triggers, etc.). Create SQL View example A view is typically used to represent a subset of the columns in a table and, if specified, a subset of the rows in the table. For example, you have a customer table file that has several columns describing the customer, including customer number (key field), customer description, customer address, and customer telephone number. You want to show someone the customer number, customer name, and customer telephone number, but not their address. You also know that customers with a customer number greater than 500 do not want their telephone numbers known. The following steps show you how to create a view (CUST_DIMVU) over the CUST_DIM table: 1. Right-click the library (PORTERL), and select New->View to access the new view panel shown in Figure 7-10. 2 1 2 Chapter 7. Database administration 173 Figure 7-10 Create View example (Part 1 of 6) 2. Enter the name, CUST_DIMVU, for the new view and the description. 
The Check option specifies whether some type of data validity checking will be performed on an update or insert operation. You must view the help information for additional details. We selected None (default) in our example. 3. Click OK to see the panel with blank input areas, as shown in Figure 7-11. Click the Select Tables button (1) to bring up the current library list for your current Operations Navigator working session. Figure 7-11 Create View example (Part 2 of 6) 4. Previously we selected library PORTERL to create the view. However, the view can be created to use tables in various libraries. To keep this example simple, we select the tables from library PORTERL. 5. To see the tables within a library, either click the + (plus) sign next to the library name or position the mouse on the library and double-click. Select the table, and click the OK button, which places the column names in the upper pane in the area (2 in Figure 7-11). 6. As shown in Figure 7-11, select your table from the library. We selected the CUST_DIM table from the PORTERL library, and clicked OK. We can select another table from the 2 1 174 Advanced Functions and Administration on DB2 Universal Database for iSeries library. We selected the PART_DIM table from the PORTERL library and then clicked the Add button. This places the columns of both the CUST_DIM (1) and PART_DIM (2) tables into the upper pane (see Figure 7-12). We chose two tables to show an example of how Operations Navigator assists you in building SQL statements that could become quite complex. As shown in Figure 7-12, we selected the CUSTKEY and CUSTOMER columns from CUST_DIM and dragged and dropped them into the lower pane. You can see the arrow to the left of the CUSTOMER column (3), which indicates that the next column selection will be inserted after this statement. Figure 7-12 Create View example (Part 3 of 6) You can reposition this arrow for the next insert of a new column by clicking any existing column in the lower pane. In this example, we selected the PHONE column, but have not yet dragged it to the lower pane. In this example, we create a view using only columns from the CUST_DIM table. If you select multiple tables to appear in the upper pane, Operations Navigator expects a JOIN clause in the VIEW statement and issues a message indicating this later if you continue showing more than one table in the upper pane. Since we are only going to use the CUST_DIM table, we select the PART_DIM table in the upper pane and press the Delete key to delete this table from the upper pane. The PART_DIM column names no longer appear in the following displays. 7. As shown in Figure 7-13, we completed a column selection for the CUST_DIMVU view and clicked the Select Rows button. The Select Rows button enables a WHERE clause. The Select Rows window shows the table columns, operators, and functions available in the upper pane. Once a column operator or function is selected (by double-clicking), it is inserted into the Clause pane. You may also manually enter your own text into the Clause area as we did by entering the value 500. Note: If you click the Summary Rows button, the HAVING clause is enabled. 1 2 3 Chapter 7. Database administration 175 Figure 7-13 Create View example (Part 4 of 6) As soon as you have at least one SQL column in the Table pane (1) or text in the Clause pane (2), you can use the Show SQL (3) button to view the current SQL statement. We clicked the Show SQL button to generate the Show Generated SQL window shown in Figure 7-14. 
Figure 7-14 Create View example (Part 5 of 6) 1 2 3 176 Advanced Functions and Administration on DB2 Universal Database for iSeries 8. In this window, click the Check Syntax button to view the generated SQL and have syntax checking performed. You cannot edit any text on this window. 9. If you are satisfied with the current SQL statement, you can click the OK button twice on successive windows, and the View is created, assuming no errors are detected. Depending on your Operations Navigator refresh setting, a new view appears in an updated display showing the contents of the library, such as the example shown in Figure 7-7 on page 171. 10.To edit the generated SQL, click the Edit SQL button (1), which opens the Edit Generated SQL window shown in Figure 7-15. Figure 7-15 Create View example (Part 6 of 6) 11.In Figure 7-15, the SQL statement area now has a white background. Here, you can enter any characters and also have your syntax checked by using the Check Syntax button (2). 12.After we validated the SQL syntax, we clicked the Submit button (3 in Figure 7-15). Then, the view was created successfully as indicated by the Information window shown. Edit SQL tip: If you make changes through this Edit SQL process, the changes are not saved. You may make changes and successfully create the view as we have done by using the Submit button. However, the changes are not saved in this dialogue because you must exit the Edit SQL function by clicking the Cancel button or using the Windows cancel (X button). SQL changes are not saved because they could be extensive. You can even change the name of the view and the library that is already specified. 1 2 3 Chapter 7. Database administration 177 Create journal example A journal is an object used to record actions on database tables or files and other objects or software that support journaling, such as system auditing. For DB2 UDB for iSeries, journals are typically used to recover from application errors or unscheduled iSeries server outages. Commitment control, as discussed in 7.3.1, “ODBC and JDBC connection” on page 202, requires journaling to implement its COMMIT and ROLLBACK functions. OS/400 uses the journal object as a front-end interface to an attached object, which is a journal receiver that actually contains the journaled data. Each set of related journal data is recorded as a journal entry. Examples of non-DB2 UDB for iSeries software functions that optionally use journals and journal receivers include:  OS/400 security: Action auditing  OS/400 job accounting  TCP/IP-based functions, including IP filters, IP network address translation (NAT), and virtual private network (VPN)  OS/400 software license management tracking Applications can also use OS/400 commands and System Application Program Interfaces (APIs) to write to and read journal entries. OS/400 supports defining and using remote journals as well. A journal associated with a local journal can be defined to reside on a remote iSeries server. The remote journal can be defined so that OS/400 automatically sends journal entries made on the local iSeries server to the corresponding remote iSeries server journal. The primary intent of remote journal support is to quickly and easily replicate data onto a backup iSeries server in a high availability environment where the backup iSeries server can switch over to become the production iSeries server, if an unscheduled outage occurs on the primary iSeries server. 
To create and set up remote journaling through Operations Navigator, you must first create the local journal and journal receiver. Then use the Properties support for the journal to access the actions that set up a remote journal. “Managing journals and journal receivers” on page 192 shows an example of journal and journal receiver properties.

The following example shows how to create a local journal (CUST_DIMJ) in the PORTERL library and create its associated journal receiver in the JRNLIB library:

1. Right-click the PORTERL library. Select New->Journal to access the New Journal panel (1 in Figure 7-16).

Figure 7-16 Creating the Journal and Journal Receiver

2. In the New Journal panel, enter the journal name, library, and description. Name the library to hold the journal receiver. You can select a library from the current Operations Navigator session’s library list, except for the library named to contain the journal. In our example, the PORTERL library would not appear in the list. Although you can place a journal receiver in any library you want, including PORTERL in our example, the OS/400 recommendation is to place the journal receiver in a library separate from the library that contains the journal itself. Another recommendation for OS/400 journaling support is to place the library used for the journal receivers in its own user-defined ASP. In our example, we specify library JRNLIB to emphasize a different library for the receiver. JRNLIB must already exist.

3. Click the OK button in the New Journal panel (1). The journal is created, along with an attached journal receiver with a default name and default attributes. If you click the Advanced button, you see the Advanced Journal Attributes panel (2 in Figure 7-16) with the default attributes that were used in our example to create the CUST_DIMJ journal. If you click the New Receiver button, you see the New Journal Receiver pane (3 in Figure 7-16), which shows the default new journal receiver attributes.

Once the journal is created, right-click it and select Properties from the drop-down menu. The Properties window appears as shown on the right-hand side of Figure 7-17. On this window, you can start journaling for a table or a group of tables. To do this, click the Tables button.

Figure 7-17 Selecting the tables to journal (Part 1 of 2)

When you click the Tables button, the Start/End Journaling panel (Figure 7-18) appears. On this display, select the CUST_DIM table and click the Add button to the left of the Tables to journal pane to add it to the list of tables to be journaled in the CUST_DIMJ journal. Click OK to start journaling the CUST_DIM table.

Figure 7-18 Selecting the tables to journal (Part 2 of 2)

The following summary describes the key journal and journal receiver attributes. For a full discussion of these journaling attributes, refer to Backup and Recovery, SC41-5304.

Advanced journal attributes
Listed here are the advanced journal attributes. Refer to the Advanced Journal Attributes window (2 in Figure 7-16).

- Journal message queue: OS/400 issues specific messages for specific changes to the journaling environment.
The typical reason for a journaling message is that a journal receiver is reaching its threshold of maximum entries; a message is issued indicating that the current receiver should be detached and a new journal receiver should be attached. The default message queue is “System Operator”, which is actually message queue QSYSOPR. In some environments, you may choose to manage your own journaling support, or you may have an application that manages the journaling through software. In those cases, you may want to use a message queue other than QSYSOPR.

- Receiver managed by – System: By clicking System, you tell OS/400 to automatically detach the current journal receiver and attach a new one when the journal receiver storage space threshold has been reached or when the attached journal receiver’s sequence number has reached a value of 1 TB. Each time the system attaches a new journal receiver to the journal, the journal receiver sequence number is incremented by one. In addition, the system resets the receiver sequence number during IPL, provided the receiver is not required for commitment control recovery. See Commit mode under 7.3.1, “ODBC and JDBC connection” on page 202, for information on commitment control. Under system-managed receivers, you can also specify that OS/400 delete receivers when they are no longer needed. If you do not choose this option, the detached receivers remain on the system until you delete them.

- Receiver managed by – User: By clicking User, you assume the responsibility for changing journal receivers and determining when to delete receivers you no longer need.

- Minimize fixed portion of entries: By clicking this option, you remove job, program, and user profile information from each journal receiver entry. In a busy journaling environment, this can significantly reduce the storage space required, but it restricts selectivity by other OS/400 journal entry support.

- Remove internal entries: Depending on what is being journaled, OS/400 sometimes puts its own entries into a journal receiver. By selecting this option, OS/400 deletes these entries from the journal receiver when the system determines they are no longer needed. A good example of these internal entries is those made to support System Managed Access Path (table index) Protection (SMAPP). SMAPP journals changes to access paths (that is, key columns or fields) independent of whether you journal database tables or files. SMAPP is intended to minimize access path recovery following an abnormal system termination; journaling access path changes helps SMAPP do this. To enable SMAPP, you use the OS/400 Edit Recovery for Access Path (EDTRCYAP) command, as explained in Backup and Recovery, SC41-5304.

New journal receiver attributes
Listed here are the new journal receiver attributes. Refer to the New Journal Receiver window (3 in Figure 7-16 on page 178).

- Journal receiver name and description: Enter the journal receiver name and journal receiver descriptive text. As shown, the name and description are the default values generated by Operations Navigator. These values are used if you never select the New Receiver button in the Advanced Journal Attributes pane.

- Library: Enter the journal receiver library. The default value shown (JRNLIB) was specified on the initial New Journal panel (1 in Figure 7-16 on page 178).

- Storage space threshold: Enter the maximum storage in megabytes that the journal receiver can take. You see the default value of 500 MB.
The value 500 MB is specified as 500000 KB on the corresponding OS/400 Create Journal Receiver (CRTJRNRCV) command Threshold parameter. The number of journal receiver entries this space can contain depends on the amount of data contained in each entry. When this threshold is reached, a message is sent to the message queue specified in the window pane (2 in Figure 7-16 on page 178). See the online help information for additional details.

In addition to the powerful Operations Navigator interface for creating and managing journals and journal receivers discussed in this section and in 7.2.5, “Object-based functions” on page 181, there are several OS/400 journal creation and management commands. To view these commands and access the related online 5250 display-based help information, enter the following command on a 5250 command line:

   GO CMDJRN

7.2.5 Object-based functions
When you right-click a specific database-related object, a pull-down menu appears with functions that are unique for that object type. At this specific object-level interface, you have some additional create functions and a wide range of management functions. Object-based functions for a database include:
- Managing a table and view
- Adding and managing constraints and triggers for a table
- Assigning and changing authorities and permissions to these objects
- Creating and managing an index for a table
- Managing a journal
- Adding and managing an associated journal receiver or a remote journal

Managing tables and views
Right-clicking a table brings up a menu similar to the example shown in Figure 7-19.

Figure 7-19 Managing table actions

For the CUST_DIM table, these are the actions:
- Open: This displays, in the right pane, the first “n” rows of the table. The number of rows and columns displayed depends on the window size, which can be adjusted to be shorter or longer (fewer or more rows) or narrower or wider (fewer or more columns). With the right permissions, you can update columns, delete rows, and insert new rows. OS/400 issues an error message if you try to make invalid changes to a table. See “Open table example” on page 183.
- Quick View: This displays the table data as Open does, but is a read-only view. No changes can be made to the data.
- Table Description: Using this item, you can gather similar information as you would using the DSPFD command on the iSeries server. However, in Operations Navigator, you are also allowed to change some attributes, such as Reuse of Deleted Records and Share Open Data Path. See “Table Description example” on page 184 for details.
- Locked Rows: When the table is in use, this displays whether records are locked, their relative number, which job (fully qualified job name) is actually locking them, and whether the lock type is Read or Update. From this panel, it is also possible to access the locking job’s job log, check what SQL statement is used, and copy it to a Run SQL Scripts instance to work with it. The interface also allows you to end the locking job. “Working on locked rows” on page 196 explains how to use it.
- Create Alias: An alias is an object that allows SQL applications to reference a table or view by another name. In addition, aliases provide an easy way for SQL applications to access data in multiple-member iSeries physical files.
- Reorganize: This enables you to reorganize the rows within the table according to a specified table key or a named index, or by compressing the storage currently occupied by deleted rows. If your application frequently inserts new rows and then deletes them, such as in a work file, you should consider using the compression of deleted rows function.
- Journaling: This option displays information about any journal that is currently or was last associated with a table. If the status shows “Never journaled”, you can start journaling the table by specifying the name of an existing journal in an existing library, selecting the Journal images before change option (to journal both before and after images), and clicking Start.
- Generate SQL: Creates the SQL source statement for the table. See Chapter 10, “Visual Explain” on page 301.
- Permissions: This enables you to view and change user profile and public authority or permissions to the table and its columns. See the security chapter in Managing OS/400 with Operations Navigator V5R1 Volume 1: Basic Functions, SG24-6226, for a general discussion on Operations Navigator Permissions support.
- Cut: This enables you to select a database object and drag and drop it to a different library. When the drop is completed, the database object is deleted (cut) from the original library.
- Copy: This enables you to select a database object and drag and drop it to a different library or into the same library. When the drop is completed, the database object exists in both the source and the target libraries.
- Delete: This enables you to select a database object and permanently delete it after you confirm the delete of the object.
- Rename: This enables you to select a database object and rename it.
- Properties: This enables you to select a database object and display its properties. Different property values are displayed depending on the object type. Also, depending on the object type, you may be able to add, change, or remove property values. For example, when you click Properties for an SQL-created view, you can see a read-only view of the SQL used to create the view. If you click Properties for a view created by the Create Logical File (CRTLF) command, you see only a message panel that states there is no SQL statement available.

Open table example
Figure 7-20 shows some example windows when performing an insert, delete, or update to a table through Operations Navigator. This is the equivalent of using the Update Data (UPDDTA) command to make changes to the records in a physical file.

Note: Although both the OS/400 Create Physical File (CRTPF) command and SQL CREATE TABLE create an OS/400 object of type *FILE, there are CRTPF command parameters that have no corresponding SQL CREATE TABLE parameter. Therefore, creating a table either via CREATE TABLE or by using the Operations Navigator New->Table interface requires OS/400 to use default values for these physical file parameters. One such parameter is the Reuse deleted record storage (REUSEDLT) parameter. See “Physical file and SQL TABLE differences” on page 170 for notes on creating a new table.

Figure 7-20 Open table example

In the Insert or Delete window (1), you can insert a completely new row or delete an existing row, such as row 7 (customer number 7) in this example. For insert, you must enter the data according to each column’s valid data format or you receive a warning message.
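The same kinds of row changes can be made with SQL statements rather than the Open dialogue. The following is a minimal sketch only: the CUSTKEY value 99 and the character values are invented for illustration, only a few CUST_DIM columns are shown (an INSERT against the real table may need values for its other columns), and the system naming convention is assumed:

   INSERT INTO PORTERL/CUST_DIM (CUSTKEY, CUSTOMER, PHONE)
     VALUES (99, 'SAMPLE CUSTOMER', '507-555-0100');
   UPDATE PORTERL/CUST_DIM SET PHONE = '507-555-0199' WHERE CUSTKEY = 99;
   DELETE FROM PORTERL/CUST_DIM WHERE CUSTKEY = 99;

As the next paragraph cautions, without journaling or commitment control such changes are immediately permanent.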
If you attempt to delete a row or update a column in a row (update window (2)), you see a warning message window similar to the one that is shown (3). This message cautions about recovering the original data if the table is not being journaled.

Table Description example
Right-click a table and select Table Description to see context information similar to what you see with the OS/400 DSPFD command. This interface provides information formatted on different notebook pages and is structured as follows:

- General tab (Figure 7-21): Displays such information as the member name, its description, size, number of current and deleted rows (making it easy to judge whether a reorganize can be useful), and the maximum percentage of deleted rows allowed. It also gives you the ability to change the Reuse Deleted Rows (REUSEDLT) parameter and the table description.

Figure 7-21 Table Description panel: General

- Allocation tab: Allows you to verify whether there is a maximum number of rows set for the table, what the initial number of rows was, its subsequent increase, and whether there is a value set for forcing the writing of updates to auxiliary storage, and to change these settings.
- Access Path tab: Shows the current size of the access path, the maximum size, the maximum key length, whether the access path is valid or shared, whether it is journaled, what the maintenance and recovery of the access path are set to, and other options for further details.

Note: The Access Path tab is only visible for non-SQL described files. For SQL files, use the Properties option to see indexes and views or display their individual descriptions.

- Usage tab: Contains information on creation, modifications, and backups, and whether this file's data can share open data paths across jobs (SHARE *YES/*NO). The Share open data path check box offers an easy way for you to change this setting for the table.
- Activity tab: Allows you to record the level of activity, documenting such information as the number of insert, update, and delete operations, the logical and physical reads, clear operations, index builds and rebuilds, full opens and closes, reorganize operations, and the number of rows rejected in open operations (by key, non-key, and group by/having selection methods). This tab also contains important information regarding the number of valid and invalid indexes built over the table.
- Table Details tab (Figure 7-22): The last tab provides information on:
  - Creation
  - Number of allowed members
  - Maximum time a program is to wait for the file and its rows to be available
  - Maximum row length
  - Sort sequence
  - Language identifier
  - Format level check and identifier
  - Allowed activity level
  - Unique identifier for this table in the system
  - Disk pool it is using and whether it is a distributed file

Some of the above values can be changed using this interface.

Figure 7-22 Table Description: Table details

Changing properties: You must ensure that proper authorization or permission has been given to the Operations Navigator user to access the Table Description and Properties functions for the object. You must also ensure the authorized user understands the importance of any table modifications they make. For example, the properly authorized user can delete fields or columns and, therefore, lose the associated data.
Programs created (compiled) against a table that has a field or column added or removed may encounter an error during the next file or table open. Through the Create Physical File (CRTPF) or Change Physical File (CHGPF) command, you can specify the Level Check (LVLCHK) parameter. A table with LVLCHK(*YES) specified detects the added or removed column during file open. Re-creating the program usually resolves the problem if the program does not need to use the column. A program that is already performing its own column validity checking performs unnecessary duplicate processing if a check constraint is added to the table.

Table Properties example
Right-click a table and select Properties to display all the table properties. We use the initial properties panel (Column information), shown in Figure 7-23, to discuss table properties:
- Column properties
- Key constraints
- Indexes
- Referential constraints
- Triggers
- Check constraints

Figure 7-23 Table Properties example

Column properties
As shown in Figure 7-23, we moved the cursor to the CNAME field or column, as indicated by the arrow to the left of the column and the highlighted column description. In the upper column list, you see the column data type and length. In the lower pane, you see some information about the column, including the Coded Character Set Identifier (CCSID). The CCSID numeric value specifies how character data is stored on your system. For user-created tables, the character data is stored by default in the format according to your primary language ID. For example, on the systems used for this redbook, the OS/400 Language ID system value QLANGID is set to ENU – English for United States (uppercase and lowercase). The default CCSID value for ENU is 37, as shown in our Properties example. For more details on CCSID support, refer to AS/400 National Language Support, SC41-5101.

The Browse button leads to a dialogue in which you can view other tables that you may want to use as a base definition to add (copy in) a new column to table CSTFIL. The New button enables the Column window shown at the top of Figure 7-23 to accept a new column definition. In the Column window, you enter the appropriate definition information. Select a column in the Column window and click the Delete button to remove the column from the table.

You can make other changes or additions to the table and, when finished, click the OK button to make the changes permanent. The changes or additions are run as if you had entered the ALTER TABLE SQL statement. If the table was created with the CRTPF command, the original file is deleted and the new file is re-created. Deleting a field or column also deletes the associated data.

Key Constraints
Constraints place controls on the actions allowed against an object or a portion of an object. The Key Constraints tab enables you to add, modify, view, or delete the primary key and unique keys for a table. You may modify a constraint only if it was defined during your current table editing session. If you added the constraint and then clicked OK on either the New Table dialog or the Table Properties dialog, you may only view the constraint.

A unique constraint is the rule that the values of the key are valid only if they are unique. Unique constraints can be created using the CREATE TABLE and ALTER TABLE statements. Unique constraints are enforced during the execution of INSERT and UPDATE statements.
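As a hedged sketch of the equivalent SQL (the constraint names are invented, CUSTKEY is assumed to be defined NOT NULL as a primary key requires, and CUSTOMER is assumed to be a column you want to keep unique), key constraints added through this tab correspond to ALTER TABLE statements such as:

   ALTER TABLE PORTERL/CUST_DIM
     ADD CONSTRAINT CUST_DIM_PK PRIMARY KEY (CUSTKEY);
   ALTER TABLE PORTERL/CUST_DIM
     ADD CONSTRAINT CUST_DIM_UQ UNIQUE (CUSTOMER);

Check constraints, described later in this section, use the same ALTER TABLE ... ADD CONSTRAINT form with a CHECK (condition) clause.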
A PRIMARY KEY constraint is a form of the UNIQUE constraint. The difference is that a PRIMARY KEY cannot contain any nullable columns.

Indexes
Indexes are your specific definition of key fields or columns and the order of those fields or columns within the complete key. During performance analysis, the OS/400 query optimizer may issue a job log message that recommends a new index be created to improve performance. You can use SQL CREATE INDEX or this tab dialogue to create a new index. The Indexes tab enables you to add, modify, view, or delete an index for the table with which you are currently working. You may modify an index only if it was defined during your current table editing session. If you added the index and then clicked OK on either the New Table dialog or the Table Properties dialog, you can only view the index.

Referential Constraints
A referential constraint is one where one or more columns of a table refer to values of columns in the table you are currently working on, or in another table that is referred to as the parent table for the current table. The Referential Constraints tab enables you to add, modify, view, or delete referential constraints for the table on which you are currently working. You may modify a constraint only if it was defined during your current table editing session. If you added the constraint and then clicked OK on either the New Table dialog or the Table Properties dialog, you may only view the constraint.

Triggers
DB2 Universal Database for iSeries has supported native high-level language (HLL) system (external) triggers since V3R1. A trigger is a program that initiates an action when an event (insert, update, or delete) occurs on a database file or table. Triggers can be initiated either before or after the event. Update triggers can differentiate whether a record or row was actually changed.

You need to be cautious when using triggers. They offer powerful functions without the knowledge of the calling program, but they are called synchronously. If they do too much work before returning control to the original program, you may observe performance degradation.

SQL Triggers are new for V5R1. For SQL Triggers, SQL code is used to create the trigger using SQL syntax. With native triggers, a program name is specified to execute (which could, of course, contain SQL).

New for V5R1 is the support for up to 300 triggers per table. You are now provided with the option to add or replace triggers when you associate a trigger with a database table. Also new in V5R1 is support for READ event triggers (system/external triggers only): a trigger program that executes when a record is read from a database table.

Since DB2 Universal Database for iSeries now supports both system (external) and SQL Triggers, the Operations Navigator Database component now interfaces to both kinds of trigger. Trigger definition and properties are part of the table Properties dialogue. See Figure 7-24. The Properties page for triggers has been changed to display a list of triggers for the table, which could include a mixture of both native and SQL Triggers. The user can then select a trigger and go to a more detailed Properties dialogue for that specific trigger. The properties dialogues are different for native and SQL Triggers.
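Before walking through the trigger dialogues in Figure 7-24 through Figure 7-28, here is a hedged SQL sketch of the kinds of statements that the Indexes, Referential Constraints, and SQL Trigger dialogues generate. The index, constraint, and trigger names, the ORDERS and CUST_AUDIT tables, and the trigger logic are all invented for illustration (only CUST_DIM comes from this chapter), and the exact trigger clauses accepted depend on the V5R1 SQL trigger support:

   CREATE INDEX PORTERL/CUST_DIM_IX1
     ON PORTERL/CUST_DIM (CUSTOMER);

   ALTER TABLE PORTERL/ORDERS
     ADD CONSTRAINT ORDERS_CUST_FK
     FOREIGN KEY (CUSTKEY) REFERENCES PORTERL/CUST_DIM (CUSTKEY)
     ON DELETE RESTRICT ON UPDATE RESTRICT;

   CREATE TRIGGER PORTERL/CUST_PHONE_TRG
     AFTER UPDATE OF PHONE ON PORTERL/CUST_DIM
     REFERENCING OLD AS O NEW AS N
     FOR EACH ROW MODE DB2ROW
     WHEN (O.PHONE <> N.PHONE)
       INSERT INTO PORTERL/CUST_AUDIT
         VALUES (N.CUSTKEY, O.PHONE, N.PHONE, CURRENT TIMESTAMP);

The General, Timing, and SQL Statements tabs described next roughly map to the event and column list, the BEFORE/AFTER timing and correlation names, and the triggered SQL statement, respectively.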
Figure 7-24 Defining an SQL Trigger

From the General tab screen (Figure 7-25), you can add an SQL Trigger to your table, specify the event to fire the trigger, and indicate whether the trigger applies to the whole table or just the selected columns.

Figure 7-25 Defining an SQL Trigger: General tab

The Timing page specifies the timing and frequency of the trigger and the resulting correlation names (Figure 7-26).

Figure 7-26 Defining an SQL Trigger: Timing tab

The SQL Statements page (Figure 7-27) contains the code for the SQL program that you are defining as a trigger. You can use the SQL statement examples and fill in the necessary information to make coding SQL easier. If you are adding a trigger to an existing table, you can check for syntax errors by clicking Check Syntax once you have the statement defined. A message is displayed for the first error detected, if any. To check for additional errors, click Check Syntax after the first error is fixed. This button is disabled when you add a trigger to a new table.

After an SQL trigger has been created, the SQL statements cannot be changed. You will have to delete and recreate the trigger to change the SQL.

Figure 7-27 Defining an SQL Trigger: SQL Statements tab

On the Add System Trigger dialog, you can add a system (external) trigger to your table. System (external) triggers use a program object that already exists on the system. The program must exist before the trigger can be added. On pre-V5R1 systems, only the program name and library fields are active. See Figure 7-28.

Figure 7-28 Defining a system trigger

Check Constraint
A check constraint is specified at the field or column level. A check constraint examines the validity of the data in one or more of the columns in the same table. The Check Constraints tab enables you to add, modify, view, or delete check constraints for the table on which you are currently working. You may modify a constraint only if it was defined during your current table editing session. If you added the constraint and then clicked OK on either the New Table dialog or Table Properties dialog, you may only view the constraint.

Database table constraints tips
Constraints offer powerful system-provided DB2 UDB functions that need to be understood before you use them. In addition to the Operations Navigator graphical interface to constraints, OS/400 provides several commands to support constraints, such as the Add Physical File Constraint (ADDPFCST) and Remove Physical File Constraint (RMVPFCST) commands. You can access the full range of OS/400 constraints support by using the OS/400 Work with Physical File Constraints (WRKPFCST) command. For additional constraints information, refer to:
- Operations Navigator online help information
- iSeries Information Center (http://www.iseries.ibm.com/infocenter); you can use the search word constraints
- Chapter 15, “Controlling the integrity of your database with constraints”, in Database Programming, SC41-5701
- Online help for the OS/400 commands on constraints, accessed through the Work with Physical File Constraints (WRKPFCST) command

Managing journals and journal receivers
As discussed in “Create journal example” on page 177, a journal and its attached journal receiver record the changes and actions made to a table.
Once you create a journal and its initial journal receiver, you can perform additional journal management by right-clicking either the journal or a journal receiver within a library. Figure 7-29 shows the actions that are possible on an existing journal.

Figure 7-29 Managing a journal

The actions are explained in the following list:
- Starts and ends journaling: This action starts or ends journaling for one or more specific files or tables. Clicking this action brings up the Start/End Journaling panel shown in Figure 7-30 on page 194. The start and end functions correspond to the OS/400 Start Journal Physical File (STRJRNPF) and End Journal Physical File (ENDJRNPF) commands. Journaling can also be started and ended from the item you obtain by right-clicking a table name.
- Swap receivers: Clicking this action immediately detaches the currently attached journal receiver and creates a new journal receiver by adding 1 to the sequential number suffix of the journal receiver name. You can also manually swap receivers by using either an option from the Properties action or the OS/400 Change Journal (CHGJRN) command.
- Permissions: This action lets you view and change the authorities to the journal.
- Delete: Clicking this action brings up a confirmation window for completing the journal deletion request or canceling it. The journal can only be deleted if journaling has been ended for all the objects being journaled to it.
- Properties: This action brings up a panel that shows the original create journal attributes, including journal receiver attributes and remote journal attributes, if any. You can also create a new journal receiver or remote journal by using the buttons that lead to additional panels. Figure 7-31 on page 195 shows an example of journal properties information.

We right-clicked the BUPJRN journal and selected Starts and ends table journaling. Figure 7-30 shows the Start/End Journaling display for BUPJRN after we performed some journal-related operations earlier.

Figure 7-30 Start/End journaling display

To start journaling for a file or table, you can select the table and either click the Add button or drag and drop the file name into the list box (1 in Figure 7-30). When all the tables you want journaled have been added to the list box, click the OK button. This starts journaling for these files or tables. Alternatively, you could have used the OS/400 Start Journal Physical File (STRJRNPF) command.

Notice the “Journal…” and the “Omit op…” column headings in the list box (1). The “Journal…” heading corresponds to the STRJRNPF command IMAGES (Record images) parameter. The “Omit op…” column heading corresponds to the STRJRNPF command OMTJRNE (Omit journal entries) parameter. If you click under the “Journal” or “Omit op” heading to the right of a file or table name, an “X” character appears. If you click again, the “X” disappears. An “X” under Journal means that both before and after record images are written to the journal receiver. If no “X” appears, only an after image is recorded in the journal receiver. An “X” under Omit op… means that file or table open and close actions are not recorded in the receiver. If no “X” appears, all actions on the journaled file or table are recorded in the receiver.
In the list box (2 in Figure 7-30), you see that the PFREXP/CSTFIL (system naming convention) table was already being journaled at the time the Properties action was selected. You can stop journaling for a file or table by selecting the file or table (listed in 2), clicking the Remove button, and then clicking the OK button. This function corresponds to the OS/400 End Journal Physical File (ENDJRNPF) command.

Journal Properties example
When we right-clicked the BUPJRN journal and selected Properties, the Journal Properties panel appeared as shown in Figure 7-31 on page 195. This shows the original parameters used to create the journal and enables you to make some changes and additions. The Tables button shows you the Start or End journaling panel we already described.

The Receivers button shows you the currently attached receiver and previously detached journal receivers still on the system. You can also add a new journal receiver. The Remote Journals button shows you the current status of a remote journal, if any. You can also add a new remote journal.

Figure 7-31 Journal Properties example

You can select the Swap receivers box and optionally specify either Continue or Reset for the sequence numbering to be used with the new receiver. Then click the OK button to immediately detach the current journal receiver and create a new receiver that is immediately attached to the journal. Review the online help information (click the Windows ? button and then click the Swap receivers text; this is the equivalent of context-sensitive help on a 5250 command screen, where you move the cursor to a particular keyword parameter and press F1 for help) to determine whether Swap receivers applies to your journaling environment.

In this example, we clicked the Receivers button to show you the panel in Figure 7-32. In our example, we have three online, but detached, receivers. The currently attached receiver is BUPJRA0002.

Figure 7-32 Journal receivers list and properties example

By selecting the BUPJRA journal receiver, the lower portion of the panel automatically displays the General properties of this receiver. We already selected the Entries tab information. Select an online detached journal receiver and click the Delete button to remove the journal receiver and its entries from the system when you no longer need this journaled information. When you click the New button, you see an Add Journal Receiver panel. Clicking the OK button makes any new or delete function permanent.

You may also find the journal receiver General, Entries, and Storage information in a separate Properties panel for a specific receiver by performing either of the following actions from the library panel:
- Double-clicking the journal receiver object
- Right-clicking the journal receiver object and selecting Properties

Working on locked rows
To gain access to this function, right-click a table. The Locked Rows dialog (Figure 7-33) displays the row number, job, user, job number, current user, status, and lock type for rows that have a row lock placed on them. A row lock is placed on a row when you read a table that is opened for update. While the row lock is in effect, no other job can read the same row for update, which keeps another job from unintentionally deleting the first job's update.
Figure 7-33 Locked Rows example

The Locked Rows panel allows you to perform various tasks:
- Check which jobs are locking which rows
- View the job log for a job
- View an SQL statement that is running or has run in the job
- Use the above SQL statement with the Run SQL Scripts center
- End a job that is listed (provided you have the right authority)

Since most of these tasks are rather intuitive, we only document how to link to the Run SQL Scripts center to investigate what is happening in the database. After you start the Locked Rows function, select the job (1 in Figure 7-33) you want to examine. Click the SQL Statement button on the right-hand side of the picture (2) to bring the statement into the bottom part of the panel (3). At this point, when you click the Edit SQL button (4), the Run SQL Scripts center starts, and the SQL statement is brought into it for you to use. Refer to “Running a single SQL statement” on page 211 for a discussion on how to use this tool. You should also refer to “Linking to the Visual Explain component” on page 212 to see how to use it to conduct database performance analysis.

7.3 Run SQL Scripts
The Run SQL Scripts center is a powerful interface to your iSeries database. With it, you can use SQL statements to perform any operation you are authorized to perform on iSeries database objects. The licensed program product 5722-ST1, DB2 Query Manager and SQL Development Kit for iSeries, is not a prerequisite for using Run SQL Scripts. This component of Operations Navigator uses JDBC to access the server.

To use SQL from Operations Navigator, right-click the Database component under the iSeries server that contains the data. Figure 7-34 shows the Database context menu with the Run SQL Scripts action highlighted.

Figure 7-34 Run SQL Script

Right-click Database to bring up the pull-down menu. Do not click Libraries, because Operations Navigator enables you to potentially access the entire system, rather than limiting you to just the data within a library. Figure 7-35 shows an example of the initial Run SQL Scripts panel.

Important: This component has been entirely rewritten in V5R1 using Java. It has an enhanced layout and also supports these new features:
- Result data is displayed in the same or a separate window via the Options menu
- Run SQL Statement icons
- Run SQL statement by double-clicking instead of single-clicking via the Options menu
You can run one or more of your entered your SQL statements in different ways and stop between statements. Before we discuss the run actions, refer to Figure 7-36 to see the different panels within the Run SQL Scripts function. 1 2 3 200 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 7-36 Run SQL Scripts window pane example The beginning of the list of provided SQL statements is shown at 1. This list was produced by clicking the down arrow (2). In this example, we do not select an SQL statement to be placed into the statement input area (3). However, if we selected one or more SQL statements in the window at 1, the statement or statements would appear in the “SQL statement example” area (4), and you could click the Insert button to place the statements into SQL input area (3). In the SQL statement input area (3), we already entered two simple SQL statements that are partially hidden. We separately ran the following SQL statements: select * from item_fact; select * from cust_dim; Then we viewed the results on a panel (not shown), prior to selecting the list of SQL statements (1). The Run History panel (5) shows you the success and any messages of the SQL statements run. When you select the Edit option from the menu bar, you have the option to clear run history information. Figure 7-37 includes the previous SQL SELECT statements. But, we added SQL statements to illustrate more of the power of DB2 Universal Database for iSeries accessible through Operations Navigator. Figure 7-37 also illustrates some of the run options for the SQL statements we showed under Run SQL Scripts support. 4 2 3 5 1 Chapter 7. Database administration 201 Figure 7-37 Run SQL Scripts: Additional sources Figure 7-37 at 1 shows that we did a select from a table that is in a different OS/400 library (pfrexp) than the libraries included in our job description’s initial library list. We did this by qualifying the table with pfrexp/. The slash separator character (/) is valid because we changed from the default SQL naming convention to the system naming convention. By default, we are running SQL statements on the system to which we are connected. The CONNECT SQL statement used to connect to a remote system as20 (“As20” in our Operations Navigator screen example figures), using OS/400 Distributed Relational Database Architecture (DRDA) over TCP/IP is shown at 2. Assuming this CONNECT statement is successful, all SQL statements thereafter are directed to remote system as20 until an SQL “release all” statement is issued, when the connection returns to access only the local As25 system. OS/400 supports connections to multiple remote systems during the same session. For example, following the statement shown at 4, you can issue a “connect to as05” statement. Assuming this is successful, all the following SQL statements are directed to system As05. You can then issue a “set connection to as20” statement that resets the current dialogue back to system As20. You need to keep track of which system (remote database) you are connected to and on which system you are performing operations. The next statement (3) selects the cust_dim table in the library tpstar01 on the remote system as20. 4 1 3 2 202 Advanced Functions and Administration on DB2 Universal Database for iSeries While we cannot go into the details of DDM/DRDA in this book, we discuss basic setup requirements for the DDM/DRDA example shown here to work over TCP/IP. Refer to 7.4, “Change Query Attributes” on page 217, for more information. 
A select statement that uses only some of the fields or columns in the cust_dim table and displays only the records or rows where the key field or the CUSTKEY column has a value of 1 or a value of 5 is shown at 4 in Figure 7-37. In a more complex data structure and performance critical environments, you would want to use a combination of the following options:  The Run SQL Scripts option to include query optimizer debug messages in the job log (see 7.3.4, “Run SQL Scripts Run options” on page 210)  SQL Performance Monitor support (see 7.6, “SQL Performance Monitors” on page 220)  Visual Explain (see Chapter 10, “Visual Explain” on page 301) By reviewing the job log, using Visual Explain or going through monitored data, you can determine if the most efficient method is used by OS/400 query support to perform the SQL function. 7.3.1 ODBC and JDBC connection Open Database Connectivity (ODBC) is a standard interface for database connectivity defined by the Microsoft Corporation. ODBC establishes the standard interface to any database as SQL. In general, the ODBC architecture accounts for an application using the ODBC interface, an ODBC Driver Manager, one or more ODBC Drivers, and an ODBC Data Source (place where the data is stored). Java Database Connectivity (JDBC) is an equivalent standard interface for database connectivity from Java applications. Client Access Express provides the iSeries ODBC and JDBC drivers that runs on the PC workstation and the ODBC and JDBC Data Source support that runs on the iSeries server. Production mode job name starts with QZDASOINIT (or QZDASSINIT if SSL is being used). In version 4, with ODBC Data Sources, you can set up a Client Access Express ODBC data source by providing a data source name (a name meaningful to you) and an iSeries server name. Starting in version 5, the setup and administration of Client Access-provided ODBC driver is done by using the standard ODBC data source administrator, provided with the Windows operating system. An ODBC data source consists of the data that the user wants to access and its associated operating system, Database Management System (DBMS), and network platform (if any) used to access the DBMS. Note: DRDA is the IBM-defined architecture for accessing remote databases. It is implemented on all IBM operating systems, and some non-IBM operating system databases support it. At a base set of functions level, it is similar to the ODBC and Java Database Connectivity (JDBC) set of capabilities. On IBM systems, Distributed Data Management (DDM) is a higher level interface to DRDA capabilities. Note: With our examples, each table index (set of key fields or columns) structure is relatively simple, and the number of rows is small relative to a million or more rows that would be present in a data warehouse environment. We also do not have complex join statements (columns joined together from two or more tables). Chapter 7. Database administration 203 Setup information is associated with a data source and may include, for example, data formatting and performance options. Data formatting options include qualified name separators, date and time formats, and data translation. Performance options include when to use record blocking, data compression, or an SQL Package. An SQL package stores previously parsed SQL statements to improve performance when used later. You can also specify if Secure Socket Layer (SSL) is to be used with the ODBC connection. 
Some client applications (including Operations Navigator) may provide their own unique data source definition. A good source for more information on ODBC support is Client Access Express for Windows, SC41-5509. You can create your own data source to limit the libraries that can be used and, as previously described, your own set of name separators, date and time formats, performance options, and so on. OS/400 provides two data sources that you should understand even if you are not creating your own data source:  A data source used by Operations Navigator itself to perform its functions: This data source is identified by the system name to which you are first connected. For example, if the first system you connect to is called As25, the data source used by Operations Navigator is named QSDN_As25.  A data source is used if you use Database-> Run SQL Scripts: The first time you select the action to Run SQL Scripts to a specific iSeries server, OS/400 creates a JDBC data source for the system (ODBC in V4R5 or previous releases), which can be changed by selecting Connections -> JDBC Setup (Figure 7-38). One JDBC data source is created for each system on which SQL scripts are run. You do not have to create your own JDBC Data Source and understand the data source parameters to run SQL statements against libraries and files or tables to which you are authorized. In 7.3, “Run SQL Scripts” on page 197, we use the default IBM-created data source in our JDBC Data Source Translation parameters. Important: Unless you are an ODBC expert, do not change any of the default settings for this data source. If you change them, Operations Navigator may fail to operate correctly. 204 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 7-38 JDBC data source panel Server tab Default libraries enables you to change the set of libraries available to the user of this JDBC data sources. The default (*USRLIBL) means to use the initial library list (INLLIBL) parameter specified on the job description for the OS/400 user profile using this JDBC data source. Commit mode controls the level of DB2 Universal Database for iSeries commitment control, including when database changes are considered permanent and whether other users of the same database rows can see column updates that are not yet permanent. A complete description of commitment control is beyond the scope of this redbook. However, you should understand that in the industry, users of SQL typically expect commitment control to be active. That is, an application design determines what a competed transaction (also called a unit of work) is. Any database row changes (column updates, rows deleted, rows inserted) are not considered permanent until a successful transaction has been completed (transaction boundary). At that time, the application performs a commit and all changes are now made permanent. If the application determines that an in-progress transaction should be terminated, it performs a rollback. All changes are as if they had never occurred. If the application abnormally terminates before issuing a commit or rollback, the underlying SQL support performs the rollback. To support commitment control on OS/400, you must also have the tables journaled and the job using these tables must issue a system operation that starts commitment control for the job. This system operation can be invoked by using the OS/400 Start Commitment Control (STRCMTCTL) command or be implicitly invoked by this parameter for values other than *NONE. 
A commit group refers to the rows that are in the process of being updated, deleted, or inserted. As the help text shows, objects referred to on COMMENT ON, CREATE, and similar statements are also part of this commit group. The commit or rollback applies to all of these rows and objects.

We include the help text here because the OS/400 default is *NONE, which is not what is generally expected in the industry. This provides a very flexible operating environment, such as letting other applications or users access the latest database changes. However, *NONE exposes the table rows, even while they are being processed by the properly authorized Operations Navigator user, to modification without a required database commit or rollback sequence to make any database changes permanent. For example, using *NONE means any valid SQL statement that changes column data has made a permanent change to the data. If the properly authorized Operations Navigator user mistakenly updates a column using a wrong value for a key, there is no rollback function available to undo the change to the wrong row. You need either a backup copy of the data or an OS/400 journal to recover the original data. The other commit values specify row locking rules (other applications are prevented from updating the same row) and the visibility of in-progress changes among applications accessing the same rows.

Package tab
This tab specifies whether extended dynamic support is enabled. Extended dynamic support provides a mechanism for caching certain dynamic SQL statements on the server. The first time a particular SQL statement is run, it is stored in an SQL package on the server. On subsequent runs of the same SQL statement, the server can skip a significant part of the processing by using information stored in the SQL package. By default, it is not enabled.

Performance tab
This tab allows you to set performance options.

Language tab
This tab allows you to specify language options.

Other tab
The Other tab allows you to set the access type and remarks source options for your connection.

Translation tab
In most cases, you never need to view or change the JDBC (or ODBC) data source translation parameters. This is because your application tables or files are typically stored using the Coded Character Set Identifier (CCSID) numeric value that stores the data according to your national language encoding. In these cases, any OS/400 data accessed by the client workstation is translated into the appropriate ASCII format as required for viewing or processing on the client.

However, certain OS/400 system files or tables are defined to use the special CCSID 65535. By default, JDBC data source processing does not translate data from a file or table with CCSID 65535. For example, if you want to use Run SQL Scripts against the performance collection files (prefix QAPM...) or a table generated from a virtual private network (VPN) journal (copied to a database file or table), you need to have the character columns translated in most cases. Select the JDBC data source Translation tab and select the Translate CCSID 65535 check box. For more information on CCSID support, refer to AS/400 National Language Support, SC41-5101.

Format tab
There is an important operational difference between using the SQL naming convention and the system naming convention when running SQL statements under Operations Navigator Run SQL Scripts.
If you are using the system naming convention and use a non-qualified name, such as a table name with no library qualifier, the system searches for the table within all libraries currently in the session’s (job’s) current library list. If you are using the SQL naming convention, the ANSI standard specification causes the system to look only in the current library within the session’s current library list.

For example, assume the user portion of the session’s library list is in the order of library TEAM02, followed by library TPSTAR02. Also, assume the unqualified table name is CUST_DIM and the table is stored in library TPSTAR02. Using the SQL naming convention, the system looks for CUST_DIM only in library TEAM02 and does not find it, which results in an error condition. Using the system naming convention, the system first searches library TEAM02 and then library TPSTAR02. The CUST_DIM table is found, and the SQL statement runs successfully.

Format parameters are important if you have a special operating environment, such as your system requiring country-specific or multiple-country support. You must review the online help text to get the details for all of these parameters. The settings are determined by your requirements. If you want to modify either data source, refer to the online help or consult Client Access Express for Windows, SC41-5509.

7.3.2 Running a CL command under SQL script
In addition to running SQL statements under Run SQL Scripts, Operations Navigator allows the properly authorized user to run any OS/400 Control Language (CL) command that can be validly run in a batch (no 5250 workstation required) environment. You must precede the OS/400 command syntax with the prefix CL: (uppercase or lowercase), as shown in Figure 7-39.
Once you see a view of the data you want to use repetitively, you can save the SQL statements for later reuse or copy the SQL statements into a program that does further processing or graphical display. This section uses the journal data discussed in the “AS/400 VPN problem determination,” chapter in AS/400 Internet Security: Implementing AS/400 Virtual Private Networks, SG24-5404. We performed the following steps to query the VPN logging data originally placed into the QIPFILTER journal. The query results show the journal entries for packets that have been denied routing, since a large number of deny entries may require further investigation by your security personnel. 1. Create a copy of the IBM-supplied file QSYS/QATOFIPF into a library of your choice, using the OS/400 Create Duplicate Object (CRTDUPOBJ) command, for example: CRTDUPOBJ OBJ(QATOFIPF) FROMLIB(QSYS) OBJTYPE(*FILE) + TOLIB(mylib) NEWOBJ(myfile) Tips for running CL in Run SQL Scripts Running SQL Scripts is a powerful way to test new SQL statements, especially in the sequence you may want to run them in a program. In an actual application environment, you may also want to integrate running system functions through CL commands with your SQL statements. Here are some tips:  Starting with Client Access Express Service Pack 5 (SP5) for V4R4, the following restriction has been removed: For the CL command to be recognized successfully, you must remove (delete) any comment statement, such as: "/* Enter one or more SQL statements separated by semicolons */."  The IBM-supplied SQL statement examples include some CL command examples at the end of the SQL statements.  The key to making the OS/400 command work from an Operations Navigator Run SQL Scripts session is to ensure the objects referenced in the command can be found in the Operations Navigator session’s (job’s) library list or the system library list (system value QSYSLIBL). Adding a library name under the Database->Libraries branch does not carry over to the Run SQL Scripts function. OS/400 commands can always be found through the system value QSYSLIBL. However, objects, such as user-defined commands, may require the appropriate library to be in the Operations Navigator Run SQL Scripts session’s library list. Use Connection -> JDBC Setup to amend the user part of the library list. Chapter 7. Database administration 209 The system file or table QATOFIPF provides the column definitions used by the IBM-supplied queries. In our example, we duplicate this table as ON_IPFTRT. 2. Use the DSPJRN command to copy the journal entries from the QUSRSYS/QIPFILTER journal to the output database file created in the preceding step: DSPJRN JRN(QIPFILTER) JRNCODE(M) ENTTYP(TF) OUTPUT(*OUTFILE) + OUTFILFMT(*TYPE4) OUTFILE(mylib/myfile) ENTDTALEN(*CALC) The DSPJRN command has both starting and ending time-stamp values and starting and ending journal entry sequence numbers so you do not need to copy the entire set of journal entries to the file or table. 3. You need to review the field or column names and descriptions for file or table ON_IPFTRT to determine which columns to select and use for row selection. You may use the OS/400 Display File Field Description (DSPFFD) command or use Operations Navigator to display the table Properties by right-clicking the table name. AS/400 Internet Security: Implementing AS/400 Virtual Private Networks, SG24-5404, provides good background information to help select the appropriate fields or columns. 4. 
Using Run SQL Scripts, build the SQL statement and view the results. Figure 7-40 shows our example SQL statement and sample output. Figure 7-40 Run SQL Scripts: Viewing ‘denied’ VPN packets The TFACT (filter action) column (1 in Figure 7-40), records values such as PERMIT, DENY, or additional values for adding and changing filter rules and starting and stopping filtering. You also see our SQL compare value for ‘DENY’. You can see that we did not want to look at all (13,000) journal entries, so we started around the middle of the entries with journal entry sequence number 6300 (2 in Figure 7-40). 1 3 4 1 2 210 Advanced Functions and Administration on DB2 Universal Database for iSeries The TFPDIR (packet direction) column specifies “O” for output packet and “I” for input packet. Using the source IP address and port number (3) and the destination IP address and port number (4), a TCP/IP expert can determine the actual workstation and TCP/IP function. A TCP/IP expert may also choose different columns to include in the SQL SELECT statement. 7.3.4 Run SQL Scripts Run options This section explains the Run options available for these SQL statements. We use Figure 7-41 as a basis for explaining the run options. Figure 7-41 Run SQL Script: Run options There are two “selection lists” types from which you can choose to run one or more SQL statements at a single time. You can select the Run option (1) from the Run SQL Scripts menu bar or select one of the green arrow or hour glass Run action icons (2) from the toolbar. These have corresponding functions. You can also select the Run option with a key sequence as shown under the Run pull-down menu. You can pre-specify (defaults are provided) some controls over the Run function through the Options action in the menu bar (6). We discuss these controls in “Controlling SQL run options” on page 213 after we explain the three levels of run options:  Running a single SQL statement  Running a set of SQL statements  Running all SQL statements currently specified 3 B C A 4 1 2 5 D E F 123 D E G 6 Chapter 7. Database administration 211 Running a single SQL statement Place the active screen cursor within the SQL statement text you want to run, for example: select * from pfrexp/cstfil; This is referenced as 4 in Figure 7-41. You can run only this statement by using one of the following actions:  Click the Selected action (C).  Click the “select one line” or “select one line hour glass” icon associated with C in our example in Figure 7-41.  Press Ctrl+Y from the workstation keyboard. Only the single statement will run. If it is a SELECT statement, the results are presented as a window on your Operations Navigator workstation. The column names are presented as column headings. If you want to select only a subset of columns later, you can use these headings and displayed column data to help you select the appropriate columns. Figure 7-42 shows some of the column headings and associated data for the pfrexp/cstfil table. Figure 7-42 Run SQL Script: Sample SQL SELECT output Running a set of SQL statements You can run a set of SQL statements that are currently active in your Operations Navigator session to the iSeries server. Using our example in Figure 7-41, you would run: select * from pfrexp/cstfil; 4 through SELECT CUSTKEY ... IN(1,5); 5 You do this by placing the active screen cursor within the SQL statement text (4) and performing one of the following actions:  Click the From Selected action (B).  
Click the From Selected icon (the middle down arrow or the middle hour glass) associated with 2 in our example in Figure 7-41.  Press Ctrl+T from the workstation keyboard. This runs each statement sequentially, beginning with: select * from pfrexp/cstfil; 212 Advanced Functions and Administration on DB2 Universal Database for iSeries We have three SELECT statements in our example. For each SELECT statement, a window of data is presented; all three windows are produced. However, if the SELECTs are fast enough, you may notice only the last SELECT output. The three windows are active on the screen and can be viewed by selecting the appropriate task from the windows task bar, typically at the bottom of a window. If an error occurs and a Stop on error option is selected (as specified under the Options pull-down menu (6 in Figure 7-41), the program stops and the statement where the error occurred remains selected. The statement is ready to be run after it is corrected. Running all SQL statements currently active You can run sequentially all the SQL statements that are currently active in your session to the iSeries server. Using our example, this would start with select * from cust_dim; 3 through SELECT CUSTKEY ... IN(1,5); 5 You run all the SQL statements by doing one of the following tasks:  Click the All action (A).  Click the All icon (leftmost down arrow or leftmost hour glass) associated with 1 in our example in Figure 7-41.  Press Ctrl+R from the workstation keyboard. If an error occurs and a Stop on error option is selected (as specified under the Options pull-down menu (6 in Figure 7-41), the program stops, and the statement where the error occurred remains selected. SQL statement syntax check Using this option (G in Figure 7-41), it is possible to validate a selected SQL statements or statements. This function performs a formal syntax check of the statement, while validating that the objects referenced (libraries, tables, columns) actually exist in the linked database. Resulting messages appear in the result panel. This option can also be invoked by pressing Ctrl+K after selecting an SQL statement. Linking to the Visual Explain component In V4R5, two more icons (D and E in Figure 7-41) were added to the Run SQL Script tool bar. These icons provide access to the Visual Explain function, as do the two new menu items (D and E) under Visual Explain. For more information, refer to Chapter 10, “Visual Explain” on page 301. The Explain option (D), or using Ctrl+E, allows you to review the Optimizer access plan that will be used when executing an SQL statement; the statement is not actually run but optimized with the query attribute Time Limit set to 0. For details on query attributes, see 7.4, “Change Query Attributes” on page 217. It produces a visual explanation of the statement but does not access the actual data from the database, therefore avoiding the unnecessary I/O load. The Run and Explain option (E), or using Ctrl+U, runs the SQL statement and gathers execution time statistics from the statement. It uses the actual access plan from the statement and the statistics and presents these in a graphical format. With this option, the statements are executed before the analysis graph is reported. Chapter 7. 
Database administration 213 Linking to the SQL Performance Monitor component Using the Recent SQL Performance Monitors option (F) under Visual Explain in Figure 7-41 on page 210, you can obtain a list of the most recent SQL Performance Monitor collections and can then link into the tool to analyze collected data. See 7.6, “SQL Performance Monitors” on page 220, for a discussion on the characteristics and usage of this tool. Controlling SQL run options By selecting Options from the Run SQL Scripts menu bar (6 in Figure 7-41 on page 210), you can control what to do if an SQL error occurs and what levels of additional information should be included in your session to the iSeries server:  Stop on Error: This turns stopping on or off when there is more than one SQL statement to run and an error occurs. If it is turned on (default), the SQL statements are stopped at the SQL statement in error, which remains selected. If it is turned off, all SQL statements continue to run until the end of the script has completed.  Smart Statement Selection: This turns on or off treating the selected SQL statement as a complete statement or attempting to run only the selected text. If it is turned on (default), the complete statement, up to the ending semicolon (;) character, is attempted. If it is turned off, only the selected text is attempted. If you attempt to run only a portion of the original statement, the statement may complete successfully. However, you are subject to at least two error conditions: – Omitting some text may make the SQL statement fail, because the statement is incomplete. – Omitting some text may still result in successful completion. However, if the JDBC data source used for your session is set to *NONE for commitment control, omitting a phrase from an UPDATE statement, such as WHERE CUSTKEY = 1, may update all the rows in the table, which is not what was intended. See 7.3.1, “ODBC and JDBC connection” on page 202, for additional information about commitment control. The most complete OS/400 documentation on commitment control is in Backup and Recovery, SC41-5304.  Include Error Message Help in Run History: This turns on or off (default) the inclusion of additional error message information in the Run History pane when an error occurs. Figure 7-43 shows an example where we specified an invalid column name (WRONGCOL) for the table. Note: This option is no longer available in Operations Navigator Run SQL Scripts in V5R1. Detailed error messages are always displayed in the Messages tab in the bottom frame. Figure 7-43 Run SQL Scripts: Include Error Message Help in Run History  Include Debug Messages in Job Log: This option tells the OS/400 query optimizer support to record its decisions on how to process the SQL request, including any recommendation for creating an index that may improve performance. The option is typically used only when debugging new and complex SQL statements or while analyzing a suspected performance problem. Analyzing the job log messages may be sufficient to determine whether a performance problem exists and what action should be taken to resolve the problem. You may also consider using the Operations Navigator interface to the SQL Performance Monitor, which is described in 7.6, “SQL Performance Monitors” on page 220, and Visual Explain, described in Chapter 10, “Visual Explain” on page 301.
Figure 7-44 shows an example of an SQL JOIN statement and the associated job log messages that should be reviewed. Chapter 7. Database administration 215 Figure 7-44 Run SQL Scripts: Include Debug Messages in Job Log We use the selected SQL SELECT with JOIN statement (1) to show the associated job log debug messages issued by the query optimizer. To see the current Operations Navigator session’s job log, complete these tasks: a. Click View in the Run SQL Scripts panel. b. Click Job Log (2). In our example job log, we discuss two messages: the optimizer’s suggestion for an access path (index) to file ITMFIL with message ID CPI432F (3) and error message CPI433A (4). By double-clicking message CPI432F, the message details or “second-level text” is displayed. The message text describes why the create index function is recommended and the recommended column names to include in the new index. Message CPI433A may appear multiple times in the job log of a job that has run several SQL statements. Each time an SQL statement is run, the system looks for a file or table by the name of QAQQINI in the QUSRSYS library. This table can be set up by you to specify query attributes that the OS/400 query optimizer will use while processing each SQL statement. If you are not attempting to modify the default OS/400 query processing algorithm through this table, the table will not be in the QUSRSYS library, and this message is considered for information only. 3 4 2 1 216 Advanced Functions and Administration on DB2 Universal Database for iSeries  Run Statement On Double-Click: This option has been added in V5R1. When it is turned on, it allows the running of a SQL statement by double-clicking the SQL statement.  Change Query Attributes: This allows you to easily modify the query options file QAQQINI for your job, provided you remember the job number previously checked in the job log, or for any other job in the system. This is done using the same interface as documented in 7.4, “Change Query Attributes” on page 217. 7.3.5 DDM/DRDA Run SQL Script configuration summary Using Figure 7-37 on page 201, at 2, we showed and discussed an SQL CONNECT statement (“connect to as20;”) to access data on a remote system. For ease of reference, this statement also appears in Figure 7-44. This section provides overview information on configuration parameters required to successfully access remote data. For DRDA to work between a source system (function requester) and target system (request server) where the actual data is and the SQL function is performed, you need a certain DRDA configuration to be set up correctly. The following steps summarize the configuration required (using As25 as the source or requester system and As20 as the target or server system). On the As25 (source system), complete the following steps: 1. Start TCP/IP. 2. Enter the OS/400 Add Relational Database Directory Entry (ADDRDBDIRE) command: ADDRDBDIRE RDB(AS20) RMTLOCNAME(AS20 *IP) TEXT('Remote DB system via TCP/IP’) This relational database entry identifies a database name (RDB parameter), the remote system name, and that the connection is over TCP/IP. TCP/IP must be active on both the source and target systems. A Domain Name Services (DNS) server must be active in the network to resolve to the actual IP address. Note that DRDA runs over SNA connections as well as TCP/IP. 3. 
Enter the Add Server Authentication Entry (ADDSVRAUTE) command: ADDSVRAUTE USRPRF(TEAM02) SERVER(AS20) PASSWORD(T02EAM) The SQL CONNECT TO target system (remote server)-database statement can explicitly specify USER (user ID) and USING (password) information. If it does not, the user ID and password information specified in the ADDSVRAUTE command are passed to the remote server. Depending on the target system’s (remote server) security requirements for clients to connect to it, a user ID and, optionally, a user password are required that must be successfully validated on the remote server. We strongly recommend that you enter the user profile, server name, and password values in uppercase. 4. To specify a password value for the ADDSVRAUTE command’s PASSWORD parameter, the source system Retain server security data (QRETSVRSEC) system value must be set to 1. On the As20 target (remote server) system, follow these steps: 1. Start TCP/IP. 2. Start the TCP/IP DDM server. Note: To use ADDSVRAUTE support, your user profile must specify *SECADM special authority. You must also have *OBJMGT and *USE authorities to the user profile specified on this command. Chapter 7. Database administration 217 The DDM server jobs run in subsystem QSYSWRK. The jobs are named QRWTLSTN (daemon) and QRWTSRVR (server, one per connection). The network attributes’ DDM/DRDA Request (DDMACC) parameter for processing received DDM/DRDA requests is set to *OBJAUT. This means normal OS/400 processing user profile authority to the requested file or table is performed. This target or server system can be configured to not require a password from the source system. You do this by using the OS/400 Configure TCP (CFGTCP) command interface. Then select Configure TCP/IP applications->Change DDM TCP/IP Attributes. 3. A target system user ID and password must correspond to the user ID and password, if used, received from the requesting source system. 7.4 Change Query Attributes Using the Change Query Attributes item that becomes available when you right-click Database gives you an easy way to change your query options for accessing the database. However, you must be aware that some of the options that are available here can be manipulated using the Change Query Attribute (CHGQRYA) CL command on the iSeries server. There is not an exact one-to-one correspondence. For a detailed discussion on the implications of changing query attributes, refer to the manual DB2 UDB for iSeries Database Performance and Query Optimization in the iSeries Information Center. You can access the Change Query Attributes panel in two ways:  From Operations Navigator, right-click Database and select Change Query Attributes.  From the Run SQL Scripts center, select Options and Change Query Attributes. You then see the Change Query Attributes dialog as shown in Figure 7-45. Figure 7-45 Change Query Attributes panel Now proceed with the following steps: 1. In the upper part of the window (1), you see a list of all jobs currently active in the system. Scroll through the list to locate the job you are interested in, click on its name, and use the 1 3 4 5 2 218 Advanced Functions and Administration on DB2 Universal Database for iSeries Select button (2) to move the selection in the bottom part of the panel (3). You can select more than one job and set common query attributes for all of them at the same time. 2. At this point, you can specify a library (4) in which you want the original QAQQINI file to be copied. 
Click the Open Attribute button (5), which allows you to edit the copy of QAQQINI you just made in the library (4), as shown in Figure 7-46. Figure 7-46 Editing QAQQINI 3. Click the cell you want to change and type the new value. As shown in Figure 7-46, we change the setting for MESSAGES_DEBUG from the original value *DEFAULT to *YES, therefore, stating that we want debug messages to be recorded for the selected job. When you press Enter to activate your changes, you receive a warning message (Figure 7-47). 4. Click Yes. Figure 7-47 Warning message on modification of the QAQQINI file 5. You are brought back to the Change Query Attributes panel. Click OK to make the change effective. The options that are currently managed in the QAQQINI file and their values are documented in DB2 UDB for iSeries Database Performance and Query Optimization. 7.5 Current SQL for a job You can use this function to select any job running on the system and display the current SQL statement being run, if any. Besides displaying the last SQL statement being run, you can edit and rerun it through the automatically linked Run SQL Scripts option and display the actual job log for the selected job or, even end the job. This can also be used for database usage and performance analysis, with the Visual Explain tool documented in Chapter 10, “Visual Explain” on page 301. To start it, right-click the Database item in Operations Navigator and select Current SQL for a Job. You are presented with the dialog shown in Figure 7-48. Chapter 7. Database administration 219 Figure 7-48 Current SQL for a Job The Current SQL window displays the name, user, job number, job subsystem, and current user for the available jobs on your system. You can select a job and display its job log, the SQL statement currently being run, if any, decide to reuse this statement in the Run SQL Scripts center, or even end the job, provided you have sufficient authority. In our example, we selected an ODBC job (1) and displayed the last SQL statement it ran in the bottom part of the panel (2) using the SQL Statement button (3). To go to its job log, we would use the Job Log button (4). After the SQL statement is brought in the bottom part of the panel, it is possible to use the Edit SQL button (5) to work on this same statement with the Run SQL Scripts center that was previously documented in this redbook. See Figure 7-49. Figure 7-49 Working with current SQL for a job From here, it is also possible to link into Visual Explain, using the appropriate menu item or the icons (1) to help you with database performance analyses. For a discussion on this tool, refer to Chapter 10, “Visual Explain” on page 301. As you may have already noticed, all Operations Navigator database tools are tightly integrated into each other to make it easier for the user to fully exploit their capabilities. 3 2 5 1 4 1 220 Advanced Functions and Administration on DB2 Universal Database for iSeries 7.6 SQL Performance Monitors You can analyze the performance of iSeries SQL statements by putting the appropriate OS/400 job into debug mode, running the SQL statements, and viewing the query optimizer messages in the job log. You can see an example of using job log messages in “Controlling SQL run options” on page 213. This section describes a more powerful SQL performance analysis tool that initially appeared in V4R4 Operations Navigator and was further enhanced in V4R5. 
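For reference, the traditional debug-mode approach mentioned above amounts to a short 5250 sequence along the following lines (a sketch only; STRSQL is simply one convenient way to run the statement interactively):

STRDBG UPDPROD(*YES)
STRSQL        (run the SQL statement to be analyzed)
DSPJOBLOG

The SQL Performance Monitor described in this section goes well beyond those job log messages.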
This support provides a graphical interface to IBM-provided SQL queries against data collected by the Memory Resident Database Monitor that was introduced in V4R3. In addition to output equivalent to the debug mode optimizer messages, this monitor can monitor multiple jobs and show the actual SQL statement. This interface is referred to as the SQL Performance Monitors. The SQL Performance Monitor, which was originally available in V4R4, only allowed gathering summary performance information from the Memory Resident Database Monitor. In V4R5, it is possible to enhance the usability of this interface by collecting detailed performance information. For a detailed discussion on the Memory Resident Database Monitor, refer to DB2 UDB for iSeries Database Performance and Query Optimization. Before you start an SQL Performance Monitor, you need to determine which job or jobs you want to monitor. There are several techniques you can use to determine the job. We list some of them here:  If you are using SQL statements running Operations Navigator Database-> Run SQL Scripts, you can click the View option from the menu bar. On the drop-down menu that appears, click Job Log to see your current job’s job log. Included in the gray header portion of the job log messages is the name of the job, for example, 139224/QUSER/QZDASOINIT. You can scan down to the earliest job log messages to confirm this job is actually running under the user profile you think it should be.  If you are not running the job that needs to be monitored, you can get the job name from the user of the job, if possible.  If you know the user profile running the SQL jobs, but do not know which job is the one you want to monitor, you can use the OS/400 Work with Object Locks (WRKOBJLCK) command to find the jobs running with that user profile. You may receive more jobs than you anticipated. Then, you may need to look in the job logs of each job for some SQL-like messages to determine which job or jobs to monitor, for example: WRKOBJLCK OBJ(QSYS/TEAM02) OBJTYPE(*USRPRF) MBR(*NONE) This command resulted in five jobs running with user profile TEAM02: one job name starting with QPADEV000L (5250 emulation), two jobs running Client Access Express database serving with job name starting with QZDASOINIT (not using SSL), and two jobs with the job name starting with QZRCSRVS (central server functions). We looked in the job logs for the two QZDASOINIT jobs and in one of them found the message: 148 rows fetched from cursor CRSR0002. This QZDASOINIT job was set by Operations Navigator Run SQL Scripts to Include debug messages in a job log.  You can use the Operations Navigator server jobs interface to find the job by selecting from Operations Navigator Network->Servers-> Client Access. Then right-click Database and select Server Jobs to view the Client Access Express servers (circled in Figure 7-50). Chapter 7. Database administration 221 Figure 7-50 Finding the database server job (Part 1 of 2) When you click Server Jobs, a window appears similar to the one shown in Figure 7-51. This display shows the database server jobs, QZDASOINIT (not using SSL), that are currently started and shows a current user ID for jobs currently doing active database functions. 
Figure 7-51 Finding the database server job (Part 2 of 2) This display illustrates an advantage of using the Operations Navigator “servers” support to find a job, compared to using OS/400 5250-display based commands such as the Work with Subsystem Jobs (WRKSBSJOB), Work with Active Jobs (WRKACTJOB), or Work with Object Locks (WRKOBJLCK) commands. 222 Advanced Functions and Administration on DB2 Universal Database for iSeries The Operations Navigator interface lists the jobs based on their function. With the OS/400 commands, you need to understand what OS/400 subsystem the server jobs run in and the job name that identifies the server function. In our example, you need to know that the QZDASOINIT jobs do the database serving (in this case ODBC-based) work. You also need to look into the job logs of each active job to find the actual user ID (profile) using the job. The OS/400 commands we discussed show equivalent jobs with the user ID as QUSER. QUSER is the user profile assigned by the system for pre-started Client Access database server jobs. The user profile name actually using the job is indicated in a job log message. The Operations Navigator interface examines the job log messages and shows the active user profile (TEAM02, in our example) if the pre-started job is currently in session with a signed on client. 7.6.1 Starting the SQL Performance Monitor To run an SQL Performance Monitor, you need to: 1. Define a new monitor. 2. Determine whether it’s going to be a Detailed collection or a Summary collection. 3. Specify the jobs to be monitored and the data to be collected for a Summary collection. The Detailed collection is discussed later in “Detailed SQL Performance Monitor example” on page 226. Summary SQL Performance Monitor example To start the SQL monitoring process, follow these steps: 1. Right-click SQL Performance Monitors, and select New as shown in Figure 7-52. Figure 7-52 SQL Performance Monitor (Part 1 of 5) 2. Select Summary. This brings up the New SQL Performance Monitor dialogue panel with three tabs: General, Monitored Jobs, and Data to Collect. Chapter 7. Database administration 223 The General tab is shown in Figure 7-53. Figure 7-53 Starting a Summary SQL Performance Monitor (Part 2 of 5) We entered the monitor name, the library name that is used to contain the collected data, and the amount of main storage allocated to the monitoring process. Do not click the OK button yet. Monitoring all jobs will start if you have not selected specific jobs under the Monitored Jobs tab. Monitoring all jobs is not recommended on a system with hundreds of active jobs because the monitoring process can degrade performance. 3. To specify which OS/400 jobs to manage, click the Monitor Jobs tab, which brings up the panel shown in Figure 7-54. Figure 7-54 Starting a Summary SQL Performance Monitor (Part 3 of 5) 2 1 224 Advanced Functions and Administration on DB2 Universal Database for iSeries 4. You can select to monitor all jobs or to select jobs from the Available jobs list pane (1 in Figure 7-54). After you select a job and click the Select button, the job information is entered into the Selected jobs list pane (2 in Figure 7-54). You remove selected jobs by selecting a job in the Selected jobs pane and clicking the Remove button. In this example, we scrolled down the active job names to display the ones shown in 1. We select to monitor only job QZDASOINIT/QUSER/023247 with PORTERL as the current user. 
We recommend that you monitor as few jobs as possible, because monitoring a large number of active jobs could impact normal productivity. 5. When you are finished selecting jobs, click the Data to Collect tab. This brings up the panel shown in Figure 7-55. Figure 7-55 Starting a Summary SQL Performance Monitor (Part 4 of 5) 6. This panel shows three sets of SQL monitor data collected during every monitor collection period. You can specifically include other sets of data or simply click the Select All button. You should select all, unless you understand the application implementation in detail so that you need to collect only specific information. When you are satisfied with your monitor collection specification, click the OK button to return to the original SQL Performance Monitor window, which shows the monitor status on the right pane in Figure 7-56. Chapter 7. Database administration 225 Figure 7-56 Starting a Summary SQL Performance Monitor (Part 5 of 5) In our example, we used Run SQL Scripts to run the SQL statement. This statement has a relatively complex WHERE clause as shown in Figure 7-57. Run SQL Scripts is discussed in more detail in 7.3, “Run SQL Scripts” on page 197. Figure 7-57 SQL Performance Monitor: SQL Statement which was monitored Operations Navigator Run SQL Scripts support uses JDBC support. In our example figure, the SQL statement was already run based on the evidence of its appearance within the Run History pane. The message Opening results viewer... indicates the results of the SQL select statement has already been displayed to the Operations Navigator user. 226 Advanced Functions and Administration on DB2 Universal Database for iSeries The SQL Performance Monitor can monitor all SQL work performed on OS/400. In addition to Operations Navigator Run SQL Scripts jobs, other users of OS/400 SQL support would include a client workstation Visual Basic program accessing the OS/400 via ODBC, a client workstation Java applet accessing the OS/400 via Java Database Connectivity (JDBC), a local iSeries program using embedded SQL in the RPG, COBOL, or C program, a local iSeries program using the SQL CLI (Call Level Interface) in RPG, COBOL, C, or Java. OS/400 also has a 5250 workstation-based SQL interface running under the Start SQL (STRSQL) command. Detailed SQL Performance Monitor example To start the SQL monitoring process, right-click SQL Performance Monitors, and select New. Select Detailed as shown in Figure 7-58. Figure 7-58 Starting a Detailed SQL Performance Monitor Name the monitor. Select a library for the collected data and proceed to select the jobs you want to monitor as previously documented in “Summary SQL Performance Monitor example” on page 222. 7.6.2 Reviewing the SQL Performance Monitor results The SQL Performance Monitor statistics are kept in main storage for fast recording, but need to be written to database files to use the Operations Navigator interface to review the results. You can have the statistics written to database files by either pausing or ending the monitor. Right-click the active SQL Performance Monitor. A pop-up window appears that lists the Pause, End, and other monitor actions as shown in Figure 7-59. Chapter 7. Database administration 227 Figure 7-59 Managing the SQL Performance Monitor The possible managing functions are:  Pause: This stops the current collection of statistics and writes the current statistics into several database files or tables that can be queried by selecting the Analyze Results action. 
The monitor remains ready to collect more statistics, but requires the Continue action to restart collection.  Continue: This restarts the collection of statistics for a monitor that is currently paused.  End: This stops and ends the monitor and writes the current collection of statistics to the database files or tables.  Analyze Results: This brings up a window with three tabs for selecting ways to look at (query) the collected statistics in the database files or tables: – Summary Results – Detailed Results – Composite View  List Explainable Statements: This opens a dialog listing the SQL statements for which the detailed SQL Performance Monitor has collected data and for which a Visual Explain diagram can be produced. See “Listing Explainable Statements” on page 231 for an example.  Properties: This brings up a window with three tabs that represent the original monitor definition: – General – Monitored Jobs – Saved Data An example of the Saved Data tab with the details for our monitor is shown in Figure 7-60. 228 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 7-60 SQL Performance Monitor: Properties The SQL Performance Monitor file name numeric suffix is updated when each monitor is started. Analyzing a summary of SQL Performance Monitor results OS/400 provides many pre-defined queries to view the recorded statistics. You can select these queries by checking the various query types on the Analyze Results panels. To begin viewing the results, right-click the paused or ended monitor. Select Analyze Results from the pop-up window. Here we analyze results for a Summary Monitor. Figure 7-61 shows the first results panel that groups queries according to three tabs:  Summary Results  Detailed Results  Composite View Chapter 7. Database administration 229 Figure 7-61 SQL Performance Monitor: Analyze Results - Summary results You can select individual queries or use the Select All button. After you select the queries you want to run, click the View Results button. You can even choose to modify the pre-defined queries and run the new queries by clicking the Modify Selected Queries button. An in-depth discussion of using the SQL Performance Monitor results to improve performance is beyond the scope of this redbook. However, we do show sample query results output. To obtain the query results shown in Figure 7-63, you must first select the Detailed Results tab on the Performance Monitor Results window shown in Figure 7-61. This brings up the Detailed Results panel shown in Figure 7-62. 230 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 7-62 SQL Performance Monitor: Detailed Results You can select individual detail query reports, select all queries, and even modify the provided queries. When you are finished selecting the queries you want, click the View Results button. The OS/400 query optimizer support includes an Index Advisor function. This support includes, when appropriate, a recommendation that a new index should yield improved performance. Columns that should be used in the index are listed. To view this detailed information, you must first select to view Arrival Sequence Information (1 in Figure 7-63). Click the View Results button to access a panel similar to the one shown in Figure 7-64. Figure 7-63 SQL Performance Monitor: Table Scan Information To view the information in Figure 7-63, we had to scroll to the right to find the columns Advised Index and Advised Index Keys (1). 
You can see that we compressed several columns in the results to make the index path information fit within the window (2). A lab exercise can be downloaded to your iSeries server on a PC workstation as listed in 7.1.1, “New in V5R1” on page 159. The “Self study lab” can be used to familiarize yourself with the power of the SQL Performance Monitor, as well as most of the Operations Navigator Database support. It also includes tips on tuning SQL performance. Analyzing a detailed SQL Performance Monitor Most of the discussion in “Analyzing a summary of SQL Performance Monitor results” on page 228 also applies to a detailed monitor. 2 1 Chapter 7. Database administration 231 Figure 7-64 shows the first results panel that groups queries according to three tabs:  Summary Results  Detailed Results  Extended Detailed Results Figure 7-64 Detailed SQL Performance Monitor: Analyze Results - Detailed Monitor For each of these options, you can run any of the pre-prepared queries or modify them for your own analysis. Although the items listed under the Detailed and Extended Detailed Results tabs have the same names and descriptions, the underlying queries are different. The Extended ones allow you a more complete understanding of the Optimizer choices. You can easily verify this by selecting the same item in both lists and clicking the Modify Selected Query button to have the SQL statement opened with the Run SQL Script center. Listing Explainable Statements The Explainable Statements for SQL Performance Monitor dialog lists the SQL statements for which an SQL Performance Monitor has collected detailed data and for which a Visual Explain graph can be produced. To access this function, click Operations Navigator->Database->SQL Performance Monitors. Then you see a list of the SQL Performance monitors that are currently on the system. Right-click a detailed SQL Performance Monitors collection and select List Explainable Statements, as shown in Figure 7-65. 232 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 7-65 List Explainable Statements When you select this item, you see the panel in Figure 7-66. The upper half of the panel displays the SQL statements monitored during the data collection session. Click to select the statement (1) you are interested in analyzing. The selected statement appears in the lower half of the dialog. Once the statement is in focus, it is possible to have it analyzed and explained. Click the Run Visual Explain button (2). Refer to Chapter 10, “Visual Explain” on page 301, for a detailed discussion on this tool. Figure 7-66 Using List Explainable Statements As you may have already noticed, the set of database analysis options and tools provided by Operations Navigator are well interconnected and meant to be used together in an iterative fashion. 2 1 Chapter 7. Database administration 233 7.6.3 Importing data collected with Database Monitor It is possible to import Database performance data collected with the more traditional green-screen interface tool, known as Database Monitor. You can start Database Monitor by using either the STRDBMON or STRPFRMON STRDBMON(*YES) CL commands on the iSeries server. While it is beyond the purpose of this redbook to discuss the traditional collection methods, you will certainly be pleased to discover that this data can be analyzed with the simpler and more intuitive instruments made available with Operations Navigator, rather than with the traditional approach. 
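Before walking through the detailed example, note that the collection side of this flow simply brackets the workload of interest with Database Monitor; a minimal sketch for a specific job (the library, file, and job identifiers shown here are placeholders) is:

STRDBMON OUTFILE(mylib/dbmondata) JOB(nnnnnn/USER/JOBNAME) TYPE(*DETAIL)
    (run the queries to be analyzed)
ENDDBMON JOB(nnnnnn/USER/JOBNAME)

The resulting file is then imported into an SQL Performance Monitor as described next; the step-by-step example later in this section shows the same flow for the current job.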
To import data collected with Database Monitor, select Database->SQL Performance Monitor->Import as shown in Figure 7-67. Figure 7-67 Importing data in SQL Performance Monitor (Part 1 of 2) A panel appears like the example in Figure 7-68. On this display, you give a name to new monitor and specify the library and file containing the collected data. Figure 7-68 Importing data in SQL Performance Monitor (Part 2 of 2) 234 Advanced Functions and Administration on DB2 Universal Database for iSeries Due to some new fields being added to the Database Monitor file in V5R1, SQL Performance Monitor only fully supports importing and analyzing database performance data collected in V5R1. Data collected from earlier releases will not have all of the information needed by Visual Explain. The system imports data from earlier releases and converts the data to a V5R1 format. However, it can only use default values for information that was not recorded at the earlier release, and full results cannot be guaranteed when using Visual Explain. Please refer to 7.6.2, “Reviewing the SQL Performance Monitor results” on page 226, for a discussion on how to analyze the collected data. Exit the Visual Explain window and the Explainable Statements window after you complete your analysis. Depending on your future needs for further investigation, you may either retain the performance data or delete it from the system at this time. To delete an SQL Performance Monitor collection, right-click the data collection you are interested in, and select Delete. Importing performance data for Query/400 Query/400 is not included in the list of queries that can be monitored by the SQL Performance Monitor, even though debug messages can be used with Query/400 queries. Because Query/400 queries are often blamed for poor performance, and sometimes even banned from execution during daylight hours, it was thought appropriate to provide guidance to bring Query/400 queries into the scope of Visual Explain. There is no direct Query/400 to SQL command. However, the STRQMQRY CL command will run a query definition (object type *QRYDFN) as an SQL statement, as long as the parameter ALWQRYDFN is set to either *YES or *ONLY. To use this SQL statement with Visual Explain, either start an SQL Performance Monitor for this job before you issue the STRQMQRY command or use the native STRDBMON CL command to collect detailed data for the job. See 7.6.1, “Starting the SQL Performance Monitor” on page 222, for further information. Alternatively, you can access the SQL statement by using the Current SQL for a Job option (obtained by right-clicking the database icon in Operations Navigator). Here we document the actions you need to follow to import the database performance data collected for Query/400 using the STRQMQRY command as a workaround. For documentation on the differences between STRQRY and STRQMQRY, see: http://publib.boulder.ibm.com/pubs/html/as400/v5r1/ic2924/info/index.htm Use STRQMQRY as a search word. In our example, we collect database performance data on a pre-existing Query/400 definition, named TESTQRY in library ITSCID41 using Database Monitor on the iSeries server. We are going to create an SQL Performance Monitor named QRYIMPORT. We perform the following steps: 1. 
Start the traditional green-screen tool to collect database performance data, Database Monitor, into a file named QAQQDBMN in library ITSCID41, using the STRDBMON CL command: STRDBMON OUTFILE(ITSCID41/QAQQDBMN) TYPE(*DETAIL) COMMENT('Collecting Data for Query/400 - to be Imported in SQL Monitor') See Figure 7-69 for a sample of this command. All traditional 5250 commands and activities documented here have been performed in the same working session. Had it been otherwise, the STRDBMON command would have to point to the correct job (STRDBMON JOB(nnnnnn/USER/JOBNAME)). Figure 7-69 Start Database Monitor 2. Now that we are recording all the database activities for our current job, we can run the previously prepared Query/400 definition using Query Management Query, as shown in Figure 7-70. Figure 7-70 Start Query Management Query

Start Database Monitor (STRDBMON)
Type choices, press Enter.
File to receive output . . . . . >   QAQQDBMN      Name
  Library  . . . . . . . . . . . >   ITSCID41      Name, *LIBL, *CURLIB
Output member options:
  Member to receive output . . .     *FIRST        Name, *FIRST
  Replace or add records . . . .     *REPLACE      *REPLACE, *ADD
Job name . . . . . . . . . . . .     *             Name, *, *ALL
  User . . . . . . . . . . . . .                   Name
  Number . . . . . . . . . . . .                   000000-999999
Type of records  . . . . . . . . >   *DETAIL       *SUMMARY, *DETAIL
Force record write . . . . . . .     *CALC         0-32767, *CALC
Comment  . . . . . . . . . . . . >   Collecting Data for Query/400 - to be Imported in SQL Monitor
                                                                      Bottom
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

Start Query Management Query (STRQMQRY)
Type choices, press Enter.
Query management query . . . . . >   TESTQRY       Name
  Library  . . . . . . . . . . . >   ITSCID41      Name, *LIBL, *CURLIB
Output . . . . . . . . . . . . .     *             *, *PRINT, *OUTFILE
Query management report form . .     *SYSDFT       Name, *SYSDFT, *QMQRY
  Library  . . . . . . . . . . .                   Name, *LIBL, *CURLIB
                         Additional Parameters
Relational database  . . . . . .     *NONE
Connection Method  . . . . . . .     *DUW          *DUW, *RUW
User . . . . . . . . . . . . . .     *CURRENT      Name, *CURRENT
Password . . . . . . . . . . . .                   Character value, *NONE
Naming convention  . . . . . . .     *SYS          *SYS, *SAA
Allow information from QRYDFN  . >   *YES          *NO, *YES, *ONLY
                                                                      More...
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

Use the STRQMQRY CL command with the ALWQRYDFN parameter set to *YES to enable it to exploit the pre-existing query definition: STRQMQRY QMQRY(ITSCID41/TESTQRY) ALWQRYDFN(*YES) 3. After the query has finished executing, you notice message QWM2704 posted into your job log, as shown in Figure 7-71. This is an informational message documenting that no query management object was found. However, a query definition having the same name existed and it was used, as allowed by the ALWQRYDFN(*YES) parameter. Figure 7-71 Message QWM2704 4. At this point, if you are finished collecting data, you can end the Database Monitor by using the CL command: ENDDBMON 5. Now you can use the SQL Performance Monitor tool in Operations Navigator to import and analyze the data collected. To do so, select Database->SQL Performance Monitor. Right-click SQL Performance Monitor and select Import. You see the dialog shown in Figure 7-72. Additional Message Information Message ID . . . . . . : QWM2704 Severity . . . . . . . : 00 Message type . . . . . : Diagnostic Date sent . . . . . . : 10/18/00 Time sent . . . . . .
: 16:14:21 Message . . . . : STRQMQRY command completed using derived information. Cause . . . . . : Information was derived from a Query/400 query definition and used instead of at least one query management object that could not be found. The Query/400 query definition was found using the names specified for locating the query management object. Recovery . . . : Refer to the job log for the name and object type specific message for each instance in which information had to be derived. Bottom Press Enter to continue. F3=Exit F6=Print F9=Display message details F10=Display messages in job log F12=Cancel F21=Select assistance level Chapter 7. Database administration 237 Figure 7-72 Importing data in SQL Performance Monitor Here specify a name for the new Monitor you are creating and the file and library containing the previously collected data. 6. The new monitor name is added to the list shown at the SQL Performance Monitor item. Right-click it and select List Explainable Statements, as shown in Figure 7-73. Figure 7-73 SQL Performance Monitor - List Explainable Statement After you select this option, you are presented with the List Explainable Statements dialog (Figure 7-74). On this display, you can see the actual SQL statement generated by Query Manager to resolve the original Query/400 request. 238 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 7-74 List Explainable Statements dialog From here, you can link into Visual Explain to analyze the database behavior during the execution of this query. For more information on this subject, refer to Chapter 10, “Visual Explain” on page 301. © Copyright IBM Corp. 1994, 1997, 2000, 2001 239 Chapter 8. Database Navigator This chapter introduces you to the DB2 Universal Database for iSeries Database Navigator feature and its capabilities. It covers the following topics:  Finding Database Navigator  Finding database relationships prior to V5R1M0  Database Navigator maps  Database Navigator map interface  Objects to Display window  Database Navigator map display  Available options on each active icon on a map  Creating a Database Navigator map  Adding new objects to a map  Changing the objects to include in a map  Changing object placement and arranging object in a map  Creating a user-defined relationship 8 240 Advanced Functions and Administration on DB2 Universal Database for iSeries 8.1 Introduction The launch of DB2 Universal Database for iSeries Database Navigator, which is part of Operations Navigator in V5R1M0 Client Access Express, allows database administrators to map the complex relationships between database objects. 
The database component of Operations Navigator at V5R1M0 provides additional graphical interfaces for new functions that include:  The ability to create and manage tables, views, indexes, constraints, journals, journal receivers, and system (external) and SQL Triggers  The ability to graphically view the relationships between the various parts of an existing DB2 UDB database and save and update these visual maps with the push of a button  The ability to reverse engineer an existing database so that the database administrator can port the database to other iSeries servers as well as other platforms The relationships that you see on the Database Navigator map are the relationships between:  Tables (for example, referential integrity constraints)  Any indexes over the tables  Any constraints, such as primary, foreign, unique, and check  Any views over the tables  Any aliases for the tables, etc. 8.1.1 System requirements and planning Be sure you have an iSeries server with OS/400 V5R1M0, or higher, with:  5722-SS1: Option 12 - Host Servers  5722-TC1: TCP/IP Connectivity Utilities  5722-XE1: Client Access Express V5R1M0 8.2 Finding Database Navigator Database Navigator resides under the Database icon of Operations Navigator. Open the Operations Navigator window and click the desired iSeries server. Click the (+) sign next to the Database icon. Now, click the Database Navigator icon. The Database Navigator maps that are available appear. Database Navigator is part of the Database icon within Operations Navigator. There are three functions beneath the Database icon (Figure 8-1):  Libraries  Database Navigator  SQL Performance Monitors Note: Database Navigator is not intended to be a data modeling tool like some existing products in the industry. Chapter 8. Database Navigator 241 Figure 8-1 Database options within Operations Navigator The new Database Navigator feature allows you to perform many tasks, including:  Create a table  Create a view  Create an alias  Create a journal  Create an index  Create a map of your database  Create new SQL objects to be displayed in the map  View the properties of a map  View the properties of an object within a map  Generate SQL for an object within a map  Generate SQL for all objects within a map  Generate SQL for selected objects within a map  Generate SQL for visible objects in a map  Expand a table in a map  Collapse a table in a map  Add a referential constraint  Add a check constraint  Add user defined relationships to a map  Add a key constraint An option also exists for removing and adding objects to this relationship, such as journals and receivers. These are not selected as a default for the map because they may cause the map to be very complicated. To add these extra objects, click the Options menu as shown in Figure 8-2. 242 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 8-2 Viewing or changing the user preferences for the Database Navigator map After you select the user preferences option, you can see the default user preferences for creating a Database Navigator map. Figure 8-3 shows you the various objects that may be included on the map. From the Database Navigator map, you can directly affect the relationships on the database by adding or removing indexes, files, views, etc. 
Figure 8-3 Database Navigator user preferences 8.3 Finding database relationships prior to V5R1M0 To see the benefits of Database Navigator, you must find the relationship between database objects on an iSeries server that is not at Client Access Express V5R1M0. You must use several commands to achieve the same results that you get with Database Navigator. Some of the commands that are needed are:  DSPDBR FILE(SAMPLEDB16/ACT) OUTPUT(*PRINT) This command (Figure 8-4) shows the indexes, views, and constraints related to the selected table.  DSPFD FILE(SAMPLEDB16/ACT) TYPE(*CST) OUTPUT(*PRINT) This command (Figure 8-5) shows the details of the constraints built over the selected table. Chapter 8. Database Navigator 243  DSPFD FILE(SAMPLEDB16/ACT) TYPE(*ACCPTH) OUTPUT(*PRINT) This command (Figure 8-6) shows you the access path that is built over the selected table. You also need to use the WRKJRNA command to determine which files are journaled to other journals. Although the DSPFD command also shows this, you cannot obtain an overview without using these commands. It is possible to build a relationship map. However, it takes time and a great deal of effort. It is also very difficult for a new database administrator to envision the layout of an existing or new database and the effect of removing an index or constraint on other files. This can result in unnecessary resources allocated to files, indexes, and constraints that may not be needed. This is because the referential integrity map is only as good as the last time the database administrator actually checked the authenticity of the map that was previously created. The AFP viewer screens show the commands mentioned in the previous list (Figure 8-4 through Figure 8-6). Figure 8-4 Display Database Relations display Figure 8-5 Display File Description display showing constraints on a particular file 244 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 8-6 Display File Description display showing access paths The entire process for creating a physical or mental picture of which table is related to which index is very difficult to administer. The practical difficulties of keeping this picture up-to-date requires time and effort on the part of the database administrator. It is also difficult to explain the entire picture when doing training for new staff, and it requires valuable time and effort on the part of the database administrator. The process is simplified with the new Database Navigator feature of Client Access Express V5R1M0. 8.4 Database Navigator maps Database Navigator enables you to visually depict the relationships of database objects on your iSeries server. The visual depiction you create for your database is called a Database Navigator map. In essence, the Database Navigator map is a snapshot of your database and the relationships that exist between all of the objects in the map. Click the Database Navigator icon to bring up the list of Database Navigator maps that are available in the system. The maps appear in the right-hand side of the Operations Navigator window as shown in Figure 8-7. Chapter 8. Database Navigator 245 Figure 8-7 Database Navigator maps Double-click the database map you want to view. The Database Navigator map window with the selected map appears as shown in Figure 8-8. Figure 8-8 Database Navigator map Be aware that the map shown is the Database Navigator map at the time that it was saved. 
This means that things may have changed on the system since the map was created and saved. To view an up-to-date picture of the database, refresh the map by clicking the View menu and selecting the Refresh option as shown in Figure 8-9. 246 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 8-9 Refreshing a map The Database Navigator maps are stored on the iSeries server. Only one person at a time can work on the map to ensure integrity. The maps are locked when they are being used. Because they are stored on the iSeries server, you must have a connection to the system to be able to open a map. You can save multiple maps of the same database reflecting the database design at certain points in time. For this, you use different names. Whenever you want to compare how the database design has changed, you open and print the appropriate maps. Using the printouts, you can compare how the database design has changed over time. Note: For a complete explanation of the icons, refer to 8.8, “The Database Navigator map icons” on page 269. 8.5 The Database Navigator map interface As stated previously, the Database Navigator map provides a graphical interface that allows the database administrator to see the layout of the various objects in the database. One of the new functions added for V5R1M0 is the task pad. This is located in the lower part of the Operations Navigator window. If you click on the various higher level options, such as Security, Users and Groups, Database, etc., the task pad changes accordingly to present you with the options that are available when the database functions is selected as shown in Figure 8-10. The options available on the Database task pad are:  Select libraries to display  Create new summary SQL performance monitor  Create new detailed SQL performance monitor  Map your database  Run an SQL script Chapter 8. Database Navigator 247 Figure 8-10 Database task pad options Click the option in the task pad to create a map of your database. The window shown in Figure 8-11 appears. Figure 8-11 Default map display The primary workspace for Database Navigator is a window that is divided into several main areas as shown in Figure 8-12. These areas allow you to find the objects to include in a map, show and hide items in a map, view the map, and check the status of changes pending for a map. The following list describes the main areas of the Database Navigator window:  Locator pane The locator pane, on the left side of the Database Navigator window, is used to find the objects that you want to include in your new map, or to locate objects that are part of an open map. The upper Locator Pane is a search facility that can be used to specify the Name, Type, and Library of the objects that you want to include in the map. The results of the search are displayed in the lower Locator Pane under the Library Tree and Library 248 Advanced Functions and Administration on DB2 Universal Database for iSeries Table tabs. When the results are displayed under these tabs, you can add objects to the map by right-clicking an object and selecting Add to Map or double-clicking the object name. Then, when the map is created, you can see a list of the objects in the map by clicking the Objects In Map tab. The Locator Pane is divided in two parts: – The upper locator window This window allows you to search for database objects on the iSeries server. 
When an object is found, it is placed in the object window: • On the search criteria, you can specify single objects or search for generic names using the * (for example, EMPLE*). • You can specify all object types or indexes, tables, and views. • You can specify one library from your library list or all libraries to search on. – The lower locator This window has three parts: • The library tree: This can either show individual libraries, libraries in your list, or all libraries on the system. • The library table: This shows the tables, indexes, or views of the libraries in the library tree. • Objects in map: This shows all of the objects in the map, whether they are hidden or not. Within this display, you can select or deselect objects to be placed in the map. Figure 8-12 Three main windows in the Navigator map display Note: Any changes that are made using the search criteria require that you click the Search button to change the library tree or the library table displays. Main Map Window Upper Locator Lower Locator Chapter 8. Database Navigator 249  Map pane The map pane, on the right side of the Database Navigator window, graphically displays the database objects and their relationships. In the Map pane, you can: – Add tables and views that exist on the system, but were not originally included in the current instance of the map – Remove objects from the map – Change object placement – Zoom in or out on an object – Make changes to objects in the map – Generate the SQL for all objects in the map These windows are the main interface that allow you to change what you see in the main map window, search for other objects to add to the map, and move the objects around within the map to make it easy to read.  Object status bar: This part of the window consists of three parts (Figure 8-13): – Object Status Bar: This displays the number of objects that are visible in the Database Navigator map and how many are eligible to be added to the map. – Action Status Bar: This provides a clear description of the actions taken that affect the map and any modifications that are pending. – Modification Status Bar: This indicates whether a modification has been made or is pending. Figure 8-13 The status bar The Database Navigator map display also supports the following menu options:  File menu: You can select a number of options, including: – New: Allows you to create a new map – Open: Allows you to open a previously saved map – Close: This closes the currently open map – Save: Allows you to save the current map with which you are working – Save as: Allows you to save the current map you are working with and change the name and location if the map has previously been saved – Print: This option allows you to print to a previously defined printer – Exit: This option closes the map of your database screen Note: If changes are made to the map, or if this is a new map, you are prompted as to whether you want to save the map. Object Status Action Status Modification Status 250 Advanced Functions and Administration on DB2 Universal Database for iSeries  View menu: The following options are available: – Zoom • In: Allows you to zoom in on the map • Out: Allows you to zoom out on the map • Fit to Window: Allows you to fit the map to the current window size • To Selected Objects: Positions the window to the object that has been selected – Refresh: This updates the database map with any changes that are made. 
– Object Spacing: This allows you to increase or decrease the vertical and horizontal spacings of the objects in the map. – Show Overview Window: This brings up a window (Figure 8-14) that allows you to see an overview of the map currently open. This overview allows you to position the main screen to any part of the map. This is particularly useful on very large or complicated maps. – Show objects of type: This allows you to add objects to the map, such as aliases, journals, etc. – Arrange: This allows you to change the map back to the original settings. Figure 8-14 Overview windows showing how moving the overview box changes the main map display window  Options menu: The following option are available: – User Preferences: This allows you to change the objects that appear on the map as it is created (Figure 8-15). Chapter 8. Database Navigator 251 Figure 8-15 User Preferences display – Change List of Libraries: This allows you to change the libraries that are displayed (Figure 8-16). Figure 8-16 Change list of libraries display If a new library is typed on the Enter Libraries box, instead of selecting from a list, the system ensures that the library exists before allowing the object to be added to your list.  Map menu: The Generate SQL option appears with the following sub-options: – For all objects – Selected objects – Visible objects For each of these options, the system creates the SQL script used to generate the objects, and it prompts the Run SQL Scripts window to appear as shown in Figure 8-17. 252 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 8-17 Generate SQL window The Run SQL Scripts window allows you to see the SQL statement that was used to create the object and it allows you to take individual tables, whole databases, or entire maps of objects to other iSeries servers or SQL platforms. The Create option includes the following sub-options:  Journal: This allows you to create a new journal.  Table: This allows you to create a new table.  View: This allows you to create a new view.  User Defined Relationship: This allows you to create a user-defined relationship. This helps by allowing the database administrator to add relationships of important tables, of the database, and so on. This function is likely used to illustrate a referential integrity constraint that is implemented on the application logic and is not defined in the database. This can also be used to illustrate relationships that are not physical in the map for debug or education purposes. A tool bar exists that has a lot of the functionality previously mentioned. It includes the following features:  Show or hide Indexes  Show or hide views  Show or hide journals  Show or hide journal receivers  Show or hide primary key constraints  Show or hide check constraints  Show or hide unique key constraints  Show or hide table aliases  Show or hide view aliases Note: If the objects are not available to hide or view, the button is grayed out. Chapter 8. Database Navigator 253 8.5.1 Objects to Display window Within the Objects to Display window, only one option is available – the Find in Map option. This option allows you to find a specific object in the map. When this option is selected, the chosen object appears in the selected map window. 8.5.2 Database Navigator map display The Database Navigator map main display is another interface for managing and changing your database using Operations Navigator. Each object on the Database Navigator map is active, and various options are available. 
From the main display, you can add objects to a map, create new objects, etc., as previously described. This section explains the various functions available to you from this display. Right-click the main window to view the following menus (Figure 8-18):  Create: You can create journal, tables, views, and user-defined relationships by choosing this option.  Zoom: You can zoom in or out and make the map fit the window by selecting this option.  Generate SQL: You can generate the SQL for all objects or only the visible options by selecting this option.  Remove all line bends: This removes all bends in the database map joining arrows.  Arrange: Returns the objects within the map to there position at creation even if the map was saved previously.  Properties: This shows you a properties display of the map itself. Figure 8-18 Options that are available when you right-click the map As previously stated, each object type is active. By right-clicking the objects, you can access several different options. Note: A map is saved on the iSeries server as an object type of *FILE. 254 Advanced Functions and Administration on DB2 Universal Database for iSeries Flyover Because each object is active, there is a new function that allows you to view a brief description of an object within the map simply by placing the cursor over the object. This is called a flyover. Depending on the type of object, different information types appear. The basic display for each object shows the object name, the system name on which the object resides, the library, and the type of object as shown in Figure 8-19. Figure 8-19 Example flyover display After you refresh the display, a window, like the example in Figure 8-20, appears while the refresh runs. Figure 8-20 Refresh on database in progress After the map is built or refreshed, you can manipulate the objects however you want. From within the map display, you can actually move the icons around to suit your requirements. Chapter 8. Database Navigator 255 8.6 Available options on each active icon on a map This section discusses the options that are available to you from within the map display. These options are available by right-clicking each of the different objects in the map. 8.6.1 Table options Figure 8-21 shows the various options available to you when you are using the active icon for a table within a Database Navigator map. All objects on a map are active, and they enable you to manipulate the object without leaving the map. Figure 8-21 Flyover display of a table within a Database Navigator map Right-click a table within the map. A window appears as shown in Figure 8-22. Figure 8-22 Right-clicking a table within a Database Navigator map The options that appear include:  Open: This allows you to open the file for update.  Quick view: This shows you the file and its contents (read only). 256 Advanced Functions and Administration on DB2 Universal Database for iSeries  Description: From within this option, there are six tabs: – General: This shows the size of the object, the current number of rows, the number of deleted rows, and whether the table reuses deleted records. – Allocation: This shows the settings for the maximum number of rows, the initial number of rows, the increment of the number of rows, the maximum number of increments, and other options. 
– Access Path: This shows the current size of the access path, the maximum size, the maximum key length, whether the access path is valid or shared, whether it is journaled, what the maintenance and recovery of the access path is set to, and other options (Figure 8-23). Figure 8-23 Access Path display – Usage: This shows you the creation date of the table, the last used date, the last changed date, and other details of the table. – Activity: This shows the latest activity on the table since the last machine restart. – Details: This shows the creation date of the table, the maximum row length, and more.  Journaling: This specifies whether journaling is on.  Locked Rows: This shows any rows that are locked on the table.  Create Alias: This allows you to create an alias for the table.  Reorganize: This allows you to reorganize the file by compressing deleted records (by table key or by a selected index).  Permissions: This allows you to set security for a table.  Expand (new function): This shows additional details of the table, such as the columns and indexes built over the table.  Collapse (new function): This returns the display to the default setting for the table.  Generate SQL (new function): This creates an SQL script window (Figure 8-24) that allows you to recreate the table or multiple objects depending on the option selected from the Generate SQL screen. The Generate SQL function is new for V5R1M0 and is available Chapter 8. Database Navigator 257 for individual objects or entire databases. This option is available through the Database Navigator map and through the library display within the database option in Operations Navigator. This option is discussed later in more detail. Figure 8-24 Generate SQL Script window  Remove From Map (new function): This removes a particular view from the map. If the object is not included in the map, you see the Add to map function highlighted.  Delete: This allows you to delete a particular table.  Rename: This allows you to rename the table.  Properties: This shows you a display of the properties of a table. 8.6.2 Index options Right-click an index to access the options shown in Figure 8-25. 258 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 8-25 Right-clicking an index The primary options are:  Description: In this option, there are three views: – Access path: See Figure 8-23 for more details on this screen – Usage: This shows you the creation date of the index, the last used date, the last changed date, and other details of the index. – Details: This shows the creation date of the index, the maximum row length, and more.  Permissions: This allows you to set security for the object. 8.6.3 Constraint options If you right-click any of the constraints on the map, the following options appear:  Generate SQL (new function): This creates an SQL script window (Figure 8-24).  Remove from map (new function): This removes a particular constraint from the map. If the object is not included in the map, the Add to map function appears highlighted.  Properties: This shows you the properties of the table over which the constraint is defined as shown in Figure 8-26. Chapter 8. Database Navigator 259 Figure 8-26 File properties window showing constraints 8.6.4 View options Right-click any view on the map to see the following options:  Quick View: This shows you the view contents (read only).  Description: – Usage: This shows you the creation date of the view, the last used date, the last changed date, and other view details. 
– Details: This shows the creation date of the view, the maximum row length, and additional details.  Create Alias: This allows you to create an alias for the view.  Permissions: This allows you to set security for the view.  Generate SQL (new function): This creates an SQL script window that allows you to recreate the table or multiple objects depending on the option selected from the generate SQL screen. The Generate SQL function is new for V5R1M0 and is available for individual objects or entire databases. This option is available through the Database Navigator map and also through the library display within the database option in Operations Navigator.  Remove (new function): This removes a particular view from the map. If the object is not included in the map, the Add to map function appears highlighted.  Properties: This shows you the properties of the view. If it is an SQL view, it shows the SQL statement used to create the view. If it is a logical file, a message appears stating that it was not created in SQL and, therefore, it cannot be shown.  Hide (new function): This allows you to remove the view from the map only. 260 Advanced Functions and Administration on DB2 Universal Database for iSeries 8.6.5 Journal options If you right-click a journal, the options shown in Figure 8-27 appear. Figure 8-27 Right-clicking a journal The various options include:  Start or End Table Journaling: This allows you to end or start journaling on any table on the system to the selected journal.  Swap receivers: This allows you to perform the equivalent of a CHGJRN *GEN from a normal green-screen command.  Permissions: This allows you to set security for the journal.  Delete: This allows you to delete a particular journal.  Remove from map (new function): This allows you to remove a particular journal from the map. If the object is not included in the map, the Add to map function appears highlighted.  Hide (new function): This allows you to remove the journal from the map only.  Properties: This shows you the properties of the journal. 8.6.6 Journal receiver options The various Journal receiver options include:  Permissions: This allows you to set security for the journal receiver.  Delete: This allows you to delete a particular journal receiver.  Remove from map (new function): This allows you to remove a particular journal receiver from the map. If the object is not included in the map, the Add to map function appears highlighted.  Hide (new function): This allows you to remove the journal receiver from the map only.  Properties: This shows you the properties of the journal receiver. Chapter 8. Database Navigator 261 8.7 Creating a Database Navigator map The visual depiction that you create of your database is called a Database Navigator map. To create a Database Navigator map, you need to follow these steps: 1. In the Operations Navigator window, expand your server Database. 2. Right-click Database Navigator and select New from the pull-down menu to create your Map as shown in Figure 8-28. Figure 8-28 Database Navigator option 3. The Operations Navigator library list appears in the left side of the Database navigator window. Double-click the SAMPLEDB04 library to expand the objects. 4. Double-click Tables in the Locator Pane to expand all the tables in a database. 5. Double-click the EMPLOYEE table on the lower Locator Pane to start building a map. This table is added to the map and all related objects, as shown in Figure 8-29. 
Figure 8-29 Selecting a database to build a map
6. The map is built from the cross-reference files (XREF) on the iSeries server. The relationships and statistics are based on the table that you selected to generate the map, as shown in Figure 8-30.
Figure 8-30 Building a Database Navigator map
7. Click the minus (-) sign next to the SAMPLEDB04 database object to collapse the tree view.
8. Use the vertical and horizontal scroll bars to navigate the map on the Database Navigator window as shown in Figure 8-31.
Figure 8-31 Database Navigator map
9. You can save this map by selecting File -> Exit. Then, if changes are pending, select Yes on the Save Changes To dialog. This map can be reopened at a later time.
Once you create the map of your database, you can:
 Add new objects to a map
 Change the objects to include in a map
 Create a user-defined relationship
8.7.1 Adding new objects to a map
With Database Navigator, you can create new SQL objects to add to your map. Among the objects that can be created are:
 Tables
 Journals
 Views
Note: You can also click the Map your database task on the task pad at the bottom of the Operations Navigator window to create a map.
To create new SQL objects to be displayed in a map, follow these steps:
1. Open a Database Navigator map.
2. Click the View menu. From the pull-down menu, select Show Objects of Type -> Views to include all views in the map. The Object Status Bar is updated to reflect the new objects included in the map (Figure 8-32).
Figure 8-32 Adding Views objects in the map
3. Use the vertical and horizontal scroll bars to navigate to the top of the map.
8.7.2 Changing the objects to include in a map
By default, Database Navigator searches for and includes all objects in your map. To limit the number of objects that are searched for, you can change the user preferences. To change which objects to include in the map, follow these steps:
1. Open a Database Navigator map.
2. From the Options menu, select User Preferences.
3. On the User Preferences dialog, in the When adding an object to the map find these related objects group box, select the objects you want to include, or deselect the objects you do not want to include.
4. Click OK.
5. If you want to refresh the map with the new preferences, click Yes in the Information box.
Important: You can change the zoom level of the Database Navigator map to manage how much of the map you can see in the map pane of the Database Navigator window.
8.7.3 Changing object placement and arranging objects in a map
When you have a map, you can arrange and move objects in the map. This section also shows how to remove the bends that appear on the relationship lines after an object is moved to a new location.
1. Double-click the EMPLOYEE table in the list of tables to find this table in the map.
2. Drag-and-drop the EMPLOYEE table to the left as shown in Figure 8-33.
3. Right-click every relationship line and select Remove Bends to remove all bends.
Figure 8-33 Changing object placement
4. Right-click in a free space in the map pane in the Database Navigator window. The Arrange function appears.
Important: When you use the Arrange option, it removes any customized object position or relationship line that you have created. The Arrange option puts the map back in a default state.
266 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 8-34 Arrange objects in the map 5. Select Arrange to minimize the line crossing the map. 8.7.4 Creating a user-defined relationship As explained previously, when you have relationships that are defined by your programs, you can create a user-defined relationship in Database Navigator so that your relationship is displayed in the map. An example of this may be creating a user-defined relationship to remind programmers of an important join between two tables. To add a user-defined relationship to your map, complete these steps: 1. Open a Database Navigator map. 2. Right-click in a free space on the map pane in the Database Navigator window. Select the Create function as shown in Figure 8-35. 3. Select Create, and then select User-Defined Relationship to create the new object (UDR). Chapter 8. Database Navigator 267 Figure 8-35 Selecting the function to create a user-defined relationship 4. Specify a name and a description for the user-defined relationship. Unlike some Operations Navigator functions where the description is optional, it is important to provide a meaningful description for your user-defined relationship because it is the only way for you to indicate what the user-defined relationship represents as shown in Figure 8-36. 5. Select the objects that you want to include in the relationship by selecting from the list of objects (Figure 8-36) 6. Choose the shape and color you want for the object (Figure 8-36). 268 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 8-36 Creating a user-defined relationship 7. Click OK to create the user-defined relationship. The map should show a user-defined relationship (UDR) as shown in Figure 8-37. Figure 8-37 Flyover view of a user-defined relationship Chapter 8. Database Navigator 269 8.8 The Database Navigator map icons The icons that you may encounter on the Database Navigator map are shown in Table 8-1. Table 8-1 Database Navigator map icons The Library icon is used in the Database Navigator map display to show a library. The Table icon is used in the Database Navigator map to show a table. The Table Alias icon is used in the Database Navigator map to show table aliases. It also is used as a toolbar icon for adding or removing a table alias from the Database Navigator map. The Index icon is used in the Database Navigator map to show an index. The Journal icon is used in the Database Navigator map to show a journal. It is also used as a toolbar icon for adding or removing a journal from the Database Navigator map. The Journal Receiver icon is used in the Database Navigator map to show a journal receiver. It is also used as a toolbar icon for adding or removing a journal receiver from the Database Navigator map. The Primary Key Constraint icon is used in the Database Navigator map to show a primary key constraint. It is also used as a toolbar icon for adding or removing a primary key constraint from the Database Navigator map. The Check Key Constraint icon is used in the Database Navigator map to show a check key constraint. It is also used as a toolbar icon for adding or removing a check key constraint from the Database Navigator map. The Unique Constraint icon is used in the Database Navigator map to show a unique constraint. It is also used as a toolbar icon for adding or removing a unique constraint from the Database Navigator map. 
The Foreign Key Constraint icon is used in the Database Navigator map to show a foreign key constraint.
The View icon is used in the Database Navigator map to show a view. It is also used as a toolbar icon for adding or removing a view from the Database Navigator map.
The Show/Hide Index icon is used on the toolbar to add or remove an index from the Database Navigator map.
The Show/Hide Alias icon is used on the toolbar to add or remove an alias from the Database Navigator map.
Left-click this icon to set the zoom on the map so that it fits the current window size.
Left-click this icon to increase the level of zoom on the map at the position of the cursor.
Left-click this icon to decrease the level of zoom on the map at the position of the cursor.
Left-click this icon to invoke the Overview window function. This allows you to position your Database Navigator map panel to any part of a map.
Left-click this icon to decrease the horizontal level of spacing between objects on the map.
Left-click this icon to increase the horizontal level of spacing between objects on the map.
Left-click this icon to decrease the vertical level of spacing between objects on the map.
Left-click this icon to increase the vertical level of spacing between objects on the map.
Chapter 9. Reverse engineering and Generate SQL
Reverse engineering is one of the major changes included in V5R1M0. This function allows you to create the SQL for a given schema, table, index, view, and so on, and, optionally, for all objects related to them. It enables database administrators to re-create or duplicate entire databases, or particular parts of a database, and to port them to other iSeries servers. This chapter includes:
 What Generate SQL is
 Reverse engineering an existing database
 Generating SQL DDL statements from a DDS created database
9.1 Introduction
The new Generate SQL function is often referred to as “reverse engineering for Operations Navigator” because it provides a GUI interface that allows you to reverse engineer several types of database objects. The results are SQL create statements (often referred to as DDL statements). The Generate SQL function of Operations Navigator allows you to reconstruct the SQL statements used to create existing database objects. With this function, you can reverse engineer database objects and then either display the resulting SQL in the Run SQL Scripts window or save the output to a file. Using the existing Run SQL Scripts functions, you can then edit, run, and save the SQL statements to a file on the PC. The new Generate SQL function supports the following database objects:
 Aliases
 Distinct types
 Functions
 Indexes
 Procedures
 Schemas (collections) and libraries
 Tables and physical files
 Views and logical files
9.1.1 System requirements and planning
Before you use Generate SQL, be sure the following prerequisites are available:
 5722-SS1: Option 12 - Host Servers
 5722-TC1: TCP/IP Connectivity Utilities
 5722-XE1: Client Access Express, V5R1M0, with the latest Service Pack applied
9.1.2 Generate SQL
Reverse engineering (Generate SQL) allows you, through the Database Navigator map and the Libraries display of Operations Navigator, to re-engineer an SQL database or an iSeries database that was not created using SQL.
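To make the idea concrete, the following is a hypothetical fragment of the kind of script that Generate SQL writes to the Run SQL Scripts window for a simple collection, a table, and its related objects. The object and column names are illustrative only (the fragment assumes an EMPLOYEE table already exists); the exact statements, labels, and informational comments depend on the objects you select and on the options described later in this chapter:

   -- Generate SQL produces ordinary DDL that can be edited, run, or saved
   CREATE COLLECTION SAMPLEDB04;

   CREATE TABLE SAMPLEDB04.DEPARTMENT (
     DEPTNO    CHAR(3)     NOT NULL,
     DEPTNAME  VARCHAR(36) NOT NULL,
     MGRNO     CHAR(6),
     CONSTRAINT DEPT_PK PRIMARY KEY (DEPTNO) );

   CREATE INDEX SAMPLEDB04.DEPT_MGR_IX
     ON SAMPLEDB04.DEPARTMENT (MGRNO);

   ALTER TABLE SAMPLEDB04.EMPLOYEE
     ADD CONSTRAINT EMP_DEPT_FK
     FOREIGN KEY (WORKDEPT) REFERENCES SAMPLEDB04.DEPARTMENT (DEPTNO);

Because the output is plain SQL, the same script can be replayed on another iSeries server or, with minor changes, on another platform in the DB2 UDB family.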
One of the uses of Generate SQL is to generate the SQL statements of tables, views, indexes, and constraints that were created using the Operations Navigator interface. For example, when you create a table using Operations Navigator, there is no method for saving the SQL statement that is behind the interface. In this case, Generate SQL provides a way to reverse engineer this object and obtain the SQL statement. The Generate SQL function of Database Navigator also creates the SQL statements of databases created by DDS (physical and logical files). You must be aware that keyed-logical files are converted to SQL views. When the Generate SQL process creates the Run SQL script for the selected object, it either marks any problem objects with SQL messages or it does not create the SQL for the object if it is not supported. You can create a Run SQL Script from object context or from schema context. Chapter 9. Reverse engineering and Generate SQL 273 The object context can be invoked from either the Database Navigator map or the Operations Navigator Library display. To do this, right-click the object and select the Generate SQL option. There is a difference between what appears when using the two methods. If the Generate the SQL option is selected from the Library display, the information shown in Figure 9-1 appears. Figure 9-1 Operations Navigator Generate SQL display The display shown in Figure 9-1 allows you to add or remove objects that will be re-engineered. This method allows you to change the objects that are selected and the standard by which they are generated, the format of the Run SQL script (Figure 9-2), and the options used to create the SQL script (Figure 9-3). 274 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-2 Generate SQL format options Figure 9-3 Generate SQL options The options that you can define include:  Standards options: This allows you to select which standards option you want for the generated SQL. The option that you choose affects the syntax of the generated SQL and ultimately how the Generate SQL runs. You may edit this value using the following sub-options: – ANSI/ISO: Select this option to allow the generation of SQL that can be executed on other ANSI/ISO SQL standard compliant databases. Chapter 9. Reverse engineering and Generate SQL 275 – DB2 UDB family: Select this option to allow the generation of SQL for use on other DB2 family platforms. – DB2 UDB with iSeries extensions: Select this option to allow the generation of SQL for use on other iSeries servers. Note: As a general guideline, if you want to generate SQL that is run on other DB2 platforms, select DB2 UDB. In addition, if the platform is another iSeries server, choose to include iSeries extensions. The choice that you make for the standard can affect subsequent formatting choices.  Generate labels: Select this option to include SQL labels and comments to be inserted into the generated SQL.  Format statements for readability: Select this option to format the generated SQL statements with end-of-line characters, tab characters, and spaces.  Include informational message: Select this option to include informational messages in your generated SQL. You should always include informational messages whenever you generate SQL for an object created using Data Description Specification (DDS). DDS is used to describe data attributes in file descriptions that are external to the application program that processes the data. 
You can then determine if you need to make changes to the generated SQL for it to run correctly. Once you make all the necessary changes, you may want to generate the SQL without the informational messages. Note: If the object for which you are generating SQL was originally created using SQL, there should not be any informational messages.  Include drop statements: Select this option to include drop statements for the objects for which you are generating SQL. The drop statements are inserted before the first Create SQL statement. This allows you to drop the object and then recreate it. Click the Generate button to prompt the system to generate the SQL and bring up the Run SQL script window (Figure 9-4). Figure 9-4 Generate SQL Run SQL Scripts window 276 Advanced Functions and Administration on DB2 Universal Database for iSeries One of the major advantages of the Generate SQL function is that the SQL can be ported to other iSeries servers and even to other platforms supporting SQL. This applies particularly to CASE tools that can use the Run SQL Script as input to recreate the database on other platforms. 9.2 Generating SQL from the library in Operations Navigator With Generate SQL, there is an option from your library in the Operations Navigator window to generate the SQL DDL statement for some objects. To generate this statement, follow these steps: 1. Start Operations Navigator. Click the iSeries server that you want to access (Figure 9-5). Once you have entered your user ID and password, expand the Database option. Figure 9-5 Operations Navigator 2. Under Database, click Libraries. Then select the library name, which in our case is SAMPLEDB04, for your iSeries server connection (Figure 9-6). Chapter 9. Reverse engineering and Generate SQL 277 Figure 9-6 Find library 3. Click the SAMPLEDB04 library to display the current content in the right window panel. On the right panel, press the Ctrl key, and locate and select the following tables as shown in Figure 9-7: – ACT – CL_SCHED – DEPARTMENT – EMP_PHOTO – EMP_RESUME – EMPLOYEE 278 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-7 Selecting objects from the library to generate SQL Figure 9-8 Generate SQL window 4. Click Generate to accept the default values as shown in Figure 9-9. Important: When the Generate SQL function is invoked, the new Generate SQL window appears as shown in Figure 9-8. This window provides a list of the objects initially selected and three tabs that specify Output, Format, and Options that are used in the Generate SQL. Chapter 9. Reverse engineering and Generate SQL 279 Figure 9-9 Generate SQL display 5. Switch to the new Run SQL Scripts window to see the generated SQL statement. Important: The initial list of objects in the Generate SQL window could be modified using the Add and Remove buttons to add new objects or remove objects from the initial list. 280 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-10 SQL generated in the Run SQL Scripts window 6. Click File and select Save As... from the pull-down menu to save the SQL script as shown in Figure 9-11. Chapter 9. Reverse engineering and Generate SQL 281 Figure 9-11 Saving the SQL Script 7. Click Save to save the SQL script file. 9.2.1 Generating SQL to PC and data source files on the iSeries server You can generate the SQL statements to a PC file and to a source member on the iSeries server. Let’s start by showing you how to generate the SQL statements of a group of objects to a PC file: 1. 
Start Operations Navigator. 2. Right-click the SAMPLEDB04 library (example in our case). Then select Generate SQL as shown in Figure 9-12. Important: You can use the SQL file to replicate your database files on another system (for example, a development system). 282 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-12 Generate SQL library in Operations Navigator 3. In the Generate SQL window, select the Write to file option on the Output tab as shown in Figure 9-13. The generated SQL is saved to a PC file. Figure 9-13 Selecting Generate SQL to PC 4. Click File Type and select the PC file option. 5. In the Location file, click Browse. Then select your directory (c:\DB2NAVSQL) from the pull-down menu to save your file. Chapter 9. Reverse engineering and Generate SQL 283 6. In the File name input field, type GENSQL042.SQL. In the Files of type input field, leave the default SQL files (.sql) as shown in Figure 9-14. Figure 9-14 Saving the SQL script to a PC file 7. Click the Select button to return to the Generate SQL tab. Figure 9-15 Select button 8. Click the Generate button to start generating the SQL DDL statements for all the objects in the library. A status window appears showing the progress of the generate SQL process as a percentage (Figure 9-16). 284 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-16 Generating SQL window 9. In the Operations Navigator window, click the Run SQL Script icon in the database task pad as shown in Figure 9-17. Figure 9-17 Selecting the Run SQL Script from the task pad option Important: One of the new functions added in V5R1 is the task pad (located in the lower part of the Operations Navigator window). If you click the various higher level options, such as Security, Users and Groups, Database, etc., this task pad changes accordingly. One of the database tasks of the task pad is Run SQL Script. Chapter 9. Reverse engineering and Generate SQL 285 10.In the Run SQL Scripts window, click File and select Open from the pull-down menu to open your SQL Script file (GENSQL042). 11.Click Look in and select your directory (C:\DBNAVSQL) from the pull-down menu to save your file. 12.Select your GENSQL042 file and click Open as shown in Figure 9-18. Figure 9-18 Restoring an SQL script file from a PC 13.View the SQL statements generated on the Run SQL Script window as shown in Figure 9-19. Take some time to analyze the order of the statements. Figure 9-19 SQL Script statement generated Important: Once the statements are generated, you can edit them to create a new copy in another library and optionally saved, or you can run them using the SQL Script facility. If multiple objects were selected to be SQL Generated, you have the option to run one, some, or all of the statements after any required editing. 286 Advanced Functions and Administration on DB2 Universal Database for iSeries Let’s see now how to generate the SQL statements of a group of objects to a source physical file on the iSeries server: 1. Click the SAMPLEDB04 library to display the content in the right window panel. 2. Click File and select Generate SQL... from the pull-down menu to view the Generate SQL window as shown in Figure 9-20. This is another way to generate SQL for a group of objects. Figure 9-20 Selecting Generate SQL from the File menu 3. In the Generate SQL window, click the Write to file option in the Output tab as shown in Figure 9-21. Chapter 9. 
Reverse engineering and Generate SQL 287 Figure 9-21 Generate SQL: Selecting Write to file 4. Click File Type and select the database source file. 5. Click Library and select the SAMPLEDB04 library in our case. 6. In the File Name input field, type GENSQL043. In the Member input field, type GENSQL043. 7. Click the Generate button to start the Generate SQL process on the iSeries server. Figure 9-22 Starting the Generate SQL process on the iSeries server 288 Advanced Functions and Administration on DB2 Universal Database for iSeries 8. Double-click GENSQL043 to see the script on the Operations Navigator window as shown in Figure 9-23. Figure 9-23 Selecting the source physical file to show the Generate SQL Script 9. Expand the window, and use the scroll bar to explore the script file as shown in Figure 9-24. Note: For existing files, the option to append to the file is provided. If an existing file is selected, and the append option is not chosen, you are asked if you want to overwrite the existing file. Chapter 9. Reverse engineering and Generate SQL 289 Figure 9-24 Exploring the SQL Script file from Operations Navigator 9.2.2 Generating SQL from the Database Navigator map It is also possible to generate the SQL DDL statement from some or all objects in a map generated by the Database Navigator feature (Chapter 8, “Database Navigator” on page 239). 1. Click the Database Navigator icon to display the maps on the right that exist on the iSeries server as shown in Figure 9-25. 290 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-25 Opening the Database Navigator map 2. Double-click to open the database map that you created. 3. Click the View menu and select Zoom-> To Fit Window from the pull-down menu to fit all objects on the map in this window as shown in Figure 9-26. Chapter 9. Reverse engineering and Generate SQL 291 Figure 9-26 Fitting all objects in a map 4. Use the vertical and horizontal scroll bars to navigate to the top of the map as shown in Figure 9-27. 292 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-27 Viewing all objects includes in the map 5. Use the criteria selection in the locator pane and select only your SAMPLEDB04 library. Click the Library parameter to select your library as shown in Figure 9-28. Chapter 9. Reverse engineering and Generate SQL 293 Figure 9-28 Selecting only your sample library to appear in the Database Navigator map 6. Click the plus (+) sign next your SAMPLEDB04 database to see the found objects, such as tables, indexes, and views. 7. Click the (+) sign next to the Tables database object to expand it. 8. Double-click the EMPLOYEE table in the list of tables to find this table in the map. 9. Right-click the EMPLOYEE table and select Generate SQL... as shown in Figure 9-29. 294 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-29 Generating SQL for a specific object from the map 10.In the Run SQL Script window, explore the Generated SQL statement, using the scroll bar to navigate as shown in Figure 9-30. Chapter 9. Reverse engineering and Generate SQL 295 Figure 9-30 Generating SQL from the employee table 11.Click File and select Save As... from the pull-down menu to save the SQL script as shown in Figure 9-31. 296 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 9-31 Saving the Script SQL 12.On the Save window, click your directory (C:\DBNAVSQL) from the pull-down menu to save your file. 
13.In the Name input field, type GENSQL044. In the Type input field, leave the default SQL files (.SQL) as shown in Figure 9-32. 14.Click Save to save the SQL script file. Figure 9-32 Saving the SQL Script file Now let’s see how to generate the SQL DDL statements for all the objects in a library. 1. Switch to the Database Navigator window. 2. Click the Map option and select Generate SQL from the pull-down menu. Click All Objects... to generate the SQL statement for all objects in your library as shown in Figure 9-33. Chapter 9. Reverse engineering and Generate SQL 297 Figure 9-33 Generate SQL for all objects in a library 3. A status window appears showing the progress of the Generate SQL as a percentage. 4. Click File and select Save As.... from the pull-down menu to save the map. 5. In the Save window, click to select your directory (C:\DBNAVSQL) from the pull-down menu to save your file. 6. In the File name input field, type GENSQL045. In the File of type input field, leave the default as SQL files (.SQL). 7. Click Save to save the SQL script file. 8. Click File and select Exit from the pull-down menu to close the Run SQL Script window. 9.2.3 Generating SQL from DDS The Generate SQL function works with objects created using SQL and also with objects that were created using DDS. These objects can also be reverse engineered into an SQL create statement. This is a way to start migrating or changing existing DDS created databases to SQL. Let’s see how to reverse engineer an existing DDS created database: 1. Click the plus (+) sign next to the Libraries object to expand the list of libraries. 2. Change the list of libraries in Operations Navigator to include the library that has DDS created objects. For this example, let’s say it is DDSLIBXX. Click DDSLIBXX. 298 Advanced Functions and Administration on DB2 Universal Database for iSeries 3. Right-click the DDSLIBXX library and select Generate SQL as shown in Figure 9-34. Figure 9-34 Selecting physical files to generate an SQL statement 4. Leave the default options. Click the Generate button in the Generate SQL window. 5. The SQL Script Center appears with the generated SQL DDL statements posted in the working area as shown in Figure 9-35. Chapter 9. Reverse engineering and Generate SQL 299 Figure 9-35 Exploring SQL script generated from physical files Important: There are some DDS-specific keywords that cannot be converted to SQL. This appears in the code as messages SQL150C and SQL509 (see Figure 9-35). 300 Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 301 Chapter 10. Visual Explain The launch of DB2 UDB for iSeries Visual Explain with Operations Navigator V4R5M0 was of great interest to database administrators working in an iSeries server environment. The product has been described as a quantum leap forward in database tuning for query optimization. Visual Explain provides an easy to understand graphical interface that represents the optimizer implementation of the query. For the first time, you can see, in graphic detail, how the optimizer has implemented the query. You can even see all of the facts and figures that the optimizer used to make its decisions. Best of all, the information is presented in one place, in color, with easy to follow displays. There is no more jumping between multiple windows, trying to figure out what is happening. Even better, if you currently have Operations Navigator, you already have Visual Explain. 
With all of this in mind, is such a richly featured product complicated to use? As long as you are familiar with database tuning, you will enjoy using Visual Explain and want to learn more. This chapter answers these questions:  Where do I find Visual Explain?  How do I use it?  What can it be used for?  Will it tune my SQL queries?  What about green-screen queries and those slow running batch jobs? 10 Note: The Visual Explain tool is most effectively used when you have a firm understanding of the DB2 Universal Database for iSeries query optimizer and database engine. The recommended way to obtain this skill and build this understanding is to attend the classroom-based S6140 - DB2 UDB for iSeries SQL & Query Performance Tuning and Monitoring Workshop from IBM Learning Services at: http://www-1.ibm.com/servers/eserver/iseries/education/ 302 Advanced Functions and Administration on DB2 Universal Database for iSeries 10.1 A brief history of the database and SQL If you look back in history, you will find that the database was actually “invented”. It rapidly gained widespread acceptance. So much so, that today, virtually all commercial applications are based on the concepts of a database. Ever since this invention, programmers have been developing applications to use and maintain these databases in an organization. At the same time, the art of performance tuning databases has evolved. In high-level languages, programmers optimized access to their database files with keyed access paths. Keyed access was then coded in the program to provide record-level access. Users were not involved at this time. As standards evolved, databases became larger and were used for diverse purposes, including both transaction-based and data warehousing applications. Structured Query Language (SQL or query) has, at the same time, become the standard for database access. The scope and power of SQL delivers a standard interface to any database that supports SQL standards. DB2 UDB for iSeries continues to adopt and support the SQL standards. SQL can be used in pre-written applications on the iSeries server, as well as applications generated by users. Many interrogation tools running on PCs depend on the SQL interface to access data on the iSeries server. The (anticipated) spread of e-commerce will lead to even more situations where SQL statements are executed on the iSeries server. The need to optimize data access has never been greater. The database is in the public domain now and not reserved only for programmers. 10.2 Database tuning so far General performance tuning will always influence query performance. Therefore, general system usage, competition with other jobs, other queries, amount of memory, processor capacity, processor usage, and so on will always influence the performance of queries. Assuming that the work environment can be controlled on any given system, the challenge is to apply similar levels of control to the database to optimize the queries. Queries running on the iSeries server are processed through a query optimizer, which creates an access plan based on the information it has available. This access plan includes information about the tables to be accessed and how the query will attempt to access those tables. By reviewing this access plan, actions can be taken to influence the outcome, and therefore, the performance of the query. 
These actions can include the creation of indexes to support the query, or can involve changing the way that the query statements are structured to create a more efficient access plan. 10.2.1 Query optimizer debug messages The earliest approach, and probably one of the most widely used, is the analysis of query optimizer debug messages. Running the query under the influence of debug causes the query optimizer to write additional informational messages to the job log. Chapter 10. Visual Explain 303 By looking at the messages in the job log and reviewing the second-level text behind the messages, you can identify changes (for example, creating a new index) that could improve the performance of the query. Analysis of optimizer debug messages was made easier with the addition of a predictive query governor. By specifying a time limit of zero in the predictive query governor, query optimizer debug messages can be generated in the job log without actually running the query. This means that a query that may take 16 hours to run can be analyzed in a few seconds. Some changes can be made to the query or the database and the effect can be modelled on the query in just a few minutes. The query would then be run when the optimum implementation has been achieved. 10.2.2 Database Monitor More recently, query optimizer debug messages have been joined by the Database Monitor. The Database Monitor gathers query execution statistics from the iSeries server and records them in a database file. This database file is then analyzed to provide performance information to help tune the query or the database. The Database Monitor is accessed directly from the database component of Operations Navigator. It can also be accessed from a 5250 device using the Start Database Monitor (STRDBMON) command or during the collection of performance data with the STRPFRCOL command. The analysis of the statistics gathered can be done through the SQL Performance Monitors in Operations Navigator. Operations Navigator provides many pre-defined reports to assist with the analysis of the performance data collected in this manner. 10.2.3 The PRTSQLINF command For SQL embedded in program and package objects, the Print SQL Information (PRTSQLINF) CL command extracts the optimizer access method information out of the objects and places that information in a spooled file. The spooled file contents can then be analyzed to determine if any changes are needed to improve performance. 10.2.4 Iterative approach The analysis of queries and tuning for query optimization is an ongoing iterative process. There is no easy solution for query performance and no precise table to which you can refer for the answers. Much depends on a “try it and see” approach. With this approach, queries are analyzed, and changes are made to the environment. The query is run again, and the environment is adjusted. The process repeats until optimum performance is achieved. The task of database tuning is complete only when the following statements are all true:  Users and programmers are not generating any new queries.  All existing queries have been completely tuned.  Query selection, sort, and summarization do not change.  The iSeries server workload is stable.  The volume of data in the tables is stable.  The content of the tables is not changing. 304 Advanced Functions and Administration on DB2 Universal Database for iSeries 10.3 Introducing Visual Explain In Client Access Express, the database component of Operations Navigator is a graphical way to manage the database. 
Visual Explain has been added to the database component in V4R5M0. Visual Explain provides a graphical way to identify and analyze database performance. 10.3.1 What is Visual Explain Visual Explain provides a graphical representation of the optimizer implementation of a query request. The query request is broken down into individual components with icons representing each unique component. Visual Explain also includes information on the database objects considered and chosen by the query optimizer. Visual Explain’s detailed representation of the query implementation makes it easier to understand where the greatest cost is being incurred. Visual Explain shows the job run environment details and the levels of database parallelism that were used to process the query. It also shows the access plan in diagram form, which allows you to zoom to any part of the diagram for further details. If query performance is an issue, Visual Explain provides information that can help you to determine whether you need to:  Rewrite or alter the SQL statement  Change the query attributes or environment settings  Create new indexes Best of all, you do not have to run the query to find this information. Visual Explain has a modeling option that allows you to explain the query without running it. That means you could try any of the changes suggested and see how they are likely to work, before you decide whether to implement them. Visual Explain is an advanced tool to assist you with the task of enhancing query performance, although it does not actually do this task for you. You still need to understand the process of query optimization and the different access plans that can be implemented. 10.3.2 Finding Visual Explain Visual Explain is a component of Operations Navigator and is found in the Database section of Operations Navigator. To locate the database section of Operations Navigator, you need to establish a session on your selected iSeries server using the Operations Navigator icon. Many functions within Operations Navigator are obtained by right-clicking. For example, you can right-click the Database icon to gain access to several of the query functions (Figure 10-1). Selecting Run SQL Scripts invokes the SQL Script Center. From the SQL Script Center, Visual Explain can be accessed directly, either from the menu or from the toolbar. This is explained in 10.4.1, “The SQL Script Center” on page 306. Another way to access Visual Explain is through the SQL Performance Monitor. The SQL Performance Monitor is used to create Database Performance Monitor data and to analyze the monitor data with pre-defined reports. Figure 10-1 Database options under the Database icon Chapter 10. Visual Explain 305 Visual Explain works with the monitor data that is collected by the SQL Performance Monitor on that system or by the Database Performance Monitor (STRDBMON), which is discussed in 10.6, “Using Visual Explain with Database Monitor data” on page 318. Visual Explain can also analyze Database Performance Monitor data collected on other systems once that data has been restored on the iSeries server. 10.3.3 Data access methods and operations supported Visual Explain was shipped for the first time in V4R5M0. Table 10-1 shows the methods and operations that are supported by Visual Explain. 
Table 10-1 Query access functions supported Optimizer access plan Debug Visual Explain Non-keyed access methods Table Scan   Parallel Table Scan   Parallel Pre-fetch   Parallel Table Pre-load   Skip Sequential with dynamic bitmap   Parallel Skip Sequential   Keyed Data Access Methods Key Positioning and Parallel Key Positioning   Dynamic Bitmaps/Index ANDing ORing   Key Selection and Parallel Key Selection   Index-From-Index   Index-Only Access   Parallel Index Pre-load   Joining, Grouping, Ordering Nested Loop Join   Hash Join  * Index Grouping   Hash Grouping   Index Ordering   Sort   Query Statements Select   Update   Insert  * Delete   306 Advanced Functions and Administration on DB2 Universal Database for iSeries Each of the methods shown in this table (debug and Visual Explain) provide assistance with the task of debugging queries and the analysis of queries to optimize their performance. None of these will automatically perform the changes for you. The sole purpose is to provide you with the necessary information so you can make an informed choice. The ease of use that Visual Explain offers can quickly disguise the fact that it is an advanced tool working for you in a highly technical area. Use Visual Explain to assist you with the task of enhancing query performance. Although Visual Explain cannot sort out problems for you, it can help you to identify and solve problems in a more effective way. You still need to understand the process of query optimization, the different database access plans that can be implemented, and the effects of those plans on the system. You also need to understand the database that you are tuning, its use, and the impact of creating and changing indexes. 10.4 Using Visual Explain with the SQL Script Center The Run SQL Script window (SQL Script Center) provides a direct route to Visual Explain. The window is used to enter, validate, and execute SQL commands and scripts and to provide an interface with OS/400 through the use of CL commands. 10.4.1 The SQL Script Center To access the SQL Script Center, right-click the Database option in Operations Navigator to see the Database menu. Select Run SQL Scripts. The Run SQL Script window appears with the toolbar as shown in Figure 10-2. Reading from left to right, there are icons to create, open, and save SQL scripts, followed by icons to cut, copy, paste, and insert generated SQL (V5R1) statements within scripts. Sub Query  * Union  * View materialization  * Operational Characteristics Index Usage   Index Advice   Open Data Path Usage   Work Management details  Notes:  Supported for analysis with this method. * PTF # XXX is required for V4R5. You need to load the latest Database Group PTF to obtain the best functionality of Visual Explain. Optimizer access plan Debug Visual Explain Figure 10-2 Toolbar from the SQL Script Center Chapter 10. Visual Explain 307 The hour glass icons (green downward arrows in V4R5) indicate to run the statements in the Run SQL Scripts window. These options are also available under the Run menu (Figure 10-3). From left to right, they run all of the statements in the window (All), run all of the statements from the cursor to the end (From Selected), or run the single statement identified by the cursor position (Selected). To the right of the hour glasses in Figure 10-2 is a Stop button, which is colored red when a run is in progress. The final icon in the toolbar is the Print icon. This is followed by two Visual Explain icons, colored blue and green. 
The left Visual Explain icon (blue) is to explain the SQL statement. The right Visual Explain icon (green) is to run and explain the SQL statement. The actions that you will choose are explained in a moment. Both of these options are also available on the drop-down menu (Figure 10-4). You may choose either option to start Visual Explain. Another option exists on the Visual Explain pull-down menu to show recent SQL Performance Monitors. SQL Performance Monitors can be used to record SQL statements that are explainable by Visual Explain. We recommend access via the SQL Performance Monitors icon, because this provides the full list of monitors. An SQL script is defined as one or more statements from the Run SQL Script working area below the toolbar. An initial comment is provided. Each complete statement needs a delimiter to mark the end of statement. The SQL Script Center uses a semi-colon (;) for this purpose. 10.4.2 Visual Explain Only The Visual Explain Only option (Ctrl+E or the blue toolbar icon) submits the query request to the optimizer and provides a visual explanation of the SQL statement and the access plan that will be used when executing the statement. In addition, it provides a detailed analysis of the results through a series of attributes and values associated with each of the icons. See Figure 10-5. To optimize an SQL statement, the optimizer validates the statement and then gathers statistics about the SQL statement and creates an access plan. When you choose the Visual Explain Only option, the optimizer processes the query statement internally with the query time limit set to zero. Therefore, it proceeds through the full validation, optimization, and creation of an access plan and then reports the results in a graphical display. Note: When choosing Visual Explain Only, Visual Explain may not be able to explain some complex queries such as hash join, temp join results, etc. In this cases, users have to choose Run and Explain for the SQL statements to see the graphical representation. 10.4.3 Run and Explain The Run and Explain option (Ctrl+U or the green toolbar icon) also submits the query request to the optimizer, and provides a visual explanation of the SQL statement and the access plan that will be used when executing the statement. It provides a detailed analysis of the results through a series of attributes and values associated with each of the icons. Figure 10-3 SQL Script Center Run options Figure 10-4 SQL Script Center Visual Explain options Figure 10-5 Visual Explain access 308 Advanced Functions and Administration on DB2 Universal Database for iSeries However, it does not set the query time limit to zero and, therefore, continues with the execution of the query. This leads to the display of a results window in addition to the Visual Explain graphics. 10.5 Navigating Visual Explain The Visual Explain graphics window (Figure 10-6) is presented in two parts. The left-hand side of the display is called the Query Implementation Graph. This is the graphical representation of the implementation of the SQL statement and the methods used to access the database. The arrows indicate the order of the steps. Each node of the graph has an icon that represents an operation or values returned from an operation. The right-hand side of the display has the Query Attributes and Values. The display corresponds to the object that has been selected on the graph. Initially, the query attributes and values correspond to the final results icon. 
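For the walkthrough that follows, it may help to keep a concrete statement in mind. The short script below is only a sketch of what might be entered in the Run SQL Script window described in 10.4.1; the library, table, and column names are invented for illustration. Each statement ends with the semi-colon delimiter, and either of the two Visual Explain icons (Ctrl+E or Ctrl+U) can then be used against the statement.

-- Sample Run SQL Script content (illustrative names only).
-- Explain the statement with Ctrl+E, or run and explain it with Ctrl+U.
SELECT CUSTNO, ORDTOTAL
  FROM MYLIB.ORDERS
  WHERE ORDTOTAL > 1000;

SELECT CUSTNO, SUM(ORDTOTAL)
  FROM MYLIB.ORDERS
  GROUP BY CUSTNO;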
The vertical bar that separates the two sides is adjustable. Each side has its own window and is scrollable. Figure 10-6 Visual Explain Query Implementation Graph and Query Attributes and Values Notes:  Visual Explain may show a representation that is different from the job or environment where the actual statement was run since it may be explained in an environment that has different work management settings.  If the query is implemented with multiple steps (that is, joined into a temporary file, with grouping performed over it), the Visual Explain Only option cannot provide a valid explanation of the SQL statement. In this case, you must use the Run and Explain option. Chapter 10. Visual Explain 309 The default settings cause the display to be presented with the final result icon (a checkered flag) on the left of the display. Each of the icons on the display has a description and the estimated number of rows to be used as input for each stage of the implementation. Clicking any of the icons causes the Query Attributes and Values display to change and present the details that are known to the query for that part of the implementation. You may find it helpful to adjust the display to see more of the attributes and values. Query attributes and values are discussed further in 10.5.5, “Visual Explain query attributes and values” on page 315. When you right-click any of the icons on the display, an action menu is displayed. The action menu has options to assist with query information and can provide a short cut to table information to be shown in a separate window. More details are shown in 10.5.2, “Action menu items” on page 310. The following action menu items may be found selectively on different icons:  Table Description: Displays table information returned by Display File Description (DSPFD).  Index Description: Displays index information returned by DSPFD.  Create Index: Creates a permanent index on the iSeries server.  Table Properties: Displays object properties.  Index Properties: Displays object properties.  Display Query Environment: Displays environment settings used during the processing of this query.  Additional fly-over panels: These exist for many of the icons. By moving the mouse pointer over the icon, a window appears with summary information on the specific operation. See Figure 10-7. Figure 10-7 Table scan fly-over panel The Visual Explain toolbar (Figure 10-8) helps you navigate the displays. The first four icons (from left to right) help you control the sizing of the display. The left-most icon scales the graphics to fit the main window. For many query implementations, this leaves the graphical display too small to be of value. The next two icons allow you to zoom in and out of the graphic image. The fourth icon (Overview) creates an additional window Figure 10-9 that shows the Visual Explain graphic on a reduced scale. This window has a highlighted area, which represents the part of the image that is currently displayed in the main window. Figure 10-8 Visual Explain toolbar 310 Advanced Functions and Administration on DB2 Universal Database for iSeries In the Overview window (Figure 10-9), you can move the cursor into this highlighted area that is shown in the main window. The mouse pointer changes so you can drag the highlighted area to change the section of the overall diagram that is shown in the main window. 
The default schematic shows the query with the result on the left, working across the display from right to left, to allow you to start at the result and work back. The remaining four icons on the Visual Explain toolbar allow you to rotate the query implementation image. The icons are:  Starting from the right, leading to the result on the left (default view)  Starting from the left, leading to the result on the right  Starting at the bottom, leading to the result at the top  Starting from the top, leading to the result at the bottom Try these icons to see which style of presentation you prefer. Starting in V5R1, a frame at the bottom of the main Visual Explain window was added. In this frame, you can see two tabs. The Statement Text tab shows the analyzed SQL statement. Also in V5R1, when Visual Explain is used, it activates the Include Debug Messages in Job Log option and conveniently presents those messages under the Optimizer Messages tab. 10.5.1 Menu options The menu options above the toolbar icons are File, View, Actions, and Help. The File option allows you to close the window. Starting on V5R1, the ability to either print or save the Visual Explain output as an SQL Performance Monitor file was added. The View options generally replicate the toolbar icons. The additional options are:  Icon spacing (horizontal or vertical) changes the size of the arrows between the icons.  Arrow labels allow you to show/hide the estimated number of rows that the query is processing at each stage of the implementation.  Icon labels allow you to show/hide the description of the icons.  Highlight expensive icons (new in V5R1) by number of returned rows.  Highlight advised indexes (new in V5R1). The Actions menu item replicates the features that are available on the display. 10.5.2 Action menu items When you right-click a query implementation icon, a menu appears that offers further options. These options may include one of more of the following items. Table Description The Table Description menu item (Figure 10-10) takes you into the graphical equivalent of the Display File Description (DSPFD) command. From here, you can find out more information about the file. The description has several tabs to select to find further information. A limited number of changes can be made from the different tab windows. Figure 10-9 Visual Explain Overview window Chapter 10. Visual Explain 311 Figure 10-10 Table Description Table Properties The Table Properties display (Figure 10-11) shows a list of the columns and their attributes from the table icons. A limited number of changes are allowed from the window. Figure 10-11 Table Properties 312 Advanced Functions and Administration on DB2 Universal Database for iSeries Index Description The Index Description attributes can be accessed to obtain further information about the index. Several changes are allowed to an index from these windows, including access path maintenance settings. The Index Description display is shown in Figure 10-12. Figure 10-12 Index Description Index Properties The Index Properties window (Figure 10-13) shows the columns that exist in the table. A sequential number is placed next to the columns that form the index, with an indication of whether the index is ascending or descending. The display also shows the type of index. Figure 10-13 Index Properties Chapter 10. 
Visual Explain 313 Create Index From the temporary index icon, the Create Index menu item takes you to a dialogue box where the attributes of the temporary index have been completed (Figure 10-14). Simply click OK to create a permanent index that mirrors the temporary index created by the query. Figure 10-14 New Index on Table display You need to enter an index name. The type of index is assumed to be binary radix with non-unique keys. Note: The Create Index menu item is available from any icon where an index is advised (for example, table scan, key positioning, key selection) in addition to the temp index icon. This is one of the user-friendly features of Visual Explain, which gives you the ability to easily create an index that the optimizer has suggested. 10.5.3 Controlling diagram level of detail Starting on V5R1, users can select how much detail they want to see on the Visual Explain graphs. The VISUAL_EXPLAIN_DIAGRAM row on the QAQQINI file lets you change the level of detail in Visual Explain. When it is set to *BASIC or *DEFAULT, it shows only the icons directly related to the query. When it is set to *DETAIL, it also shows the icons that are indirectly related to the query, such as table scans performed to build temporary indexes. Figure 10-15 shows these two versions of explanation for the same sample query. Most users will be satisfied with the *BASIC diagram while others, with more performance tuning experience, may prefer the *DETAIL diagram. 314 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 10-15 Basic and detailed Visual Explain comparison In Figure 10-15, the table scan on the upper right of the detailed graph is part of the creation of the temporary index. Some other differences between *BASIC and *DETAIL are:  If the single table query uses key positioning, key selection, and table scan, *DETAIL would show an icon for key positioning, key selection, and table scan. *BASIC would show only one icon – key positioning.  If the single table query uses key positioning, *DETAIL shows two icons – key positioning and table probe. *BASIC would show only the key positioning icon. 10.5.4 Displaying the query environment The query environment is available as a fast path from the Final Results icon and shows the work management environment (Figure 10-16) where the query was executed. This information can also be obtained from the Query Attributes and Values displays. Basic Visual Explain Example Detailed Visual Explain Example Chapter 10. Visual Explain 315 Figure 10-16 Environment 10.5.5 Visual Explain query attributes and values The query attributes and values show further information about the optimizer implementation of the query. If you select an icon from the Query Implementation graph, you obtain information about that icon, as well as that part of the query implementation. We selected a few of the query implementation icons to show you the query attributes and values. This way, you can see exactly how much information Visual Explain collects. Prior to Visual Explain, the information was often available, but never in one place. Table name, base table name, index name This section shows the name and library of the table being selected (Figure 10-17). If the table name is a long name (MASTERCUSTOMER), the name of the table being queried and the member of the table will be the short name (MASTE00001). The long name is in a separate line titled “Long Name of the Table being queried”. 
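As a small illustration of the long-name behavior just described, the sketch below creates a table with a long SQL name; DB2 UDB for iSeries also assigns it a 10-character system name of the pattern shown above (for example, MASTE00001), and that short name is what appears as the name of the table being queried. The library and column definitions are invented, and the SYSTEM_TABLE_NAME catalog column used to look up the generated name is an assumption that should be verified for your release.

-- A long SQL table name; the system also assigns a 10-character system name
-- (for example MASTE00001), which is the short name Visual Explain reports.
CREATE TABLE MYLIB.MASTERCUSTOMER (
  CUSTOMER_NUMBER CHAR(5) NOT NULL,
  CUSTOMER_NAME   CHAR(30)
);

-- Look up the generated system name (catalog column name assumed):
SELECT TABLE_NAME, SYSTEM_TABLE_NAME
  FROM QSYS2.SYSTABLES
  WHERE TABLE_SCHEMA = 'MYLIB'
    AND TABLE_NAME = 'MASTERCUSTOMER';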
Figure 10-17 Table name Estimated processing time and table info The estimated processing time (Figure 10-18) shows the time the optimizer expects to take from this part of the query. Figure 10-18 Estimated processing time 316 Advanced Functions and Administration on DB2 Universal Database for iSeries Estimated rows selected and query join info The estimated rows selected (Figure 10-19) shows the number of rows the optimizer expects to output from this part of the query. If the query is only explained, it shows an estimate of the number of rows. If it is run and explained, it actually shows the number of rows selected. It also shows whether the query is CPU or I/O bound, which is information that was not accessible prior to Visual Explain. Figure 10-19 Estimated rows selected Queries can be very CPU-intensive or I/O-intensive. When a query’s constraint resource is the CPU, it is called CPU bound. When a query’s constraint resource is the I/O, it is called I/O bound. A query that is either CPU or I/O bound gives you the opportunity to review the query attributes being used when the query was processing. If SMP is installed on a multi-processor system, you should review the DEGREE parameter to ensure that you are using the systems resources effectively. Information about the index scan performed This display shown in Figure 10-20 provides the essentials about the index that was used for the query, including the reason for using the index, how the index is being used, and static index attributes. It also specifies the access method or methods used such as Index Scan - Key positioning, Index Scan - Key Selection, and Index Only Access. To find the description of the different reason codes, refer to the manual DB2 UDB for iSeries Database Performance and Query Optimization. Figure 10-20 Index scan SMP parallel information The SMP information (Figure 10-21) shows the degree of parallelism that occurred on this particular step. It may appear for more than one icon, because multiple steps can be processed with differing degrees of parallelism. The display also shows whether either parallel pre-fetch or parallel pre-load was used as part of the parallel processing. This information is only relevant when the DB2 SMP licensed feature is installed. The parallel degree requested is the number of parallel tasks that the optimizer used. This is a user setting defined with CHGQRYA, but the optimizer adjusts it based on the system resources. Chapter 10. Visual Explain 317 Figure 10-21 SMP parallel information Index advised information The Index advised section (Figure 10-22) tells you whether the query optimizer is advising the creation of a permanent index. If an index is being advised, the number and names of the columns to create the index are suggested. This is the same information that is returned by the CPI432F optimizer message. If the Highlight Index Advised option is set, advised index information, like base table name, library, and involved columns, will be easily identifiable, as shown in the Figure 10-22. Figure 10-22 Index advised Note that it is possible for the query optimizer to not use the suggested index, if created. This suggestion is generated if the optimizer determines that a new index might improve the performance of the selected data by 1 microsecond. Information about temporary index created This display provides information about the creation of a temporary index as part of the query optimizer implementation (Figure 10-23). 
The index created is reusable and specifies if a temporary index creation is allowing the associated ODP to be used. If the key column field names of the index are missing, this implies that derived fields were used. Figure 10-23 Temporary index Additional information about SQL statement The display in Figure 10-24 shows information about the SQL environment that was used when the statement was captured. The SQL environment parameters can impact query performance. Many of these settings are taken from the ODBC/JDBC driver settings. The Statement is Explainable specifies if the SQL statement can be explained by the Visual Explain tool. In V4R5, not all statements are explainable. In this section, you will find the SQL statement if you selected the Final Select icon. 318 Advanced Functions and Administration on DB2 Universal Database for iSeries Figure 10-24 Additional information Implementation summary for SQL statement The Implementation summary for SQL statement (Figure 10-25) provides information about the type of SQL statement being processed. It also identifies functions that have a particular influence on the optimizer. It specifies the values of the variables used on the SQL statement (Host Variable Values). If the SQL Statement has an Order By, Group By, or a Join, it specifies which implementation was used. In the example in Figure 10-25, the SQL statement did not have an Order By, Group By, or join operation. Figure 10-25 Implementation summary 10.6 Using Visual Explain with Database Monitor data Database Monitor data is query information that has been recorded by one of the DB2 UDB for iSeries performance monitors into a database table that can be analyzed later. Multiple Database Performance Monitors may run on the iSeries at the same time. They can either record information for individual jobs or for the entire system. Each one is individually named and controlled. Any given job can be monitored by a maximum of one system monitor and one job monitor. The Database Performance Monitor can be started from Operations Navigator or with a CL command. With Operations Navigator, the SQL Performance Monitors component is used to collect Database Monitor data. If you want to use Visual Explain with the data collected with an SQL Performance Monitor, then you must choose the detailed monitor collection when setting up the Database Performance Monitor in Operations Navigator. Chapter 10. Visual Explain 319 The Start Database Monitor (STRDBMON) or Start Performance Monitor (STRPFRMON) (with STRDBMON(*YES)) CL commands can also be used to collect Database Performance Monitor data. If you intend to use Visual Explain on the Database Monitor data collected with these CL commands, the data must be imported into Operations Navigator as detailed data. See 7.6, “SQL Performance Monitors” on page 220, for a detailed explanation on how to use SQL Performance Monitor and how to import DBMON data into Operations Navigator. Using Visual Explain Click Operations Navigator-> Database-> SQL Performance Monitors to obtain a list of the SQL Performance Monitors that are currently on the system. Right-click the Performance Monitor, and select List Explainable Statements. An “explainable” statement (Figure 10-26) is an SQL statement that can be explained by Visual Explain. Because Visual Explain does not process all SQL statements, it is possible that some statements will not be selected. Figure 10-26 SQL explainable statements The explainable SQL statements that have been optimized by the job are now listed. 
If you have been monitoring an SQL Script window, these will be the SQL statements that were entered. To use Visual Explain on any of the statements, select the statement from the display. The full SQL statement appears in the lower part of the display for verification. Click Run Visual Explain (Figure 10-26) to analyze the statement, and prepare a graphical representation of the query. Note: Query optimizer information is only generated for an SQL statement or query request when an ODP is created. When an SQL or query request is implemented with a Reusable ODP, then the query optimizer is not invoked. Therefore, there will be no feedback from the query optimizer in terms of monitor data or even debug messages and the statement will not be explainable in Visual Explain. The only technique for analyzing the implementation of a statement in Reusable ODP mode is to look for an earlier execution of that statement when an ODP was created for that statement. 320 Advanced Functions and Administration on DB2 Universal Database for iSeries Exit the Visual Explain window and the Explainable Statements window when you have completed your analysis. You may either retain the performance data or remove it from the system at this time, depending on your requirements. 10.7 Non-SQL interface considerations Obviously, the Database Performance Monitor can capture implementation information for any SQL-based interface. Therefore, any SQL-based request can be analyzed with Visual Explain. SQL-based interfaces range from Embedded SQL to Query Manager reports to ODBC and JDBC. Some query interfaces on the AS/400 and iSeries servers are not SQL-based and, therefore, are not supported by Visual Explain. The interfaces not supported by Visual Explain include:  Native database access from a high level language, such as Cobol, RPG, etc.  Query  OPNQRYF command  OS/400 Create Query API (QQQQRY) The query optimizer creates an access plan for all queries that run on the iSeries server. Most queries use the SQL interface, and generate an SQL statement, either directly (SQL Script Window, STRSQL command, SQL in high-level language (HLL) programs) or indirectly (Query Monitor/400). Other queries do not generate identifiable SQL statements (Query, OPNQRYF command) and cannot be used with Visual Explain via the SQL Performance Monitor. In this instance, the name SQL, as part of the SQL Performance Monitor, is significant. The statements that generate SQL and can be used with the Visual Explain via the SQL Performance Monitor include:  SQL statements from the SQL Script Center  SQL statements from the Start SQL (STRSQL) command  SQL statements processed by the Run SQL Statement (RUNSQLSTM) command  SQL statements embedded into a high level language program, such as Cobol, Java, or RPG  SQL statements processed through an ODBC or JDBC interface The statements that do not generate SQL and, therefore, cannot be used with Visual Explain via the SQL Performance Monitor include:  Native database access from a high level language, for example, Cobol, RPG, etc.  Query  Open Query File (OPNQRYF) command  OS/400 Create Query API (QQQQRY) 10.7.1 Query/400 and Visual Explain Query/400, now renamed Query, is not supported by Visual Explain even though optimizer debug messages can be used with Query/400 queries since it does not generate SQL. Query/400 queries are often blamed for poor performance and sometimes even banned from execution during daylight hours. 
It is for this reason that some guidance has been provided to bring Query/400 queries into the scope of Visual Explain. Chapter 10. Visual Explain 321 There is no direct Query/400 to SQL command. However, the Start Query Monitor Query (STRQMQRY) CL command will run a query definition (object type *QRYDFN) as an SQL statement, as long as the ALWQRYDFN parameter is set to either *YES or *ONLY. If you are accessing a multi-member file, performance data is not collected for the second and subsequent members. Instead, you need to use an SQL supported interface, such as an alias for the members. To use this SQL statement with Visual Explain, either start an SQL Performance Monitor for this job in advance of issuing the STRQMQRY command, or use the native STRDBMON CL command to collect data for the job. See 7.6.1, “Starting the SQL Performance Monitor” on page 222. 10.7.2 The Visual Explain icons The icons that you may encounter on the Visual Explain query implementation chart are shown here. The Final Result icon displays the original SQL statement and summary information of how the query was implemented. It is the last icon on the chart. The Table Scan icon indicates that all rows in the table were paged in, and selection criteria was applied against each row. Only those rows meeting the selection criteria were retrieved. To obtain the result in a particular sequence, you must specify the ORDER BY clause. The Parallel Table Scan icon indicates that a table scan access method was used and multiple tasks were used to fill the rows in parallel. The table was partitioned, and each task was given a portion of the table to use. The Skip Sequential Table Scan icon indicates that a bitmap was used to determine which rows would be selected. No CPU processing was done on non-selected rows, and I/O was minimized by bringing in only those pages that contained rows to be selected. This icon usually is related to the Dynamic Bitmap or Bitmap Merge icons. The Skip Sequential Parallel Table Scan icon indicates that a skip sequential table scan access method was used and multiple tasks were used to fill the rows in parallel. The table was partitioned, and each task was given a portion of the table to use. The Derived Column Selection icon indicates that a column in the row selected had to be mapped or derived before selection criteria could be applied against the row. Derived column selection is the slowest selection method. The Parallel Derived Column Selection icon indicates that derived field selection was performed, and the processing was accomplished using multiple tasks. The table was partitioned, and each task was given a portion of the table to use. The Index Key Positioning icon indicates that only entries of the index that match a specified range of key values were “paged in”. The range of key values was determined by the selection criteria whose predicates matched the key columns of the index. Only selected key entries were used to select rows from the corresponding table data. 322 Advanced Functions and Administration on DB2 Universal Database for iSeries The Parallel Index Key Positioning icon indicates that multiple tasks were used to perform the key positioning in parallel. The range of key values was determined by the selection criteria, whose predicates matched the key columns of the index. Only selected key entries were used to select rows from the corresponding table data. The Index Key Selection icon indicates that all entries of the index were paged in. 
Any selection criteria, whose predicates match the key columns of the index, was applied against the index entries. Only selected key entries were used to select rows from the table data. The Parallel Index Key Selection icon indicates that multiple tasks were used to perform key selection in parallel. The table was partitioned, and each task was given a portion of the table to use. The Encoded Vector Index icon indicates that access was provided to a database file by assigning codes to distinct key values, and then representing these values in an array (vector). Because of their compact size and relative simplicity, Encoded Vector Indexes provide for faster scans. The Parallel Encoded Vector Index icon indicates that multiple tasks were used to perform the encoded vector index selection in parallel. This allows for faster scans that can be more easily processed in parallel. The Sort Sequence icon indicates that selected rows were sorted using a sort algorithm. The Grouping icon indicates that selected rows were grouped or summarized. Therefore, duplicate rows within a group were eliminated. The Nested Loop Join icon indicates that queried tables were joined together using a nested loop join implementation. Values from the primary file were joined to the secondary file by using an index whose key columns matched the specified join columns. This icon is usually after the method icons used on the underlying tables (that is, Index scan-Key selection and Index scan-Key positioning). The Hash Join icon indicates that a temporary hash table was created. The tables queried were joined together using a hash join implementation where a hash table was created for each secondary table. Therefore, matching values were hashed to the same hash table entry. The Temporary Index icon indicates that a temporary index was created, because the query either requires an index and one does not exist, or the creation of an index will improve performance of the query. The Temporary Hash Table icon indicates that a temporary hash table was created to perform hash processing. The Temporary Table icon indicates that a temporary table was required to either contain the intermediate results of the query, or the queried table could not be queried as it currently exists and a temporary table was created to replace it. Chapter 10. Visual Explain 323 10.8 SQL performance analysis using Visual Explain This section presents a brief example on SQL performance analysis using Visual Explain. A complete explanation on performance analysis is beyond the scope of this redbook, but you can find extensive information on Redpapers and workshops at: http://www-1.ibm.com/servers/eserver/iseries/library/ 10.8.1 Database performance analysis methodology There are many different methods to identify problems and tune troublesome database queries. One of the most common methods is to identify the most dominating, time-consuming queries and work on each of them individually. Another method is to leverage global information and to use this information to look for indexes that are “begging” to be created. Operations Navigator SQL Performance Monitor provides you with tools for gathering and analyze SQL performance information. Once you have SQL performance data collected, you can use the predefined queries for looking for specific queries that have large table scans or that are evidencing some lack of indexes. 
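The predefined reports cover most needs, but the same kind of question can also be asked directly with SQL against the monitor data. The following sketch assumes detailed monitor data has been collected into a file named DBMONDATA in library MYLIB (invented names) and uses column names from the database monitor file layout (QQRID, QQIDXA, QQIDXD, and so on); check them against DB2 UDB for iSeries Database Performance and Query Optimization before use.

-- Find table scan records (record ID 3000) for which the optimizer advised an index.
-- QQTLN and QQTFN identify the library and table, QQTOTR is the table row count,
-- and QQIDXD lists the advised key columns.
SELECT QQTLN, QQTFN, QQTOTR, QQIDXD
  FROM MYLIB.DBMONDATA
  WHERE QQRID = 3000
    AND QQIDXA = 'Y';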
The predefined queries themselves can be reached by right-clicking the specific SQL Performance Monitor collection and selecting Analyze Results, as shown in Figure 10-27. The Dynamic Bitmap icon indicates that a bitmap was dynamically generated from an existing index. It was then used to determine which rows were to be retrieved from the table. To improve performance, dynamic bitmaps can be used in conjunction with a table scan access method for skip sequential processing or with either index key positioning or key selection. The Bitmap Merge icon indicates that multiple bitmaps were merged or combined to form a final bitmap. The merging of the bitmaps simulates boolean logic (AND/OR selection). The DISTINCT icon indicates that duplicate rows in the result were prevented. You can specify that you do not want any duplicates by using the DISTINCT keyword, followed by the selected column names. The UNION Merge icon indicates that the results of multiple subselects were merged or combined into a single result. The Subquery Merge icon indicates that the nested SELECT was processed for each row (WHERE clause) or group of rows (HAVING clause) selected in the outer level SELECT. This is also referred to as a “correlated subquery”. The Incomplete Information icon indicates that a query could not be displayed due to incomplete information. Figure 10-27 Analyzing SQL performance results The Basic Statement Information predefined query gives you a very general idea of the queries being monitored, as well as the kind of access methods used by these queries. This report provides information on the execution time of each execution, the total execution time, advised indexes, whether a table scan or temporary index creation was used, and more. Once you detect a query or set of queries that needs further analysis, you can use a detailed query analysis tool like Visual Explain to explore them in detail. Query analysis is iterative in nature: make a change, run the job or the individual query again to see whether the change helped, and try again if it did not. You can use Visual Explain on the SQL statements contained in the collected SQL Performance Monitor data by right-clicking the specific collection and selecting List Explainable Statements from the pop-up menu. A list of explainable statements appears, and you can choose those in which you are interested. As an example, Figure 10-28 shows a Visual Explain diagram that reveals that this query is performing a table scan and is not using parallelism. You can see that the SQL statement does not specify an OPTIMIZE FOR n ROWS clause, and the query degree is set to *NONE. Figure 10-28 Analyzing a simple query: First iteration (no OPTIMIZE FOR n ROWS clause is used; the parallel degree, or query degree, is set to *NONE) Based on the information provided by Visual Explain, you change the statement to include an OPTIMIZE FOR ALL ROWS clause, and you change the parallel degree to *OPTIMIZE. See Figure 10-29. Figure 10-29 Analyzing a simple query: Second iteration (the statement now includes an OPTIMIZE FOR ALL ROWS clause affecting the optimizer’s access plan; parallelism is used, with the optimizer asking for 10 parallel threads; an index is advised using the fields YEAR, MONTH, and RETURNFLAG) You can see the effect that the changes had on the analyzed query.
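Expressed in SQL terms, the change made between the first and second iterations might look like the following sketch. The query itself is invented (the actual statement appears only in the figures), but the OPTIMIZE FOR ALL ROWS clause and the *OPTIMIZE query degree are the two changes described above, and the column names match the advised index fields reported for Figure 10-29.

-- Second iteration: the OPTIMIZE FOR ALL ROWS clause influences the access plan
-- the optimizer chooses (table and column names are illustrative).
SELECT YEAR, MONTH, RETURNFLAG, SUM(QUANTITY)
  FROM MYLIB.SALESDATA
  GROUP BY YEAR, MONTH, RETURNFLAG
  OPTIMIZE FOR ALL ROWS;

-- The parallel degree for the job was changed with the CL command
--   CHGQRYA DEGREE(*OPTIMIZE)
-- which, as noted in 10.5.5, only yields CPU parallelism when the DB2 SMP
-- licensed feature is installed.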
You can go further and create the suggested index by right-clicking the Table Scan icon and selecting the Create Index option. A New Index on Table window appears (Figure 10-30), where the suggested fields are selected for you. You have to provide a name and library for the new index. You can also change the order of the fields and add new fields to the index if you consider that necessary. Figure 10-30 New Index on Table window Now DB2 UDB for iSeries uses the suggested index, as shown in Figure 10-31. Note that it is possible that DB2 UDB for iSeries may not use the suggested index. Figure 10-31 Analyzing a simple query: Third iteration Is it really that simple? Tuning SQL statements and database performance can be a very demanding task, but with the new tools introduced in V4R5 and improved in V5R1, such as Visual Explain and SQL Performance Monitor predefined reports, it is becoming more accessible. Performance tuning, particularly when dealing with database operations, is an iterative process but the availability and knowledge of powerful tools allow the performance analyst to find a solution quickly. Knowledge and judicious usage of the OS/400 Database Monitor tool, its predefined queries, and particularly Visual Explain reduces significantly the time and effort required by performance analysts. Appendix A. Order Entry application: Detailed flow This appendix provides detailed flow charts of each of the modules included in the Order Entry application scenario. Program flow for the Insert Order Header program Figure A-1 shows a functional description of the various components of this application scenario. The DB2 UDB for iSeries functional highlights in this program include: • Referential integrity constraints for the Order Header table • Insert trigger on the Order Header file Figure A-1 Insert Order Header program flow Program description for the Insert Order Header program The idea of this program is to show how to use the following new database functions in a real application: • Referential integrity: When a record is inserted in the Order Header file, the system checks for an existing customer in the Customer table. • Database trigger: Before the insert operation is completed, the database manager activates a program that can verify if the sales representative is assigned to the customer and log any violation attempt. • Program description: The sales person periodically calls the customer over the phone and places an order. The sales person enters the customer number, the order and delivery date, and other general information. The application does not automatically generate an order number. For the sake of simplicity, this is entered by the sales representative. A more detailed flow of this program is described as follows: 1. The program inserts a row into the Order Header table.
2. If the database referential constraint enforcement detects a customer number not defined in the Customer table, a program message is sent explaining that the customer number is invalid. A correct customer number must be entered. 3. The customer name is displayed at the terminal. 4. A row is inserted into the Order Header table. 5. Since an insert trigger is defined on this table, a program is automatically triggered by the database manager. 6. The trigger program checks if the current user profile is associated to the customer in the Sales/Customer table. If there is no match, the program writes an audit trail entry to an audit table. 7. If the insert is successful, the program returns a positive return code to the main program, which calls the Insert Order Detail program. Program flow for the Insert Order Detail program DB2 Universal Database for iSeries functional highlights in this program include: • Referential integrity constraints for the Order Detail table • Referential integrity constraints for the Stock table (on remote system) • Two-phase Commit and DRDA Level 2 • Remote stored procedure The program flow for Insert Order Detail is shown in Figure A-2. Figure A-2 Insert Order Detail program flow Program description for Insert Order Detail program The idea of this program is to show how to use the following new database functions in a real application: • Referential integrity: When a record is inserted into the Order Detail table for a new order item, the system checks for a matching order number in the Order Header table. • Two-phase commit with DRDA, Level 2: This procedure inserts a record in a local file and updates the remote inventory file (STOCK file). At the end of this process, you want to release the locks on the inventory record and the transaction is committed. The two-phase commit support guarantees the integrity of this transaction. • Stored procedure: To update the remote inventory file, this program calls a remote stored procedure. The stored procedure checks the availability of the product. If the product has low inventory levels, the stored procedure looks for an alternative and sends the new product code and description back to the calling application. The selected product information is displayed at the terminal and the user has the choice of accepting or rejecting the substitute item. • Program description: This program can: a. Get the customer number and the order number from the Insert Order Header program. b. Get the product number and quantity for every single item from the display. c. Issue a SET CONNECTION statement to the remote system. All the necessary CONNECT statements are performed by the main program. d. Call a stored procedure at the remote system to: • Look for the product number in the remote inventory. • Update the Stock table, reducing the quantity on hand if the quantity available is sufficient. • Look for an alternative product if the requested one is out of stock, and update the corresponding quantity. • Pass the product information back to the calling program.
e. The stored procedure then passes control back to the calling program. f. At this point, the program sets a connection to the local system and if the user accepts the record, the new item is inserted in the Order Detail file, and the whole transaction is committed. If the user rejects the item, a rollback brings the stock quantity on hand back to its original value. g. A rollback is also performed if referential integrity checking on the Order Detail table fails. This happens if you insert the record with the wrong order number. h. The user also has the option of cancelling the whole order. In this case, a Cancel Order program is called. i. The program keeps a work field with the final totals of the whole order. When the entire order is completed, this value is passed to the next program – Finalize Order. Program flow for the Finalize Order program The DB2 Universal Database for iSeries functional highlights in this program include the trigger on the Update Order Header row. See Figure A-3 for the program flow. Figure A-3 Finalize Order program flow Program description for the Finalize Order program The idea of this program is to show how to use the following new database functions in a real application: • Database triggers: In this scenario, a program is triggered after the order header row is updated with the total amount of the order. This program prints the invoice at the branch office as soon as the order has been completed. The program also updates the credit limit on the customer file. If the current balance exceeds 90% of the credit limit, a “warning” fax is automatically sent to the customer by a trigger program to allow the customer to take the appropriate actions (for example, applying for a credit limit increase, based on the credit history of the customer). • Program description: This program can: a. Get the customer number and the order number from the previous process along with the order grand total. b. Check the customer record. If the credit limit is exceeded, the order is cancelled. To delete the order, the detail is scanned, and the inventory quantity that is on hand for each item is updated by adding the amount reserved for this order. When this process is complete, the order header is deleted, and all the order detail disappears as a result of the *CASCADE constraint on the order header file. The entire transaction is finally committed. Again, the two-phase commit support ensures that the local database and the remote stock file are kept synchronized. c. If the credit limit is OK, this program updates the following fields: • The total amount in the customer file to keep track of the customer balance • The total amount in the Sales Representative/Customer table to reflect the sales person's turnover with the customer • The total amount in the Order Header table items at invoice time d. Because an update trigger is specified on the Order Header table, an invoice program is started immediately. The invoice for the completed order is printed in the branch office.
For more information about triggers, see Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503. e. After the preceding updates have been done, COMMIT is executed. f. If there are more orders, the Insert Order Header program is started again. g. If there are no more orders, this Order Entry application has ended. 336 Advanced Functions and Administration on DB2 Universal Database for iSeries © Copyright IBM Corp. 1994, 1997, 2000, 2001 337 Appendix B. Referential integrity: Error handling example This appendix provides an example of a COBOL program that illustrates a coding example of the error handling when you use referential integrity. In the following example, you can see a COBOL SQL implementation of this a procedure. The operation that activates the trigger and the referential integrity check is highlighted in bold. Immediately after the SQL insert, the application checks the SQLCODE for errors and reports the correct message to the user. B 338 Advanced Functions and Administration on DB2 Universal Database for iSeries Program code: Order Header entry program – T4249CINS PROCESS OPTIONS. IDENTIFICATION DIVISION. PROGRAM-ID. T4249CINS. AUTHOR. PROGRAMMER NAME. INSTALLATION. ITSC LABORATORY. DATE-WRITTEN. APRIL 2001. DATE-COMPILED. ENVIRONMENT DIVISION. CONFIGURATION SECTION. SOURCE-COMPUTER. IBM-AS400. OBJECT-COMPUTER. IBM-AS400. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT T4249OHRD ASSIGN TO WORKSTATION-T4249OHRD ORGANIZATION IS TRANSACTION FILE STATUS IS STATUS-ERR. ********************************************************** DATA DIVISION. FILE SECTION. FD T4249OHRD LABEL RECORD ARE STANDARD. 01 DSP01. COPY DDS-ALL-FORMATS OF T4249OHRD. *********************************************************** WORKING-STORAGE SECTION. 01 DSPFIL-INDICS. COPY DDS-ALL-FORMATS-INDIC OF T4249OHRD. 77 IND-ON PIC 1 VALUE B"1". 77 IND-OFF PIC 1 VALUE B"0". 01 JOBA-AREA. 03 BYTES-RTN PIC 9(8) BINARY VALUE 0. 03 BYTES-AVAIL PIC 9(8) BINARY VALUE 0. 03 JOBNAME PIC X(10). 03 USERNAME PIC X(10). 03 JOBNUMBER PIC X(6). *=================================================* * Parameters for retrieve job atributes - USERID * *=================================================* 01 RTV-JOBA. 03 RTV-JOB-VAR PIC X(50). 03 RTV-JOB-LEN PIC 9(8) BINARY VALUE 50. 03 RTV-JOB-FMT PIC X(8) VALUE "JOBI0400". 03 RTV-JOB-NAME PIC X(26) VALUE "*". 03 RTV-JOB-ID PIC X(16) VALUE " ". 01 STATUS-ERR PIC XX. 01 ORDNUM PIC X(5). 01 CUSTOMER PIC X(5). 01 ODATE PIC X(10). 01 ODLY PIC X(10). Appendix B. Referential integrity: Error handling example 339 01 OTOTAL PIC S9(9)V9(2) COMP-3. 01 INSERTOK PIC 9. EXEC SQL INCLUDE SQLCA END-EXEC. LINKAGE SECTION. 01 CUSTNBR PIC X(5). 01 ORDNBR PIC X(5). 01 RTCODE PIC X. *========================================================* *This program has three output parameters: Customer numb.* *Order number and Return code. The return code can be: * *Rtcode = 0 - OK Rtcode = 2 - F3 * *========================================================* PROCEDURE DIVISION USING CUSTNBR, ORDNBR, RTCODE. DECLARATIVES. TRANSACTION-ERROR SECTION. USE AFTER STANDARD ERROR PROCEDURE T4249OHRD. WORK-STATION-ERROR-HANDLER. GOBACK. END DECLARATIVES. MAIN-LINE SECTION. OPEN I-O T4249OHRD. PERFORM INITIAZ-HEADER. *=============================================* * Call API to get job atributes and move the * * output parameter into the work area * *=============================================* CALL "QUSRJOBI" USING RTV-JOB-VAR, RTV-JOB-LEN, RTV-JOB-FMT, RTV-JOB-NAME, RTV-JOB-ID. MOVE RTV-JOB-VAR TO JOBA-AREA. 
MOVE "0" TO RTCODE. MOVE 0 TO INSERTOK. MOVE IND-OFF TO IN15 IN ORDER-I-INDIC. WRITE DSP01 FORMAT IS "EXITLINE". PERFORM ORDER-ENTRY UNTIL IN15 IN ORDER-I-INDIC EQUAL IND-ON OR INSERTOK EQUAL 1. IF IN15 IN ORDER-I-INDIC = IND-ON THEN MOVE "2" TO RTCODE ELSE IF INSERTOK = 1 THEN MOVE "0" TO RTCODE. *===============================================================* *We are not closing the file, because we are overlapping screens* *===============================================================* * CLOSE T4249OHRD. GOBACK. 340 Advanced Functions and Administration on DB2 Universal Database for iSeries ORDER-ENTRY. PERFORM WRITE-READ-ORDER. MOVE ORHNBR OF ORDER-I TO ORDNUM. MOVE CUSNBR OF ORDER-I TO CUSTOMER. MOVE ORHDTE OF ORDER-I TO ODATE. MOVE ORHDLY OF ORDER-I TO ODLY. MOVE ZEROS TO OTOTAL. MOVE CUSTOMER TO CUSTNBR. MOVE ORDNUM TO ORDNBR. IF IN15 IN ORDER-I-INDIC NOT EQUAL IND-ON THEN * * The programs inserts an order in ORDERHDR file. * EXEC SQL INSERT INTO ORDENTL/ORDERHDR VALUES(:ORDNUM, :CUSTOMER, :ODATE, :ODLY, :OTOTAL, :USERNAME) :rk.4:erk. END-EXEC IF SQLCODE EQUAL 0 THEN MOVE 1 TO INSERTOK ELSE *==========================================================* * After the insert operation, you should monitor the * * following SQLCODEs: * * SQL0530(-530) - Referential Integrity violation * * SQL0803(-803) - Order Header already exists * * SQL0443(-443) - Trigger program signalled an exception * *==========================================================* IF SQLCODE EQUAL -530 THEN MOVE IND-ON TO IN98 OF ORDER-O-INDIC MOVE SPACES TO ORHNBR OF ORDER-O MOVE CUSTOMER TO CUSNBR OF ORDER-O ELSE IF SQLCODE EQUAL -803 THEN MOVE IND-ON TO IN99 OF ORDER-O-INDIC ELSE MOVE IND-ON TO IN97 OF ORDER-O-INDIC. ************************************************************* INITIAZ-HEADER. MOVE SPACES TO ORHNBR OF ORDER-O. MOVE SPACES TO CUSNBR OF ORDER-O. MOVE "0001-01-01" TO ORHDTE OF ORDER-O. MOVE "0001-01-01" TO ORHDLY OF ORDER-O. WRITE-READ-ORDER. WRITE DSP01 FORMAT IS "ORDER" INDICATORS ARE ORDER-O-INDIC. MOVE IND-OFF TO ORDER-I-INDIC ORDER-O-INDIC. READ T4249OHRD RECORD INDICATORS ARE ORDER-I-INDIC. © Copyright IBM Corp. 1994, 1997, 2000, 2001 341 Appendix C. Additional material This redbook also contains additional material that is available on the Web. See the following sections for instructions on using or downloading the Web material. Locating the Web material The Web material associated with this redbook is available in softcopy on the Internet from the IBM Redbooks Web server. Point your Web browser to: ftp://www.redbooks.ibm.com/redbooks/SG244249 Alternatively, you can go to the IBM Redbooks Web site at: ibm.com/redbooks Select the Additional materials and open the directory that corresponds with the redbook form number, SG244249. 
Using the Web material The additional Web material that accompanies this redbook includes the following files: File name Description dbadvfun.exe iSeries and client source code image readme.txt Readme documentation System requirements for downloading the Web material The following list contains the most important requirements:  iSeries requirements – OS/400 Version 5 Release 1 – 5722-ST1 - DB2 Query Manager and SQL Development kit – 5722-SS1 - Host Servers C 342 Advanced Functions and Administration on DB2 Universal Database for iSeries  PC software – Windows 95/98, Windows NT, or Windows 2000 – Client Access Express for Windows – PC5250 Emulation How to use the Web material Create a subdirectory (folder) on your workstation, and unzip the contents of the Web material zip file into this folder. The readme.txt contains the instructions for restoring the iSeries libraries and directories, as well as installing the PC clients and run-time notes. © Copyright IBM Corp. 1994, 1997, 2000, 2001 343 Related publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook. IBM Redbooks For information on ordering these publications, see “How to get IBM Redbooks” on page 344.  DB2/400: Mastering Data Warehousing Functions, SG24-5184  AS/400 Internet Security: Implementing AS/400 Virtual Private Networks, SG24-5404  DB2 UDB for AS/400 Object Relational Support, SG24-5409  Cross-Platform DB2 Stored Procedures: Building and Debugging, SG24-5485  Managing AS/400 V4R4 with Operations Navigator, SG24-5646  Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503 The following IBM Redbooks will be available in first quarter 2002:  Managing OS/400 with Operations Navigator V5R1 Volume 1: Basic Functions, SG24-6226  Managing OS/400 with Operations Navigator V5R1 Volume 2: Advanced Functions, SG24-6227 Other resources These publications are also relevant as further information sources:  DB2 Connect Personal Edition Quick Beginning, GC09-2967  COBOL/400 User’s Guide, SC09-1812  COBOL/400 Reference, SC09-1813  ILE RPG Programmer’s Guide, SC09-2074  ILE RPG Reference, SC09-2077  DB2 UDB Application Development Guide V6, SC09-2845  AS/400 National Language Support, SC41-5101  Backup and Recovery, SC41-5304  Work Management, SC41-5306  Distributed Data Management, SC41-5307  Client Access Express for Windows, SC41-5509  ILE Concepts, SC41-5606  SQL Programming Guide, SC41-5611  SQL Reference, SC41-5612  Database Programming, SC41-5701  Distributed Database Programming, SC41-5702 344 Advanced Functions and Administration on DB2 Universal Database for iSeries  DDS Reference, SC41-5712  Control Language Programming, SC41-5721  CL Reference, SC41-5722  System API Programming, SC41-5800  SQL Call Level Interface, SC41-5806  DB2 UDB for iSeries Database Performance and Query Optimization: http://submit.boulder.ibm.com/pubs/html/as400/bld/v5r1/ic2924/index.htm Referenced Web sites These Web sites are also relevant as further information sources:  iSeries Information Center: http://www.iseries.ibm.com/infocenter  DB2 Universal Database for iSeries main page: http://www.iseries.ibm.com/db2/db2main.htm  PartnerWorld for Developer - iSeries site: http://www.iseries.ibm.com/developer  Support Line Knowledge Base: http://as400service.ibm.com/supporthome.nsf/document/10000051  Data Movement Utilities Guide and Reference: http://www-4.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/ 
document.d2w/report?fn=db2v7dmdb2dm07.htm#HDREXPOVW  iSeries Library: http://www-1.ibm.com/servers/eserver/iseries/library  IBM Learning Services: http://www-1.ibm.com/servers/eserver/iseries/education/ How to get IBM Redbooks Search for additional Redbooks or Redpieces, view, download, or order hardcopy from the Redbooks Web site: ibm.com/redbooks Also download additional materials (code samples or diskette/CD-ROM images) from this Redbooks site. Redpieces are Redbooks in progress; not all Redbooks become Redpieces and sometimes just a few chapters will be published this way. The intent is to get the information out much quicker than the formal publishing process allows. IBM Redbooks collections Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the Redbooks Web site for information about all the CD-ROMs offered, as well as updates and formats. © Copyright IBM Corp. 1994, 1997, 2000, 2001 345 Special notices References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service. Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels. IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. The following terms are trademarks of other companies: Tivoli, Manage. Anything. Anywhere.,The Power To Manage., Anything. Anywhere.,TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United States, other countries, or both. 
In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer - Tivoli A/S.

C-bus is a trademark of Corollary, Inc. in the United States and/or other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries.

PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries.

UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.

Index

Symbols
*DUW option 93 *FILE object 160 *RUW option 93 *SHRUPD 35
Numerics
01222 status 51
A
access path 5, 25, 181 activation group 92, 93 activation group ID 96 active jobs 118 add column 186 Add Relational Database Directory Entry (ADDRDBDIRE) command 110 Add Server Authentication Entry (ADDSVRAUTE) command 111 adding multiple constraints 32 referential constraint 28 relational database directory entry 110 server authentication entry 111 ADDPFCST (Add Physical File Constraint) command 28 ADDRDBDIRE (Add Relational Database Directory Entry) command 110 ADDSVRAUTE (Add Server Authentication Entry) command 111 administrative interface 124 advanced functions 6, 11 advanced journal attributes 180 Advised Index 230 alias 168, 182 ALTER TABLE SQL statement 28, 188 ALTER TABLE statement DROP clause 56 Analyze Results panel 228 analyzing SQL Performance Monitor results 228 application design 101 application example 12 application flow using DRDA-2 94 application integrity 22 application message 78 application requester (AR) 84, 110 application server (AS) 84, 93 apply journal changes 49, 53 AR (application requester) 84 AS (application server) 84 ASP (auxiliary storage pool) 167 authentication information 111 automatic recovery 96 auxiliary storage pool (ASP) 167
B
breadth cascade 43 business rules referential integrity 22 translated to physical file constraints 32
C
C ILE program referential integrity messages 52 CASCADE delete rule 35 CASCADE example 39 cascade network 28 CASCADE rule 23, 25 catalog inquiry 63 CCSID (Coded Character Set Identifier) 187, 205 Change DDM TCP/IP Attributes (CHGDDMTCPA) command 109 Change Physical File Constraint (CHGPFCST) command 53 Change Query Attributes 164, 217 Change Query Attributes (CHGQRYA) command 137 Change Server Authentication Entry (CHGSVRAUTE) command 112 check constraint 7, 24, 67, 192 application message 78 DB2 UDB for iSeries 69 defining 70 I/O message 77 integration into applications 77 management 79 state 80 tips and techniques 82 check pending 53 condition 71 defined 24 protection from 49 CHGDDMTCPA (Change DDM TCP/IP Attributes) command 109 CHGPFCST (Change Physical File Constraint) command 53 CHGQRYA (Change Query Attributes) command 137 CHGSVRAUTE (Change Server Authentication Entry) command 112 Client Access/400 6 COBOL deleting an order 103 DRDA-2 program example 102
ILE program, referential integrity messages 52 Coded Character Set Identifier (CCSID) 187, 205 coexistence for DRDA-1 and DRDA-2 95 collection 166 column 5 command 181 CHGDDMTCPA 109 STRTCPSVR SERVER(*DDM) 110 348 Advanced Functions and Administration on DB2 Universal Database for iSeries command, CL 206 Add Physical File Constraint (ADDPFCST) 28 Add Relational Database Directory Entry (ADDRDBDIRE) 110 Add Server Authentication Entry (ADDSVRAUTE) 111 ADDRDBDIRE (Add Relational Database Directory Entry) 110 ADDSVRAUTE (Add Server Authentication Entry) 111 Change Server Authentication Entry (CHGSVRAUTE) 112 Copy From Import File (CPYFRMIMPF) 126 Copy To Import File (CPYTOIMPF) 126 CRTSQLpgm 93 CRTSQLxxx 101 Print SQL Information (PRTSQLINF) 303 Remove Server Authentication Entry (RMVSVRAUTE) 112 Start Debug (STRDBG) 118 Work with Active Jobs (WRKACTJOB) 118 commit 204 commit group 204 commit mode 204 COMMIT(*NONE) 95 commitment control 43, 204 requirements 25 commitment definition 97 condition clause 70 condition clause of check constraint 75 CONNECT 88 SQL statement 84, 92 CONNECT (Type 1) 92 CONNECT (Type 2) 93 connection current 88 dormant 88 held 88 multiple handling in DRDA-2 101 preserved 7 released 88 states 88 connection DRDA 101 connection management 86 methods 88 on DB2 UDB for iSeries 87 consistency of data in multiple locations 90 constraint 181, 188, 192 commands 53 displaying information 61 domain 68 enforcement 35 management 52 prerequisites 24 referential integrity network example 32 removing 56 self-referencing 34 states 52 table 68 tips 192 types 22 unique or primary key 29 Control Center 149 copy 183 Copy From Import File (CPYFRMIMPF) command 126 Copy To Import File (CPYTOIMPF) command 126, 138 CPF502D notify message 51 CPF502E notify message 51 CPF503A notify message 51 CPF523B escape message 51 CPF523C escape message 51 CPU bound 316 CPYFRMIMPF (Copy From Import File) command 126 CPYTOIMPF (Copy To Import File) command 126, 138 create journal 177 Create Physical File (CRTPF) command 170 create SQL package 114 CREATE TABLE SQL statement 28, 160 CREATE VIEW 160, 162 creating a referential constraint 28 CRTPF (Create Physical File) command 170 CRTSQLpgm 93 CRTSQLpgm command 93 CRTSQLxxx command 101 current SQL statement 218 current state 89 cut 183 cyclic constraint 24 D data access distributed 84 distributed environment 84 methods 305 data consistency 90 Data Description Specifications (DDS) 5 data field 5 data format 127, 139 data inconsistencies 49 data load Data Definition Language example 133 file definition file example 130 data loss 49 data source translation 203, 205 data, invalid check pending 53 database administration 157 Database functions 163 database library functions 165 Database Monitor 303 Visual Explain 318 Database Navigator 8, 239 locator pane 247 map pane 249 menu options 249 system requirements and planning 240 Database Navigator map 244 creating 261 display 253 generating SQL 289 icons 269 Index 349 interface 246 table options 255 database performance analysis methodology 323 database relationship 242 database synchronization on multiple systems 96 Database task pad 246 database tuning 302 DB2 Connect 120 AS/400 port number 123 CCSID for user profile 121 over TCP/IP 120 DB2 family 4 DB2 UDB 7.2 data migration to DB2 UDB for iSeries 149 DB2 UDB for iSeries 3 advanced functions 6 check constraint 69 distributed environment 5 DRDA-2 86 Import utility 126 journaling 124 moving data to DB2 UDB 7.2 152 Operations Navigator 161 overview 4 programming 
languages 272 sample schema 8 SQL support for connection management 92 DDM (Distributed Data Management) 5 DDM server job 109, 122 DDS (Data Description Specifications) 5 debug messages 302 debug mode 220 default libraries 204 define check constraint 72 DEFINED constraint state 52 DEL (delimited ASCII file) 149 delete 183, 193 delete column 186 delete constraint 56 delete parent record example 36, 38 delete rows 183 delete rule CASCADE, SET NULL, and SET DEFAULT 35 defined 23 deleted record 170 deleting an order 103 DRDA-2 and two-phase commitment control 103 COBOL example 103 delimited ASCII file (DEL) 149 delimited import file 128 dependent file defined 23 same file as parent file 34 dependent table 68 depth cascade 43 detail rows 105 DISABLED constraint state 52 DISCONNECT 90 DISCONNECT statement 93 Display Check Pending Status (DSPCPCST) command 54 Display Database Relations (DSPDBR) command 61 Display Journal Entry Details display 45 Display Physical File Description (DSPFD) command 61 distributed data access 84 Distributed Data Management (DDM) 5 distributed database example 14 distributed database network 87 distributed environment 5 data access in 84 Distributed Relational Database Architecture (DRDA) 6, 84, 201 distributed relational database example 13 Distributed Request (DR) 86 Distributed Unit of Work (DUW) 7, 85 DLTPCT parameter 170 domain constraint 68 dormant state 89 DR (Distributed Request) 86 DRDA 6, 83, 84 application server 108 COMMIT(*NONE) 95 Distributed Unit of Work 7 initial connections 101 level 0 85 level 1 85 level 2 85 level 3 86 DRDA (Distributed Relational Database Architecture) 6, 84, 201, 202 DRDA over TCP/IP 108 troubleshooting 117 DRDA-1 86 coexistence with DRDA-2 95 moving to DRDA-2 101 DRDA-2 86 application flow example 94 Coexistence 95 coexistence with DRDA-1 95 CONNECT 92 connection management 87 connection management method 88 Connection Management on DB2 UDB for iSeries 87 DISCONNECT, DB2 UDB for iSeries 90, 93 performance 101 program example 102 protected conversation 90 RDB Connection Management Method 88 RELEASE, DB2 UDB for AS/400 93 SET CONNECTION 94 Synchronization Point Manager (SPM) 90 two-phase commit 90 unprotected conversation 90 DRDA-2 and two-phase commitment control 105 drop active connections 93 DROP clause of ALTER TABLE statement 56 DSPCPCST (Display Check Pending Status) command 54 DSPDBR (Display Database Relations) command 61 DSPFD (Display Physical File Description) command 61 DUW (Distributed Unit of Work) 7, 85 350 Advanced Functions and Administration on DB2 Universal Database for iSeries E Edit Check Pending Constraints (EDTCPCST) command 58 edit recovery for access path 181 Edit SQL 176 EDTCPCST (Edit Check Pending Constraints) command 58 ENABLED constraint state 52 error handling example 337 escape message 51 ESTABLISHED constraint state 52 example 103 application flow using DRDA-2 94 CASCADE 39 delete parent record 36 no RESTRICT or NOACTION rule 38 Display Journal Entry Details display 45 distributed relational database 13 DRDA-2 program, COBOL 102 inserting the detail rows 105 logical consistency 13 multiple constraints 32 Order Entry application overview 12 referential integrity network 32 SQL CREATE TABLE 29 unmatched foreign key values 28 Explainable Statement 231 Export API 151 Export command 151 Export utility 125, 138, 149, 152 F failure recovery 96 field definition file 127 field level authority 162 file availability 28 Finalize Order program 333 flyover 254 foreign key 35 constraint prerequisites 24 defined 23 
in same physical file as primary key 34 foreign key value verification 28 function, user defined 168 G Generate SQL 271, 272 from Database Navigator map 289 from DDS 297 Operations Navigator 276 to PC and data source files 281 H held state 89 hierarchical structure 34 I I/O bound 316 I/O messages 50, 77 ILE C example 114 ILE C programs 52 ILE COBOL programs 52 ILE program 102 ILE RPG programs 51 implicit primary key constraint 31 import file 138 Import utility 125, 126, 149, 153 Include Debug Messages in Job Log 214 Include Error Message Help in Run History 213 index 168, 181, 188 indexes for referential integrity 25 Informix 86 initial DRDA connection 101 Insert Order Detail program 331 Insert Order Header program 330 insert rows 183 inserting detail rows 105 integrated exchange file (IXF) 149 integrated relational database 4 Interactive SQL 113 invalid data check pending 53 IXF (integrated exchange file) 149 J Java Database Connectivity (JDBC) 202 Java stored procedures See also stored procedures Java JDBC (Java Database Connectivity) 202 job log 97 JOIN statement 214 journal 167, 177, 181, 192 journal changes 49 journal entries with referential integrity 45 journal entry 177 journal example 177 journal receiver 177, 178, 181, 192 journaling 43, 124 journaling requirements 25 journals 168 K key constraints 188 key types defined 22 keyed access path 26 keyed logical file 5 L lab exercise 230 Level Check (LVLCHK) parameter 186 library 166, 167 library name 208 library-based functions 168 like operating environments 84 loader utility 126 Index 351 locator pane 247 lock file 28 locked rows 196 locking files 35 log not written 95 logical consistency example 13 logical file 5 logical transaction 85 Logical Unit of Work ID 96 loss of data 49 M manual recovery 97 map pane 249 mapping referential integrity messages 52 maximum members 170 member size 170 messages CPF502D 51 CPF502E 51 CPF503A 51 CPF523B 51 CPF523C 51 referential integrity 50 Modify Selected Queries 229 multiple connections 7 multiple constraints 32 multiple databases 7 multiple locations, data consistency in 90 N network coexistence of DRDA-1/DRDA-2 95 referential integrity or cascade 28 new journal receiver 193 attributes 181 no RESTRICT or NOACTION rule 38 NOACTION rule 35 defined 24 delete example without 38 enforcement 35 non-SQL interface considerations 320 notify messages 51 NULLID collection 121 O object-based function 181 Objects to Display window 253 ODBC (Open Database Connectivity) 6, 202 Open 182 Open Database Connectivity (ODBC) 6, 202 openness 86 Operations Navigator Generate SQL 276 new V5R1 features 159 Visual Explain 301 OPM programs 101 Oracle 86 Order Entry application 11, 12 advanced database functions 17 database 14 detailed flow 329 Order Entry example 12 Order Header entry program 338 orphan foreign key values example 28 OS/400 collection 166 OS/400 library 166 ownership of access path 25 P parallel data load data format 127, 139 delimited import file 128 field definition file 127 source file (FROMFILE) 127, 138 target file (TOFILE) 127, 139 parallel data loader 137 parent file defined 23 same file as dependent file 34 parent key constraint prerequisites 24 defined 23 identifying 29 parent record delete example 36 no RESTRICT or NOACTION rule 38 parent table 68 PC user integrity 22 performance 93 benefits of system provided referential integrity 22 DRDA-2 considerations 101 improved 25 referential integrity application impacts 50 when adding referential constraint 28 performance collection files 205 
Permissions 193 permissions 168, 181 physical data 5 physical file 5, 32, 170 add multiple constraints 32 constraints referential integrity network example 32 port number for DRDA connection 123 predictive query governor 303 primary key 23, 29 constraint 23, 29, 31 defined in SQL 29 in same physical file as foreign key 34 properties 168, 193 protected conversation 90, 93 protocols 84 PRTSQLINF (Print SQL Information) command 303 Q QBATCH 207 QRWTLSTN job 110, 117 352 Advanced Functions and Administration on DB2 Universal Database for iSeries QRWTSRVR job 117 query attributes and values 315 query environment Visual Explain query environment 314 query optimizer 220 debug messages 302 Query/400 234, 320 quick view 182 R RDB Connection Management Method 93 RDB parameter 101 RDBCNNMTH parameter 93 RDBCNNMTH(*DUW) 88 RDBCNNMTH(*RUW) 88 read lock 35 record 5 record field 5 record selection 5 recovery 96 automatic 96 from failure 96 manual 97 Work with Commitment Definitions (WRKCMTDFN) command 98 Redbooks Web site 344 Contact us xii Ref Constraint parameter 45 referential constraint 28, 188 creating 29 defined 23 dependent file 31 enforcement 35 example 30 rules 23 referential cycle 24 referential integrity 17, 21, 25, 49 application considerations 50 check pending 53 concepts 22 constraint 18, 24, 68 concept 18 prerequisites 24 constraint management 52 constraint tips and techniques 82 defined 7, 23 error handling 337 example 13 I/O messages 50 introduction 22 journal entries 45 journaling and commitment control 43 message handling in applications 51 messages in RPG ILS programs 51 network 28 relationship 24 restoring data 49 rules ordering 36 SQLCODE values 52 verification queries 28 referential integrity messages in ILE C programs 52 in ILE COBOL programs 52 in ILE RPG programs 51 relational database directory 110 directory entry 110 integration overview 4 RELEASE statement 93 released state 89 remote journal 177, 181, 193 remote locations 84 remote request 85 Remote Request (RR) 85 remote stored procedure 114 using DRDA over TCP/IP 113, 114 Remote Unit of Work (RUW) 85, 93 remove constraint 56 remove internal entries 181 remove journal changes 49, 53 Remove Physical File Constraint (RMVPFCST) command 56 Remove Server Authentication Entry (RMVSVRAUTE) command 112 reorganize 182 reorganize file/table 170 restoring data 49 RESTRICT rule 25, 35 defined 24 delete example without 38 enforcement 35 retain server security data 111 reused connections 93 REUSEDLT parameter 170 reverse engineering 271, 272 RMVPFCST (Remove Physical File Constraint) command 56 RMVSVRAUTE (Remove Server Authentication Entry) command 112 RNQ1222 inquiry message 51 RNX1222 escape message 51 rollback 87, 204 root value 34 row 5 RPG ILE program, referential integrity messages 51 RR (Remote Request) 85 rules 84 enforcement 35 ordering for referential integrity 36 referential integrity 22 Run and Explain option 307 Run History pane 200 Run SQL Script DDM/DRDA configuration summary 216 example using VPN journal 208 Run SQL Scripts 164, 166, 182, 183, 197 Run option 210 running CL in SQL Scripts 208 Index 353 S save and restore 53, 59, 81 Screen Edit Utility (SEU) 131 self study lab 160 self-referencing constraint 34 semantics 84 SET CONNECTION 89 SET CONNECTION statement 94 SET DEFAULT delete rule 35 SET DEFAULT rule 24 SET NULL delete rule 35 SET NULL rule 23 SEU (Screen Edit Utility) 131 shared access path for referential integrity 25 shared lock 35 side-effect journal entry 45 SMAPP 181 Smart Statement Selection 213 SMP 
(Symmetric MultiProcessing) 137 source file (FROMFILE) 127, 138 span multiple databases 7 SPM (Synchronization Point Manager) 86, 90 SQL 84, 225 collection 167 connect statements 92 index 25 SQL (Structured Query Language) 5 SQL CREATE TABLE statement example 29 SQL index 5 SQL naming convention (operational difference) 206 SQL performance analysis 323 SQL Performance Monitor 213, 220, 228 analyzing summary results 228 detailed monitor analysis 230 reviewing results 226 SQL procedure 168 SQL script running a CL command 206 tips for running CL 208 SQL Script Center 306 SQL statements CONNECT 84 CREATE TABLE/ALTER TABLE 28 SQL TABLE 170 SQL table 5 SQL Trigger 188 SQL view 5 SQL VIEW example 172 SQL-92 standard 68 SQLCODE values 52 Start Debug (STRDBG) command 118 starting and ending journaling 193 starting the SQL Performance Monitor 222 status 01222 51 Stop on Error 213 stored procedures 7 STRDBG (Start Debug) command 118 STRTCPSVR SERVER(*DDM) command 110 Structured Query Language (SQL) 5 swap receivers 193 Sybase 86 Symmetric MultiProcessing (SMP) 137 Synchronization Point Manager (SPM) 86, 90 system failure check pending 53 system naming convention, operational difference 206 T table 162, 167, 168, 181, 182 table constraint 68 table options 255 target file (TOFILE) 127, 139 task pad 246 TCP/IP 108 application requester 110 configuring on the application server 109 DB2 Connect access to iSeries 120 transaction atomicity 43 transaction isolation 95 tree relationship among records in database 34 triggers 7, 181, 188 troubleshooting DRDA over TCP/IP 117 two-phase commitment control 7, 83, 86, 90 needs assessment 18 U unique constraint 23, 29 defined in SQL 29 unique key 23 unit of recovery (UR) 85 unit of work (UoW) 85, 87 unlike operating environments 84 unprotected connections 93 unprotected conversation 90 UoW (unit of work) 85 update lock 35 update row 183 update rule 24 UR (unit of recovery) 85 user defined function 168 user defined type 168 user-defined relationship 266 V verification of foreign key value 28 verification queries 28 view 167, 168, 182 view of physical data 5 View Results button 229 Visual Explain 212, 301, 304 data access methods 305 Database Monitor data 318 icons 321 navigation 308 query attributes and values 315 Query/400 320 SQL performance analysis 323 SQL Script Center 306 Visual Explain Only option 307 354 Advanced Functions and Administration on DB2 Universal Database for iSeries VPN journal 208 W Work with Active Jobs (WRKACTJOB) command 118 Work with Commitment Definition (WRKCMTDFN) command 98 Work with Physical File Constraints (WRKPFCST) command 57 worksheet format file (WSF) 149 writing for DRDA programs 84 WRKACTJOB (Work with Active Jobs) command 118 WRKCMTDFN (Work with Commitment Definition) command 97, 98 WRKPFCST (Work with Physical File Constraints) command 57 WSF (worksheet format file) 149 X XDB Systems 86 (0.5” spine) 0.475”<->0.873” 250 <-> 459 pages Advanced Functions and Administration on DB2 Universal Database for iSeries ® SG24-4249-03 ISBN 0738422320 INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment. 
For more information:
ibm.com/redbooks

Advanced Functions and Administration on DB2 Universal Database for iSeries

Learn about referential integrity and constraints
See how Database Navigator maps your database
Discover the secrets of Visual Explain

Dive into the details of DB2 Universal Database for iSeries advanced functions and database administration. This IBM Redbook equips programmers, analysts, and database administrators with all the skills and tools necessary to take advantage of the powerful features of the DB2 Universal Database for iSeries relational database system. It provides suggestions, guidelines, and practical examples about when and how to effectively use DB2 Universal Database for iSeries.

This redbook contains information that you may not find anywhere else, including programming techniques for the following functions:
• Referential integrity and check constraints
• DRDA over SNA, DRDA over TCP/IP, and two-phase commit
• DB2 Connect
• Import and Export utilities

This redbook also offers a detailed explanation of the new database administration features that are available with Operations Navigator in V5R1. Among the tools, you will find:
• Database Navigator
• Reverse engineering and Generate SQL
• Visual Explain
• Database administration using Operations Navigator

With the focus on advanced functions and administration in this fourth edition of the book, we moved the information about stored procedures and triggers into a new redbook – Stored Procedures and Triggers on DB2 Universal Database for iSeries, SG24-6503.

Back cover

ibm.com/redbooks IBM AS/400 Printing V Alain Badan Simon Hodkin Jacques Hofstetter Gerhard Kutschera Bill Shaffer Whit Smith A primer on AS/400 printing in today’s networked environment Configuration, performance, problem determination, enhancements In-depth education on AFP and ASCII printing International Technical Support Organization SG24-2160-01 IBM AS/400 Printing V October 2000 © Copyright International Business Machines Corporation 1998, 2000. All rights reserved. Note to U.S Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp. Second Edition (October 2000) The document was created or updated on June 12, 2001. Comments may be addressed to: IBM Corporation, International Technical Support Organization Dept. JLU Building 107-2 3605 Highway 52N Rochester, Minnesota 55901-7829 When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. Before using this information and the product it supports, be sure to read the general information in Appendix L, “Special notices” on page 407. Take Note! © Copyright IBM Corp. 2000 iii Contents Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv Chapter 1. Printing on the AS/400 system. . . . . . . . . . . . . . . . . . . . . . . . . . .1 1.1 Output queues: Spooled files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 1.2 Data streams supported on the AS/400 system. . . . . . . . . . . . . . . . . . . . . .3 1.3 Printer writer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 1.3.1 Print writer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 1.3.2 Print Services Facility/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 1.3.3 Host print transform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13 1.3.4 Image print transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14 1.4 AS/400 printer attachment methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15 1.4.1 Printers attached to AS/400 workstation controllers or IBM 5x94. . . .15 1.4.2 IPDS printers LAN-attached . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16 1.4.3 ASCII printers attached to displays . . . . . . . . . . . . . . . . . . . . . . . . . .17 1.4.4 ASCII printers attached to PCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18 1.4.5 ASCII printers LAN-attached . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19 1.4.6 Printers attached to PSF Direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20 1.4.7 Printers attached to PSF/2 DPF . . . . . . . . . . . . . . . . . . . . . . . . . . . .21 1.5 Remote system printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22 1.6 Printing SCS, IPDS, AFPDS, and USERASCII spooled files . . . . . . . . . . .23 1.6.1 SCS spooled files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23 1.6.2 IPDS spooled files. . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .24 1.6.3 AFPDS spooled files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25 1.6.4 USERASCII spooled files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25 1.6.5 USERASCII spooled files with image print transform. . . . . . . . . . . . .26 1.7 Implementing a printing concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 1.7.1 Print criticality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 1.7.2 Print output requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 1.7.3 Printer file device type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 1.7.4 Writer supporting printer file device type . . . . . . . . . . . . . . . . . . . . . .28 1.7.5 Printer requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30 1.7.6 Types of printers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30 1.7.7 Printer attachment methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32 1.7.8 What must be considered . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32 Chapter 2. Advanced Function Presentation. . . . . . . . . . . . . . . . . . . . . . . .35 2.1 Overview of AFP on the AS/400 system . . . . . . . . . . . . . . . . . . . . . . . . . .35 2.1.1 What AFP is . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35 2.1.2 AS/400 AFP model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35 2.1.3 APU print model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37 2.1.4 PFU print model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39 2.1.5 Page and form definitions print model . . . . . . . . . . . . . . . . . . . . . . . .41 2.1.6 AFP toolbox print model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42 2.2 AFP resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42 2.2.1 Creating AFP resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43 2.2.2 OEM products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45 2.3 AFP Utilities/400 V4R2 enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . .45 2.3.1 View electronic form on PC (Overlay Utility) . . . . . . . . . . . . . . . . . . .45 2.3.2 Print Format Utility ‘Omit Back Side Page Layout’ . . . . . . . . . . . . . . .47 iv IBM AS/400 Printing V 2.3.3 Element repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 2.3.4 Form definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 2.3.5 Tutorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 2.3.6 Printer type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 2.3.7 Host outline font support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 2.4 Advanced Print Utility (APU) enhancements . . . . . . . . . . . . . . . . . . . . . . 49 2.4.1 Duplex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 2.4.2 Multiple Text Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2.4.3 Outline font support . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 2.4.4 Advanced Print Utility (APU) monitor enhancement . . . . . . . . . . . . . 52 2.4.5 Print engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Chapter 3. Enhancing your output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 3.1 How your print output could look . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 3.2 Using Advanced Print Utility (APU) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 3.2.1 APU environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 3.2.2 Setting up APU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 3.2.3 Creating the print definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 3.2.4 Working with the print definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 3.2.5 Testing the print definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 3.2.6 Printing using the APU monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 3.3 Using the Page Printer Formatting Aid. . . . . . . . . . . . . . . . . . . . . . . . . . . 81 3.3.1 Creating a source physical file for form and page definitions . . . . . . 82 3.3.2 Compiling the form and page definitions . . . . . . . . . . . . . . . . . . . . . 84 3.3.3 Printing with the form and page definitions. . . . . . . . . . . . . . . . . . . . 86 3.3.4 Considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 3.4 APU versus PPFA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 Chapter 4. Fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 4.1 Where fonts are stored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 4.1.1 Printer-resident fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 4.1.2 Host-resident fonts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 4.2 How fonts are selected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 4.2.1 Characters per inch (CPI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 4.3 Which fonts are available. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 4.3.1 Fonts supplied at no charge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 4.3.2 240-pel fonts available at a charge . . . . . . . . . . . . . . . . . . . . . . . . . 94 4.3.3 300-pel fonts available at a charge . . . . . . . . . . . . . . . . . . . . . . . . . 95 4.4 How fonts are installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 4.4.1 Making the fonts available . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 4.5 Outline fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 4.5.1 Downloading host-resident outline fonts. . . . . . . . . . . . . . . . . . . . . 100 4.5.2 Why use an outline font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4.5.3 Scalable fonts for MULTIUP and COR . . . . . . . . . . . . . . . . . . . . . . 101 4.6 Font substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 4.6.1 Suppressing font substitution messages . . . . . . . . . . . . . . . . . . . . 
102 4.7 Font table customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 4.7.1 Creating the font tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 4.7.2 Adding a font table entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 4.7.3 Other font table commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 4.7.4 Customer-defined font ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 4.8 Disabling resident font support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 4.9 Using a resource library list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 v 4.10 Font capturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .108 4.10.1 Font resources eligible for capture . . . . . . . . . . . . . . . . . . . . . . . .108 4.10.2 Marking a font resource. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109 4.10.3 Defining the printer for font capture . . . . . . . . . . . . . . . . . . . . . . . .110 4.10.4 Considerations for font capture . . . . . . . . . . . . . . . . . . . . . . . . . . .110 4.11 Creating AFP fonts with Type Transformer . . . . . . . . . . . . . . . . . . . . . .110 Chapter 5. The IBM AFP Printer Driver . . . . . . . . . . . . . . . . . . . . . . . . . . .117 5.1 Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117 5.1.1 Why use the AFP Printer Driver. . . . . . . . . . . . . . . . . . . . . . . . . . . .117 5.2 Installing the AFP Printer Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118 5.2.1 Installation from the World Wide Web . . . . . . . . . . . . . . . . . . . . . . .121 5.3 Creating an overlay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .122 5.4 Creating a page segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126 5.5 Text versus image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129 5.6 Other AFP Printer Driver tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .130 5.6.1 Using the Images dialog box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .130 5.6.2 File transfer of AFP resources using FTP . . . . . . . . . . . . . . . . . . . .130 5.6.3 Problem solving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .131 5.6.4 Performance of the AFP Printer Driver . . . . . . . . . . . . . . . . . . . . . .134 5.6.5 Creating AFP documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .134 Chapter 6. Host print transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137 6.1 Host print transform overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137 6.2 Host print transform enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . .138 6.3 Host print transform process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .139 6.4 Enabling host print transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140 6.5 SCS to ASCII transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140 6.6 AFPDS to ASCII transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142 6.6.1 Mapping mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143 6.6.2 Raster mode . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .146 6.6.3 Processing AFP resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .148 6.6.4 Processing AFPDS barcodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . .148 6.6.5 How AFPDS to ASCII transform handles a no-print border . . . . . . .149 6.6.6 AFPDS to TIFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .150 6.6.7 Transform spooled file and write to folder . . . . . . . . . . . . . . . . . . . .150 6.6.8 AFPDS to ASCII transform limitations . . . . . . . . . . . . . . . . . . . . . . .150 6.7 Host print transform customization . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151 6.8 New and enhanced tags for WSCST objects. . . . . . . . . . . . . . . . . . . . . .152 6.9 New MFRTYPMDL special values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .154 6.10 DBCS support in host print transform . . . . . . . . . . . . . . . . . . . . . . . . . .156 6.10.1 DBCS SCS to ASCII transform . . . . . . . . . . . . . . . . . . . . . . . . . . .156 6.10.2 DBCS AFPDS to ASCII transform . . . . . . . . . . . . . . . . . . . . . . . . .157 6.10.3 New tags and supported data streams for DBCS. . . . . . . . . . . . . .157 Chapter 7. Image print transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .161 7.1 Image print transform function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .161 7.2 Why use image print transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .162 7.3 Image print transform process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .163 7.3.1 Where output attributes are derived . . . . . . . . . . . . . . . . . . . . . . . .165 7.4 Printing with the image print transform function. . . . . . . . . . . . . . . . . . . .165 7.4.1 Printing to an ASCII printer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .165 7.4.2 Printing to an IPDS printer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .166 7.4.3 Sending the spooled files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .166 vi IBM AS/400 Printing V 7.5 Image configuration objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 7.5.1 Values of image configuration objects . . . . . . . . . . . . . . . . . . . . . . 166 7.6 Printing with the convert image API . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 7.7 Converting PostScript data streams. . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 7.7.1 Fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 7.7.2 User-supplied fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 7.7.3 Font substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 7.8 Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 Chapter 8. Remote system printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 8.1 Remote system printing overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 8.2 AS/400 system and TCP/IP LPR-LPD printing . . . . . . . . . . . . . . . . . . . . 172 8.2.1 Creating the output queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 8.2.2 Destination options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 8.2.3 Separator pages . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . 178 8.2.4 ‘Load Letter’ message on the printer . . . . . . . . . . . . . . . . . . . . . . . 179 8.3 AS/400 and NetWare printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 8.3.1 Preparing for remote system printing . . . . . . . . . . . . . . . . . . . . . . . 182 8.3.2 Creating an output queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 Chapter 9. Client Access/400 printing . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 9.1 Client Access/400 printing overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 185 9.2 Client Access/400 Network Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 9.2.1 Configuring an AS/400 printer to Windows 95 . . . . . . . . . . . . . . . . 186 9.2.2 Network printer setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 9.2.3 AS/400 print profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 9.2.4 Considerations on Client Access/400 Network Printing . . . . . . . . . 193 9.3 Printing AS/400 output on a PC printer . . . . . . . . . . . . . . . . . . . . . . . . . 194 9.3.1 Configuring a printer emulation session . . . . . . . . . . . . . . . . . . . . . 194 9.3.2 Modifying and using a printer definition table (PDT) . . . . . . . . . . . . 200 Chapter 10. IBM AS/400 network printers . . . . . . . . . . . . . . . . . . . . . . . . 205 10.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 10.2 Configuration scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 10.2.1 Example 1: LAN-attached IPDS printer . . . . . . . . . . . . . . . . . . . . 206 10.2.2 Example 2: Dual-configuration printer . . . . . . . . . . . . . . . . . . . . . 207 10.2.3 Example 3: Shared dual-configuration printer . . . . . . . . . . . . . . . 207 10.2.4 Example 4: Shared multi-purpose printer . . . . . . . . . . . . . . . . . . . 208 10.3 Printer setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 10.3.1 Printer menu details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 10.3.2 Recommended PTF levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 10.3.3 Microcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 10.3.4 Tray and bin selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 10.4 Attachment information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215 10.4.1 Network Printer Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215 10.5 Output presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 10.5.1 IPDS, AFP=*YES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 10.5.2 IPDS, AFP=*NO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 10.5.3 SCS mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 10.5.4 Using the QPRTVALS data area . . . . . . . . . . . . . . . . . . . . . . . . . 217 10.5.5 Using the IPDS menu PAGE setting. . . . . . . . . . . . . . . . . . . . . . . 218 10.5.6 Edge-to-edge printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 vii Chapter 11. Configuring LAN-attached printers . . . . . . . . . . . . . . . . . . . 
.223 11.1 Configuring LAN-attached IPDS printers . . . . . . . . . . . . . . . . . . . . . . . .223 11.1.1 Configuring LAN-attached IPDS printers on V3R2. . . . . . . . . . . . .224 11.1.2 Configuring LAN-attached IPDS printers on V3R7 and later . . . . .230 11.1.3 TCP/IP BOOT service for V4R1 and later . . . . . . . . . . . . . . . . . . .237 11.2 Configuring LAN-attached ASCII printers . . . . . . . . . . . . . . . . . . . . . . .238 11.2.1 Configuring LAN-attached ASCII printers using LexLink . . . . . . . .238 11.2.2 Configuring LAN-attached ASCII printers using PJL drivers. . . . . .241 11.2.3 Configuring LAN-attached ASCII printers using SNMP drivers. . . .246 Chapter 12. Problem determination techniques . . . . . . . . . . . . . . . . . . . .253 12.1 Communication, connection, and configuration problems . . . . . . . . . . .253 12.1.1 Setting up a TCP/IP network on the AS/400 system . . . . . . . . . . .253 12.1.2 SSAP values in the line description . . . . . . . . . . . . . . . . . . . . . . . .253 12.1.3 Pinging the TCP/IP address . . . . . . . . . . . . . . . . . . . . . . . . . . . . .254 12.1.4 Port number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .254 12.1.5 Print Job Language (PJL) support . . . . . . . . . . . . . . . . . . . . . . . . .255 12.1.6 Message PQT3603 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .255 12.1.7 Configuring LAN-attached IPDS printers . . . . . . . . . . . . . . . . . . . .257 12.1.8 Configuring for remote system printing . . . . . . . . . . . . . . . . . . . . .258 12.1.9 Remote printer queue names . . . . . . . . . . . . . . . . . . . . . . . . . . . .258 12.2 Printer-writer-related problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259 12.2.1 Print writer ends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259 12.2.2 Spooled files remain in RDY status . . . . . . . . . . . . . . . . . . . . . . . .260 12.2.3 Spooled file remains in PND status . . . . . . . . . . . . . . . . . . . . . . . .261 12.2.4 Ending the writer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .261 12.2.5 Spooled file status. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .262 12.2.6 Output queue status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .263 12.2.7 AFCCU printers: Minimize delay when stopping and starting. . . . .264 12.2.8 QSTRUP execution during IPL . . . . . . . . . . . . . . . . . . . . . . . . . . .264 12.3 Where your print output goes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .265 12.4 Spooled file goes to hold status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .266 12.4.1 Writer cannot re-direct the spooled file . . . . . . . . . . . . . . . . . . . . .267 12.4.2 Message PQT3630 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .268 12.4.3 Fidelity parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .269 12.5 Copying spooled files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .269 12.6 Problem with output presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .271 12.6.1 Physical page: Logical page . . . . . . . . . . . . . . . . . . . . . . . . . . . . .271 12.6.2 Printer setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .273 12.6.3 Computer Output Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
.273 12.6.4 A3 page support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274 12.7 Font problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274 12.7.1 Problems with shading at different resolutions. . . . . . . . . . . . . . . .276 12.8 Drawer and paper path selection problems . . . . . . . . . . . . . . . . . . . . . .276 12.8.1 IBM 4247 paper path selection . . . . . . . . . . . . . . . . . . . . . . . . . . .276 12.9 Printing on ASCII printers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .277 12.10 Additional information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .278 Appendix A. PSF/400 performance factors . . . . . . . . . . . . . . . . . . . . . . . . . 279 A.1 AS/400 system storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 A.2 Data stream type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280 A.2.1 IPDS pass through . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 A.2.2 Printer device description parameters. . . . . . . . . . . . . . . . . . . . . . . . . . 282 viii IBM AS/400 Printing V A.3 AFP resource retention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282 A.3.1 Clear memory for security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283 A.4 Font types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283 A.4.1 Using GDDM fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283 A.5 Library list searches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .284 A.6 Creating efficient AFP resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .284 A.7 Other factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .285 A.7.1 PSF configuration object parameters. . . . . . . . . . . . . . . . . . . . . . . . . . .285 A.7.2 Printer file parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .285 A.7.3 Printer settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .285 Appendix B. Data Description Specifications (DDS) formatting . . . . . . . .287 B.1 DDS functionality example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .287 B.2 Super Sun Seeds invoicing example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .292 Appendix C. Print openness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .303 C.1 Additional functions provided on the printer file . . . . . . . . . . . . . . . . . . . . . . .304 C.2 Additional functions provided on the PRTDEVD commands . . . . . . . . . . . . .304 C.3 Additional functions provided on the output queue commands . . . . . . . . . . .305 C.4 Additional functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .306 C.5 Print openness: New APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .306 Appendix D. Network Station printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .309 D.1 Printing from OS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .309 D.1.1 AS/400 Network Station printer driver . . . . . . . . . . . . 
. . . . . . . . . . . . . .309 D.1.2 Creating printer device descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . .309 D.2 Local printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .311 D.2.1 5250 screen copy to a local printer . . . . . . . . . . . . . . . . . . . . . . . . . . . .311 D.2.2 Printing from Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .311 Appendix E. Printer summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .313 Appendix F. PSF/400 performance results . . . . . . . . . . . . . . . . . . . . . . . . . .317 F.1 Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .317 F.1.1 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .317 F.1.2 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .318 F.2 Methodology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .318 F.3 Performance cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319 F.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322 F.4.1 PSF/400 V4R2 with Network Printer 24 . . . . . . . . . . . . . . . . . . . . . . . . .322 F.4.2 PSF/400 V4R2 with IP60 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .323 F.4.3 PSF/400 V4R2 with IP4000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .325 F.4.4 Comparison: Printing rates using PSF/400 V4R2 on Model 510/2144 .326 F.4.5 Comparison of processor requirements . . . . . . . . . . . . . . . . . . . . . . . . .328 F.4.6 Predictions of processor utilizations at printing speeds . . . . . . . . . . . . .329 F.4.7 Print While Convert (PWC)=Yes compared to PWC=NO . . . . . . . . . . .331 F.5 Application of results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .332 F.6 Sample output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .333 Appendix G. Advanced Print Utility implementation case study. . . . . . . .343 G.1 Ordering printers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .343 G.1.1 Low-end printer: IBM Network Printer 12 . . . . . . . . . . . . . . . . . . . . . . .343 G.1.2 Departmental printer: IBM Infoprint 21 . . . . . . . . . . . . . . . . . . . . . . . . .343 G.1.3 AS/400 production printer and PC LAN departmental printer . . . . . . . .344 ix G.2 Ordering and obtaining software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 G.2.1 Checking whether the software is already installed . . . . . . . . . . . . . . . 345 G.3 Installing the software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 G.3.1 PSF/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 G.3.2 AFP Utilities/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 G.3.3 AFP Font Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 G.3.4 Advanced Print Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 G.3.5 Additional steps that may be required . . . . . . . . . . . . . . . . . . 
. . . . . . . 350 G.4 Designing electronic documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351 G.4.1 Which fonts to use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 G.5 Creating the resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 G.6 Building and testing APU print definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . 354 G.6.1 Other common problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356 G.6.2 Viewing APU output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357 G.7 Automatically starting the APU Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358 G.7.1 Creating a separate APU subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . 358 G.7.2 Modifying QBATCH to allow multiple jobs to run . . . . . . . . . . . . . . . . . 360 G.8 Using APU for production printing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 G.8.1 Using APU Monitor Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 G.9 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 G.9.1 Documenting APU component names . . . . . . . . . . . . . . . . . . . . . . . . . 365 G.9.2 Where APU print components are stored. . . . . . . . . . . . . . . . . . . . . . . 366 Appendix H. AS/400 to AIX printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 H.1 TCP/IP versus SNA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 H.1.1 Sending spooled files using TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 H.1.2 PSF Direct. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370 H.2 AS/400 spooled file data streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372 H.2.1 *SCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372 H.2.2 OV/400 and Final Form Text. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374 H.2.3 *AFPDS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374 H.2.4 *IPDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 H.2.5 *LINE or *AFPDSLINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 H.2.6 *USERASCII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 H.3 Automating the process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376 H.3.1 Default Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376 H.3.2 Destination options in the remote output queue . . . . . . . . . . . . . . . . . . 377 H.3.3 Output queue monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377 H.4 Special considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378 H.4.1 Processing line AS/400 SCS files as ‘flat ASCII’ . . . . . . . . . . . . . . . . . 378 H.4.2 Sample page and form definition for STD132. . . . . . . . . . . . . . . . . . . . 379 H.4.3 Parmdd file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380 H.4.4 Destination Options. . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . 381 H.4.5 Output from the AS/400 query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 H.4.6 Transferring resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 H.4.7 Large spooled files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383 H.5 Case studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383 H.5.1 One printer, all AFPDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383 H.5.2 One printer, four document types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 H.5.3 70 printers, 12 applications, SCS spooled files. . . . . . . . . . . . . . . . . . . 384 H.5.4 Multiple printers, many data streams . . . . . . . . . . . . . . . . . . . . . . . . . . 384 H.6 Sending AS/400 spooled files to OnDemand for UNIX. . . . . . . . . . . . . . . . . 385 H.6.1 AS/400 side tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385 x IBM AS/400 Printing V H.6.2 AIX side tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .385 H.7 AS/400 printing to an Infoprint Manager for Windows NT or 2000 server . . .385 H.7.1 Hypothetical case studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .386 H.8 Additional references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .387 Appendix I. Infoprint 2000 printing considerations . . . . . . . . . . . . . . . . . . .389 I.1 Print file considerations and HPT formatting . . . . . . . . . . . . . . . . . . . . . . . . . .389 I.2 Infoprint Manager and other solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .390 I.2.1 Another application solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .392 I.2.2 Operator considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .393 Appendix J. Printing enhancements in recent OS/400 releases . . . . . . . .395 J.1 Version 4 Release 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .395 J.1.1 SNMP ASCII printer driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .395 J.1.2 SNMP driver for Infoprint 21 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .395 J.1.3 PSF/400 printer ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .396 J.1.4 AFP Font Collection bundled with PSF/400 . . . . . . . . . . . . . . . . . . . . . .396 J.1.5 Type Transformer for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .396 J.1.6 AFP/IPDS support for OneWorld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .396 J.2 Version 4 Release 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .396 J.2.1 Simplex/duplex mode switching DDS. . . . . . . . . . . . . . . . . . . . . . . . . . .397 J.2.2 Force new sheet DDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .397 J.2.3 Output bin DDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .397 J.2.4 Insert DDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .397 J.2.5 Z-fold DDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .397 J.2.6 Overlay rotation DDS . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . .397 J.2.7 Constant back overlay in the printer file . . . . . . . . . . . . . . . . . . . . . . . . .397 J.2.8 Print finishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .398 J.2.9 AS/400 font management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .398 J.2.10 Advanced Function Printing Utilities (AFPU) enhancements . . . . . . . .398 J.2.11 Content Manager OnDemand for AS/400 . . . . . . . . . . . . . . . . . . . . . .398 J.3 OS/400 Version 4 Release 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .398 J.3.1 Integration of AFP Workbench into Client Access/400. . . . . . . . . . . . . .399 J.3.2 Indexing keyword in DDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .399 J.3.3 Support for line data enhanced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .399 J.3.4 Automatic resolution enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . .399 J.3.5 Font performance improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .400 J.3.6 Sizing and rotating page segments . . . . . . . . . . . . . . . . . . . . . . . . . . . .400 J.3.7 Enhanced PostScript transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .400 J.3.8 IPDS pass through . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .400 J.3.9 AFP Font Collection with Euro, expanded languages . . . . . . . . . . . . . .400 J.3.10 AFP PrintSuite for AS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401 J.4 OS/400 Version 4 Release 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .401 J.4.1 OS/400 Image Print Transform Services . . . . . . . . . . . . . . . . . . . . . . . .401 J.4.2 Support for outline fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .402 J.4.3 Font capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .402 J.4.4 Cut-sheet emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .402 J.4.5 Finishing support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403 J.4.6 TCP/IP configuration enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . .403 J.4.7 Font substition messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403 J.4.8 AFP Utilities for V4R2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403 Appendix K. Using the additional material . . . . . . . . . . . . . . . . . . . . . . . . . .405 K.1 Locating the additional material on the Internet . . . . . . . . . . . . . . . . . . . . . . .405 xi K.2 Using the Web material. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 K.2.1 How to use the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 Appendix L. Special notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407 Appendix M. Related publications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 M.1 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 M.2 IBM Redbooks collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 M.3 Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . 411 M.4 Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413 How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .415 IBM Redbooks fax order form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .417 IBM Redbooks review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .425 xii IBM AS/400 Printing V © Copyright IBM Corp. 2000 xiii Preface This IBM Redbook describes how to use printing functions on the AS/400 system. It supplements the standard reference documents on AS/400 printing by providing more specific “how to” information, such as diagrams, programming samples, and working examples. It addresses the printing function found in OS/400, Print Services Facility/400 (PSF/400), Advanced Print Utility, Page Printer Formatting Aid, AFP Font Collection, and other print-enabling software. The original edition applied to Version 3 Release 2 for CISC systems and Version 4 Release 2 for RISC systems. This second edition includes information about the new functions that are available in releases up to and including Version 4 Release 5. This document is intended for customers, business partners, and IBM systems specialists who need to understand the fundamentals of printing on the AS/400 system. It is designed to help you develop or advise others concerning the design and development of AS/400 printing applications. This document is not intended to replace existing AS/400 printing publications, but rather to expand on them by providing detailed information and examples. The team that wrote this redbook This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization Rochester Center. Alain Badan is an Advisory IT Specialist in Switzerland. His areas of expertise include AS/400 printing and AS/400 Facsimile Support/400. Alain has written other redbooks on AS/400 Printing and Facsimile Support/400. Simon Hodkin is a Senior IT Specialist in the U.K. Printing Systems Business. He has worked at IBM for 12 years. He has devised and run classes on printer connectivity and AFP. During the last three years, Simon has designed and implemented AS/400 printing solutions for major U.K. customers. Jacques Hofstetter is a Systems Engineer in Switzerland. He has 10 years of experience in AS/400 printing, and has worked at IBM for 15 years. His areas of expertise include Advanced Function Presentation and AS/400 printing. Gerhard Kutschera is a Systems Engineer Specialist in Austria. He has 11 years of experience with the AS/400 system, and has worked at IBM for 21 years. His areas of expertise include printing on the AS/400 system and AFP printing on RS/6000. Gerhard has also written another redbook on OfficeVision/400 printing. Whit Smith is an Education Specialist in the U.S. He has worked at IBM for eight years, after several years as an IBM customer. He holds a degree in Computer Science from the University of Texas. His areas of expertise include Communications, Application Development, and System Management. 
The October 2000 revision of the IBM AS/400 Printing V redbook was a result of the contributions of: xiv IBM AS/400 Printing V Mike McDonald Bill Shaffer IBM Boulder Roger Drolet Mira Shnier IBM Canada Simon Hodkin IBM United Kingdom Thanks to the following people for their invaluable contributions to the first edition of this redbook: Nick Hutt ITSO Rochester Russ Dickson Ken Dittrich Karl Hanson Dave Murray Ted Tiemens Kevin Vette IBM Rochester Tim Aden Jack Klarfeld Bruce Lahman Robert Muir Brian Pendleton Dale Pirie Bill Shaffer Bob Stutzman Nancy Wood IBM Boulder Eddy Gauthier IBM Belgium Mira Shnier IBM Canada Comments welcome Your comments are important to us! We want our Redbooks to be as helpful as possible. Please send us your comments about this or other Redbooks in one of the following ways: • Fax the evaluation form found in “IBM Redbooks review” on page 425 to the fax number shown on the form. • Use the online evaluation form found at ibm.com/redbooks • Send your comments in an Internet note to redbook@us.ibm.com © Copyright IBM Corp. 2000 1 Chapter 1. Printing on the AS/400 system We can define and view printing in a simplified manner: something to print, a program to pass the information to a printer, and a printer (and some paper). The same sentence translated into AS/400 printing terminology results in: An application creates a spooled file; the data is from the application and the spooled file attributes (page size, number of copies, default font, and so on) are from the printer file associated with the application. The spooled file is placed into an output queue; a print writer program then passes the spooled file to the printer to print it. The print writer also takes information from the printer device description. Figure 1 shows the basic AS/400 printing elements. Figure 1. Basic AS/400 printing elements The objectives of this chapter are to explain how printing works and to show all the printing possibilities with AS/400 systems. 1.1 Output queues: Spooled files The spooled files stored in output queues can have different origins and different formats (data streams) (Figure 2 on page 2), for example: • Spooled files can be created on the AS/400 system by an application, by OfficeVision/400, or just by a print screen. • With Client Access/400, the network printing function (previously named virtual printing) can direct PC output to an AS/400 output queue. • You may also receive spooled files from host systems (IBM S/390), RISC systems (IBM RS/6000), or OEM systems. Application Output Queue Print Writer Printer Printer Device Description Printer File 2 IBM AS/400 Printing V Figure 2. AS/400 spooled files On the AS/400 system, many commands are available for controlling printing activities. Some of the commands are: WRKSPLF The Work with Spooled Files display shows all (or a specific portion) of the spooled files that are currently on the system. The display includes information such as file and user names, device or queue names, status, and total pages. From this display, options are available to send, view, and change the attributes and hold, delete, display, and release the spooled files. Function keys are also available to change the assistance level, select another view, or to display all the printers configured to the system with the status of their associated print writers. WRKOUTQ The Work with Output Queue display shows all the files on the specified queue. The display includes information such as file and user names, status, total pages, and number of copies. 
From this display, you can select an option to send, view, and change the attributes as well as hold, delete, display, and release the spooled files. Function keys are also available to change the assistance level, select another view, display information on the writer associated with the output queue, or display all the printers configured to the system with the status of their associated print writers.
WRKSPLFA The Work with Spooled File Attributes command shows the current attributes of the specified spooled file. It is possible to obtain the same display by selecting option 8 (Attributes) from the Work with Spooled Files or Work with Output Queue display. The spooled file attributes are information concerning a spooled file such as status, output queue, printer device type, page size, font, rotation, character identifier, and number of copies.
CHGSPLFA The Change Spooled File Attributes command allows you to change the attributes of a spooled file while it is on an output queue. The same display is received by selecting option 2 (Change) in the Work with Spooled Files or Work with Output Queue display. Depending on the spooled file printer device type (or data stream), you may be able to change some of the attributes. For example, you can change the overlay if the printer file has a device type *SCS, but you cannot if it is *AFPDS. This is because the overlay is referenced in the spooled file data and not as an attribute for *AFPDS.
STRPRTWTR The Start Print Writer command starts a spooling writer to the specified printer. This command specifies the name of the printer, the names of the output queue and message queue used, and the name of the writer.
ENDWTR The End Writer command ends the specified spooling writer and makes the associated output device available to the system. The writer can be ended immediately or in a controlled manner.
WRKCFGSTS The Work with Configuration Status command is used to display and to work with configuration status functions. A command parameter allows you to specify the type of description for which you want the status to be shown. For example, for printer descriptions, select *DEV (devices), and also specify the configuration description name, a generic name, or *ALL. Options on the Work with Configuration Status display allow you to vary the device on or off and to display or change the device description.
For detailed information on these commands and on printer files, see AS/400 Printer Device Programming. Refer to M.3, “Other resources” on page 411, for the form number based on the version and release level of the OS/400.
1.2 Data streams supported on the AS/400 system
The printed output is the result of the interaction between the printer itself and the controlling software. Because there are different requirements for print output and different types of printers (line mode, page mode), there is also different software (data streams) (Figure 3 on page 4).
Figure 3. Data stream
The AS/400 system supports different data streams and can automatically create the majority of them. The Printer device type parameter (Figure 4) in the printer file determines the type of data stream to be created. Figure 4.
Create Printer File: Printer device type parameter The Printer device type parameter can be set to one of the following values: • *SCS (SNA Character String): Used to control line mode printers and has a relatively simple structure. The Data Description Specifications (DDS) FONT keyword is not supported. The font specified in the printer file or the printer default font is used. An extension of SCS, FFT-DCA (Final-Form Text Document Architecture) is used within the AS/400 Office environment. • *IPDS (Intelligent Printer Data Stream): A host-to-printer data stream used for AFP subsystems. It provides an attachment-independent interface for controlling and managing Ouput Queue Spooled File Printer File Print Writer Printer AS/400 Applications Data Stream Data Stream Create Printer File (CRTPRTF) Type choices, press Enter. File . . . . . . . . . . . . . . > MYPRTF Name Library . . . . . . . . . . . > MYLIB Name, *CURLIB Source file . . . . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Source member . . . . . . . . . *FILE Name, *FILE Generation severity level . . . 20 0-30 Flagging severity level . . . . 0 0-30 Device: Printer . . . . . . . . . . . *JOB Name, *JOB, *SYSVAL Printer device type . . . . . . *SCS *SCS, *IPDS, *LINE... Text 'description' . . . . . . . *SRCMBRTXT Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 1. Printing on the AS/400 system 5 all-point-addressable (APA) printers. It supports interactive, two-way dialog between the print driver and the printer (printer information, cooperative recovery, and resources management). Note: The AS/400 generated IPDS is a subset of the full IPDS. For detailed information, see 1.3, “Printer writer” on page 6. • *AFPDS (Advanced Function Printing Data Stream): A data stream for advanced function printers (independent of operating systems, independent of page printers, and portable across environments). AFPDS is a structured data stream divided into components called objects. AFPDS includes text, images, graphics, and barcodes and references AFP resources (for example, overlays, page segments, and fonts). • *LINE (Line data stream): A LINE data stream referencing a page definition and a form definition with the spooled file. The printer file device type parameter was enhanced in V3R2 and V3R7 (and later) with a new value *LINE. • *AFPDSLINE: AFPDS line (also called Mixed) data stream: AFPDSLINE data stream is a mixture of AFP structured fields and LINE data. Only certain AFP structured fields can be mixed with the line data. Programmers must specify AFP structured fields in applications. The printer file device type parameter was enhanced in V3R2 and V3R7 (and later) with a new value *AFPDSLINE. • *USERASCII: ASCII data stream: There is no formal structure controlling the use of the American National Standard Code for Information Interchange (ASCII) data stream to control printers attached to systems providing ASCII support. There is no architectural data stream standard to which ASCII printers can conform in the interest of uniformity. To create a spooled file in *USERASCII on the AS/400 system, programmers must specify ASCII escape sequences in applications using the transparency mode. We do not recommend this approach because the escape sequences required in the application depend on the type of printer. A *USERASCII spooled file can contain any form of ASCII printer data stream (for example, PCL5, PPDS, or PostScript). 
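To make the device type choice concrete, the following commands sketch how the parameter is typically set. The file, library, and attribute values shown (MYLIB/INVOICE, the page size, and so on) are examples only; substitute values that match your application.

  /* Create a printer file that produces an AFPDS spooled file            */
  CRTPRTF    FILE(MYLIB/INVOICE) DEVTYPE(*AFPDS) +
             PAGESIZE(66 132) LPI(6) CPI(10) +
             TEXT('Invoice print file - AFPDS data stream')

  /* Temporarily override an existing printer file to another data stream */
  OVRPRTF    FILE(QSYSPRT) DEVTYPE(*SCS)

The override form is often the simplest way to experiment with a different data stream, because the application itself does not change.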
Spooled files can also be received from other systems: • From another AS/400 system, you can receive spooled files in SCS, IPDS, LINE, AFPDSLINE, AFPDS, or USERASCII data streams. • If the spooled file is from a System/390, LINE, AFPDSLINE, and AFPDS are supported. By using object distribution (SNADS), the spooled file is placed directly in an AS/400 output queue. • From a PC running Client Access/400 network printing, you can receive spooled files in SCS, AFPDS, or USERASCII. • From a RISC system (RS/6000), you may receive spooled files in AFPDS or USERASCII. • From an Other Equipment Manufacturer (OEM) system, spooled files are normally received in USERASCII. 6 IBM AS/400 Printing V A spooled file stored in an AS/400 output queue can be in different data streams. On the other end, many printers support only one data stream (for example SCS, IPDS, or ASCII PCL5). Some others (for example, the IBM Infoprint 20, 21, 32, and 40) support IPDS, PCL, and Postscript. Figure 5 shows data streams and printer devices. Figure 5. Data streams and printer devices On the AS/400 system, the print writer can convert some of the data streams to others. The following section explains the possible conversions. 1.3 Printer writer The printer writer program is a system-supplied program. This program takes the spooled file from an output queue and sends it to a printer. The printer writer handles spooled files by using one of the following options: • Print Writer • Print Services Facility/400 (PSF/400) • Host print transform Each of these writer options supports different data streams and printer types. They can also perform certain data stream conversions. Figure 6 shows the three options with the supported input data streams, the resulting data streams, and the required printer types. AS/400 Applications Print Writer IPDS Printer AFP(*YES) ASCII Printer Spool S/390 CA/400 Network Printing LINE AFPDSLINE LINE AFPDS AFPDSLINE SCS IPDS AFPDS SCS AFPDS USERASCII IPDS Printer AFP(*NO) SCS Printer SCS IPDS IPDS ASCII Chapter 1. Printing on the AS/400 system 7 Figure 6. Printer writer and data streams The IPDS data stream generated by the AS/400 system (when the printer file device type parameter is set to *IPDS) is not the full IPDS data stream. Many functions are not included in this subset, including the use of external resources such as fonts or page segments. The IPDS data stream generated by Print Services Facility/400 (PSF/400) includes the full IPDS set of commands and supports a two-way dialog between PSF/400 and the printer (Figure 7). Figure 7. AS/400 generated IPDS: Full IPDS IPDS Printer AFP(*YES) ASCII Printer Spool AS/400 Applications S/390 CA/400 Network Printing LINE AFPDSLINE LINE AFPDS AFPDSLINE SCS IPDS AFPDS SCS AFPDS USERASCII IPDS Printer AFP(*NO) SCS Printer IPDS IPDS ASCII Emulator ASCII Printer Print Writer Print Writer ASCII Print Services Facility/400 Host Print Transform SCS AFPDS USERASCII SCS ASCII LINE AFPDSLINE SCS IPDS AFPDS SCS AFPDS USERASCII SCS Spool IPDS Printer AFP(*YES) Spool AS/400 Applications LINE AFPDSLINE SCS IPDS AFPDS IPDS Printer AFP(*NO) IPDS IPDS Print Writer Print Writer Print Services Facility/400 LINE AFPDSLINE SCS IPDS AFPDS Spool SCS IPDS 8 IBM AS/400 Printing V The AS/400-generated IPDS is supported by the print writer or transformed to full IPDS by PSF/400. AS/400-generated IPDS cannot be transformed to an ASCII data stream and can only be sent to another AS/400 system. For more information, see 1.6.2, “IPDS spooled files” on page 24. 
Because of these restrictions, we recommend using device type *AFPDS in place of *IPDS in the printer file to allow portability, more conversion possibilities, and full IPDS support. 1.3.1 Print writer The print writer (Figure 8) is used when the target printers are SCS, IPDS configured with the Advanced Function Printing (AFP) parameter set to *NO, or ASCII using an emulator. Figure 8. Print writer When printing using the print writer, you have to consider these points: • If the spooled file data stream is SCS and the target printer is an IPDS AFP(*NO) printer, the data stream is transformed by the print writer into IPDS. • If the spooled file data stream is IPDS, AFPDS, or AFPDSLINE and the target printer is SCS or ASCII using an emulator, an error message is returned. • If the spooled file data stream is AFPDS or AFPDSLINE and the target printer is IPDS AFP(*NO), an error message is returned. • If the spooled file data stream is LINE and refers to a PAGDFN (page definition) and the target printer is SCS or IPDS AFP(*NO), an error message is returned. • If the spooled file data stream is LINE and refers to FORMDF (form definition) but no PAGDFN (page definition) and the target printer is SCS or IPDS AFP(*NO), the spooled file will print, but the FORMDF parameter is ignored. ASCII Printer Spool AS/400 Applications CA/400 Network Printing LINE AFPDSLINE SCS IPDS AFPDS SCS AFPDS USERASCII IPDS Printer AFP(*NO) SCS Printer IPDS ASCII Emulator Print Writer Print Writer SCS USERASCII SCS AFPDS USERASCII SCS Spool Chapter 1. Printing on the AS/400 system 9 • If the spooled file data stream is USERASCII, the target printer must be an ASCII printer using an emulator. • If the target printer is an ASCII printer using an emulator, only SCS and USERASCII spooled files are supported. Note: The USERASCII spooled files must be in an ASCII printer data stream supported by the target printer (for example, PCL5, PPDS, or PostScript). • There is no support for overlays, page segments, or downloaded fonts. • Barcodes are supported only on IPDS printers (even configured AFP(*NO)). • An image can only be printed from OfficeVision/400 and the target printer must be IPDS (even configured AFP(*NO)). 1.3.2 Print Services Facility/400 Implementation of the AFP print subsystem was added to OS/400 in V1R2 (1989) as an integrated component of the operating system. OS/400 Version 2 was enhanced in subsequent releases to provide AFP print subsystem support similar to that in S/390. From OS/400 Version 2, there are two separate printing subsystems in the operating system. OS/400 native print support (print writer) continues to support line printers and a subset of IBM IPDS printers and print functions. Full support for all IPDS printers is provided by the integrated AFP printing subsystem. Which printing subsystem is used to process application output is determined by the device description of the target printer. Only printers defined as IPDS AFP=*YES are controlled by the AFP printing subsystem. Beginning with OS/400 V3R1, the AFP printing subsystem is a separately orderable feature of OS/400 called Print Services Facility/400. This feature is licensed according to the speed in impressions per minute (IPM) of the fastest AFP printer used on the system. The number of AFP printers on the system is not relevant, only the speed of the fastest printer. There is also a separate feature for Facsimile Support/400. 
The four PSF/400 features are: • PSF/400 Facsimile Support Only • PSF/400 1-28 IPM Printer Support • PSF/400 1-45 IPM Printer Support • PSF/400 Anyspeed Printer Support 1.3.2.1 When PSF/400 is required Print Services Facility/400 is required when the AS/400 system must support AFP page functionality or IPDS print management. In simple terms, this is whenever the device type in the printer description is *AFPDS. *AFPDS must be specified in the printer device description in the following situations: • Any time you are printing to a LAN-attached IPDS printer • Any time you are printing to an Advanced Function Common Control Unit (AFCCU) printer • Any time you require AFP resource management, for example download and management of fonts, images, overlays, and graphic resources • Printing to IPDS or ASCII printers attached to Print Services Facility/2 • Printing any AFPDS or line data spooled file to an IPDS printer • Using Facsimile Support/400 to send faxes 10 IBM AS/400 Printing V Note: PSF/400 is not required when using the IBM 7852-400 modem as a fax controller. Examples of AFCCU printers include: • IBM 3130 Advanced Function Printer • IBM 3160 Advanced Function Printer • IBM Infoprint 60 Advanced Function Printer • IBM Infoprint 62 Advanced Function Printer • IBM Infoprint 3000 Advanced Printing System • IBM Infoprint 4000 Advanced Printing System • Older IBM AFCCU printers, such as the 3820, 3825, 3827, 3828, 3835, 3900, and 3935 The following IPDS printers can be supported without PSF/400 (but PSF/400 may be desirable): • IBM 4230 Impact Matrix Printer • IBM 4247 Multiform Impact Printer • IBM 6400 Line Matrix Printer • IBM Network Printer (4312) • IBM Network Printer 17 (4317) • IBM Infoprint 20 (4320) • IBM Infoprint 21 (4321) • IBM Network Printer 24 (4324) • IBM Infoprint 32 (4332) • IBM Infoprint 40 (4332) • Older IBM AS/400 laser printers, such as the 4028, 3112, 3116, 3912, 3916, and 3930 printers Note: If any of the printers listed here are LAN-attached or require AFP functionality (for example: resource management), PSF/400 changes from optional to required. 1.3.2.2 The Print Services Facility/400 process PSF/400 provides data stream transforms and AFP print resource management to ensure that applications and their AFP resources print consistently on all printers managed by PSF/400. PSF/400 can transform and print the following data streams on the AS/400 system: • AFPDS • SCS • IPDS • LINE • AFPDSLINE Note: In V4R2 with the image print function, Tag Image File Format (TIFF), Graphics Interchange Format (GIF), OS/2 and Windows Bitmap (BMP), and PostScript level 1 data streams can also be transformed to be printed on IPDS printers. For an overview on the image print transform, see 1.3.4, “Image print transform” on page 14. For detailed information, see Chapter 7, “Image print transform” on page 161. The Print Services Facility/400 process is shown in Figure 9. Chapter 1. Printing on the AS/400 system 11 Figure 9. Print Services Facility/400 process PSF/400 combines application output with print resources such as electronic forms, fonts, page segments, and formatting definitions that are either included inline with the print output or in the AS/400 system libraries. PSF/400 then creates IPDS output for the target IPDS printer configured AFP(*YES). PSF/400 includes two tasks: the print writer task and the print driver task. The print writer is responsible for the data stream conversion, and the print driver task manages the AFP resources and passes the data to the printer. 
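As an illustration of a printer that PSF/400 controls, the following command is a minimal sketch of a device description for a TCP/IP LAN-attached IPDS printer configured AFP(*YES). The device name, IP address, port, and font identifier are examples only; the values that apply to a particular printer model are covered in 11.1, “Configuring LAN-attached IPDS printers” on page 223.

  /* Minimal sketch: TCP/IP LAN-attached IPDS printer driven by PSF/400   */
  CRTDEVPRT  DEVD(PRT01) DEVCLS(*LAN) TYPE(*IPDS) MODEL(0) +
             LANATTACH(*IP) AFP(*YES) PORT(5001) FONT(011) +
             RMTLOCNAME('9.5.100.20') +
             TEXT('Example AFP(*YES) printer - sample values')

After the device is varied on, STRPRTWTR DEV(PRT01) starts the writer; because AFP(*YES) is specified, the writer runs under PSF/400.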
Printer files and data description specifications are the user and application program interfaces for print formatting on the AS/400 system, and are included with the operating system. Access to some AFP capabilities, such as electronic forms (overlays), downloading fonts to a printer from host font libraries (including image page segments in a document), and others, have been incorporated into these familiar AS/400 print interfaces for users and application programs. For more information on Advanced Function Presentation (AFP), see Chapter 2, “Advanced Function Presentation” on page 35. To enhance an existing application producing output in SCS data stream to AFP, see Chapter 3, “Enhancing your output” on page 67. 1.3.2.3 Is PSF/400 installed To check if the Print Services Facility is installed on your system, type GO LICPGM on any command line. The display shown in Figure 10 on page 12 appears. Print Request Queue Spool LINE AFPDSLINE SCS IPDS AFPDS Print Writer Data Stream Converter IPDS IPDS Print Driver AFP Resources Print Writer Task Print Writer Task IPDS Printer AFP(*YES) 12 IBM AS/400 Printing V Figure 10. Work with Licensed Programs Select option 10 (Display installed licensed program), and press the Enter key. The display shown in Figure 11 appears. Figure 11. Display Installed Licensed Programs To see the entry for the Print Services Facility, you may have to page down (press the Page Down key). Note: For V3R1 and V3R2, the licensed program number is 5763-SS1; for V3R6 and V3R7, the licensed program number is 5716-SS1; and for V4R1 and V4R2, the licensed program number is 5769-SS1. If the Print Services Facility feature is not present, you must install it. If you have not purchased the PSF/400 feature, contact your IBM representative. LICPGM Work with Licensed Programs System: SYS00005 Select one of the following: Manual Install 1. Install all Preparation 5. Prepare for install Licensed Programs 10. Display installed licensed programs 11. Install licensed programs 12. Delete licensed programs 13. Save licensed programs Selection or command ===> 10 F3=Exit F4=Prompt F9=Retrieve F12=Cancel F13=Information Assistant F16=AS/400 Main menu (C) COPYRIGHT IBM CORP. 1980, 1998. Display Installed Licensed Programs System: SYS00005 Licensed Installed Program Status Description 5769SS1 *COMPATIBLE OS/400 - Library QGPL 5769SS1 *COMPATIBLE OS/400 - Library QUSRSYS 5769SS1 *COMPATIBLE Operating System/400 5769SS1 *COMPATIBLE OS/400 - Extended Base Support 5769SS1 *COMPATIBLE OS/400 - Online Information ....... ........... ...... . ....................... 5769SS1 *COMPATIBLE OS/400 - AFP Compatibility Fonts 5769SS1 *COMPATIBLE OS/400 - *PRV CL Compiler Support 5769SS1 *COMPATIBLE OS/400 - Common Programming APIs Toolkit 5769SS1 *COMPATIBLE OS/400 - Print Services Facility 5769SS1 *COMPATIBLE OS/400 - Media and Storage Extensions 5769SS1 *COMPATIBLE OS/400 - SOMobjects 5769SS1 *COMPATIBLE OS/400 - Advanced 36 5769SS1 *COMPATIBLE OS/400 - Locale Source Library More... Chapter 1. Printing on the AS/400 system 13 Note: Beginning with OS/400 V4R4, license management of PSF/400 (as with all major OS/400 software) is via license keys. The stacked CD shipped with the release includes PSF/400. PSF/400 can be installed for a trial period of up to 70 days. This trial period begins when you start the first print writer defined as AFP(*YES). At the end of the 70-day period, PSF/400 will stop functioning (unless the license key has been installed). 
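Besides the GO LICPGM displays shown above, you can check the installation from any command line. The following is a minimal sketch; both commands are entered without parameters, and you locate the relevant entries in the resulting displays.

  DSPSFWRSC   /* List installed licensed program features               */
  WRKLICINF   /* Work with license information for keyed products       */

In the DSPSFWRSC list, look for the Print Services Facility entry under the OS/400 licensed program number for your release; WRKLICINF is relevant for the license keys that apply to PSF/400 beginning with OS/400 V4R4.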
1.3.3 Host print transform
The host print transform function allows SCS-to-ASCII and AFPDS-to-ASCII conversion to take place on the AS/400 system instead of by the emulators. SCS or AFPDS spooled files converted to an ASCII data stream can be directed to ASCII printers.
Note: In V4R2 with the image print function, Tag Image File Format (TIFF), Graphics Interchange Format (GIF), OS/2 and Windows Bitmap (BMP), and PostScript level 1 data streams can also be transformed to be printed on ASCII printers. For an overview of image print transform, see 1.3.4, “Image print transform” on page 14. For detailed information, see Chapter 7, “Image print transform” on page 161.
Host print transform converts the SCS data stream or the AFPDS data stream just before it is sent to the ASCII printer. The spooled file contains SCS data or AFPDS data and not the converted ASCII data. AFP resources (such as character sets, overlays, and page segments) referenced in AFPDS spooled files are converted into ASCII data streams and passed to the ASCII printer. Figure 12 shows the host print transform process.
Figure 12. Host print transform process
ASCII printers support several different compositions of ASCII data streams. The host print transform function generates an ASCII printer data stream for a number of IBM and non-IBM printers. To generate the different ASCII data streams, the host print transform function uses AS/400 system objects that describe the characteristics of a particular printer. These objects are called Work Station Customizing Objects (WSCST), and it is possible to customize them. For more information on host print transform, see Chapter 6, “Host print transform” on page 137.
1.3.4 Image print transform
Image print transform is an OS/400 function (Figure 13) included in Version 4 Release 2 that is capable of converting image or PostScript data streams into AFPDS and ASCII printer data streams. The conversion takes place on the AS/400 system, which means the data stream is independent of any printer emulators or hardware connections.
Figure 13. Image print transform function
Depending on the image configuration parameter in the printer device description and the spooled file data stream, Print Services Facility/400 or host print transform passes the spooled file to the image print transform function. The image print transform function converts image or print data from one format into another.
The image print transform function can convert the following data streams:
• Tag Image File Format (TIFF)
• Graphics Interchange Format (GIF)
• OS/2 and Windows Bitmap (BMP)
• PostScript Level 1
The image print transform function can generate the following data streams:
• Advanced Function Print Data Stream (AFPDS)
• Hewlett-Packard Printer Control Language (PCL)
• PostScript Level 1
For detailed information on image print transform, see Chapter 7, “Image print transform” on page 161.
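The WSCST customization mentioned in 1.3.3 is done by retrieving the source of an IBM-supplied customizing object, editing it, and creating a new object from the changed source. The following is a minimal sketch; the source file, member, and object names are examples, and *HP4 stands in for whatever manufacturer type and model matches your printer (see Chapter 6, “Host print transform” on page 137).

  /* Retrieve the host print transform source for an HP LaserJet 4 type   */
  RTVWSCST   DEVTYPE(*TRANSFORM) MFRTYPMDL(*HP4) +
             SRCMBR(HP4CUST) SRCFILE(MYLIB/QTXTSRC)

  /* After editing the source member, create the customized object        */
  CRTWSCST   WSCST(MYLIB/HP4CUST) SRCMBR(HP4CUST) +
             SRCFILE(MYLIB/QTXTSRC) +
             TEXT('Customized HP LaserJet 4 transform')

The customized object can then be named on the WSCST parameter of the printer device description or remote output queue that uses host print transform.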
1.4 AS/400 printer attachment methods
This topic shows the different printer attachment methods on the AS/400 system depending on the type of printer, and gives information on the type of writer needed (print writer, PSF/400, or host print transform). The following attachment methods are discussed:
• Printers attached to a workstation controller or to an IBM 5x94 (Remote Control Unit)
• IPDS printers LAN attached
• ASCII printers attached to displays
• ASCII printers attached to PCs
• ASCII printers LAN attached
• Printers attached using PSF Direct
• Printers attached using PSF/2 DPF (Distributed Print Function)
Note: This topic only includes a discussion about printers directly attached to and controlled by an AS/400 system, or in other words, printers for which there is a device description. All printers attached to remote systems or connected using a TCP/IP LPR/LPD attachment are discussed in 1.5, “Remote system printing” on page 22.
For information on printing SCS, IPDS, AFPDS, or USERASCII spooled files on the different attachment methods, see 1.6, “Printing SCS, IPDS, AFPDS, and USERASCII spooled files” on page 23. For information on IBM printers, see Appendix E, “Printer summary” on page 313.
1.4.1 Printers attached to AS/400 workstation controllers or IBM 5x94
Several IBM printers (line (SCS) or IPDS) can be attached directly to AS/400 workstation controllers by twinax cable. The same printers can also be attached by twinax to an IBM 5x94 Remote Control Unit (Figure 14 on page 16).
Figure 14. Printers attached to workstation controller or IBM 5x94
Note these considerations:
• The same functions are available whether the printer is attached to a workstation controller or to an IBM 5x94. Note: IPDS printers are not fully supported on the IBM 5294.
• If any IPDS printer is configured with the parameter AFP set to *YES, PSF/400 is required on the system.
• Some twinax-attached IPDS printers must be configured AFP(*YES) (for example, an IBM 3130).
1.4.2 IPDS printers LAN-attached
Any IPDS printer with an IBM AFCCU (Advanced Function Common Control Unit) can be network-attached to an AS/400 system (for example, IBM Infoprint 60, Infoprint 70, Infoprint 62, Infoprint 2000, Infoprint 3000, and Infoprint 4000). These printers support one or more of the following attachments: SNA Token-Ring, SDLC, TCP/IP Token-Ring, and TCP/IP Ethernet.
IBM workgroup printers with the appropriate Network Interface Card (NIC) are supported. These printers include:
• IBM Network Printer 12 (4312)
• IBM Network Printer 17 (4317)
• IBM Infoprint 20 (4320)
• IBM Infoprint 21 (4321)
• IBM Network Printer 24 (4324)
• IBM Infoprint 32 (4332)
• IBM Infoprint 40 (4332)
For more information on IBM workgroup printers, see Chapter 10, “IBM AS/400 network printers” on page 205.
Using the I-DATA 7913 Printer LAN Attachment box (TCP/IP Token-Ring or Ethernet), it is also possible to attach the following IBM IPDS printers on the LAN: IBM 3812, 3816, 3912, 3916, 3112, 3116, 4028, 4230, and 6400.
The two-way dialog between the AS/400 system and the printer facilitated by IPDS enables the same general level of print functionality, print management, and error recoverability for LAN/WAN-attached IPDS printers as is found in direct-attached (twinax) IPDS printers. The capability of IPDS to “bridge” the network connection is especially important with TCP/IP attachment.
Standard print support over TCP/IP (using LPR to LPD) is a one-way send of the spooled file, with limited support of print functions and no error recovery. Note: For detailed information on IPDS LAN-attached printers configuration, see 11.1, “Configuring LAN-attached IPDS printers” on page 223. Figure 15. IPDS printers LAN-attached Note these considerations: • Any IPDS printer LAN attached to an AS/400 system (Figure 15) must be configured with the AFP parameter set to *YES; PSF/400 is required on the system. • IPDS printers with an AFCCU and IBM network printers can be shared among different systems. The previous limit of three systems sharing an AFCCU TCP/IP-attached printer is removed by an enhancement provided by PTFs: on V3R2 (PTF SF42745), V3R7 (PTF SF42655), and V4R1 (PTF SF43250). This enhancement is part of the base code for V4R2. • The IPDS printers IBM 4224 and 4234 are not supported. 1.4.3 ASCII printers attached to displays The IBM InfoWindow displays 3477, 3486, 3487, 3488, and 3489 can be locally attached to the AS/400 system or remotely attached using an IBM 5x94 control unit through twinax cable. The InfoWindow displays have a printer port that can support the attachment of an ASCII printer (Figure 16 on page 18). AS/400 IPDS Printer AFP(*YES) IPDS Printer AFP(*YES) I-Data 7913 18 IBM AS/400 Printing V Figure 16. ASCII printers attached to displays Note these considerations: • Using display emulation, only SCS or USERASCII data streams are supported. • Using host print transform, SCS, AFPDS, and USERASCII data streams are supported. • USERASCII must be in the ASCII printer data stream of the target printer (for example, PCL5 or PPDS). • If host print transform is used with AFPDS spooled files, the ASCII printer must support one of the following data streams: PCL4 or 5 (HP Laser and InkJet printers, IBM 4039, IBM Network Printers) or PPDS levels 3 and 4 (IBM 4019, 4029). • PSF/400 is not required when printing AFPDS spooled files with host print transform. • IPDS spooled files are not supported by 5250 emulation or host print transform. 1.4.4 ASCII printers attached to PCs All ASCII printers can be connected to a PC using the standard parallel or serial port (Figure 17). PC5250 sessions are used to print AS/400 spooled files on the PC. When a spooled file is sent to a PC5250 printer session, it needs to be converted to an ASCII data stream supported by the target printer. There are three ways that this conversion occurs: • PC5250 transform based on a Printer Definition Table (PDT) • PC5250 transform based on the Windows 95/NT printer driver • Host print transform AS/400 WSC 5x94 ASCII Printer ASCII Printer InfoWindow Display InfoWindow Display Chapter 1. Printing on the AS/400 system 19 Figure 17. ASCII printers attached to personal computers Consider these points: • Using the PC5250 transform based on PDT, only SCS and USERASCII data streams are supported. PDT tables can be customized. • Using the PC5250 transform based on the Windows 95/NT printer driver, only SCS and USERASCII data streams are supported. No customization is possible. • Using host print transform, SCS, AFPDS, and USERASCII data streams are supported. Customization is possible. • USERASCII must be in the ASCII printer data stream of the target printer (for example, PCL5 or PPDS). 
• If host print transform is used with AFPDS spooled files, the ASCII printer must support one of the following data streams: PCL4 or 5 (HP LaserJet and InkJet printers, IBM 4039, IBM Network Printers) or PPDS levels 3 and 4 (IBM 4019, 4029). • PSF/400 is not required when printing AFPDS spooled files with host print transform. • IPDS spooled files are not supported by the PC5250 transform based on a PDT or on a Windows printer driver, and by host print transform. For detailed information, see Chapter 9, “Client Access/400 printing” on page 185. 1.4.5 ASCII printers LAN-attached ASCII printers may be attached on the network using Token-Ring or Ethernet connections (Figure 18 on page 20). For print writer support, there are three ASCII print drivers. • Line Printer Requester (LPR). These are also known as remote output queue. • PJL printer drivers. These drivers were released at OS/400 V3R7. The *IBMPJLDRV system driver supports HP printers. • SNMP printer driver. This driver was released at V4R5. It is available for the IBM Infoprint 21 printer at V4R3 and V4R4 (via a PTF). Note: The PJL and SNMP printer drivers are not available on CISC AS/400 systems (V3R2 and earlier). AS/400 PC DOS PC Windows PC OS/2 ASCII Printer ASCII Printer ASCII Printer 20 IBM AS/400 Printing V For more information on the configuration of ASCII LAN-attached printers, see 11.2, “Configuring LAN-attached ASCII printers” on page 238. Figure 18. ASCII printers LAN-attached Consider these points: • As host print transform is used, SCS, AFPDS, and USERASCII data streams are supported. • USERASCII must be in the ASCII printer data stream of the target printer (for example, PCL5 or PPDS). • If host print transform is used with AFPDS spooled files, the ASCII printer must support one of the following data streams: PCL4 or 5 (HP LaserJet and InkJet printers, IBM 4039, IBM Network Printers) or PPDS levels 3 and 4 (IBM 4019, 4029). • PSF/400 is not required when printing AFPDS spooled files with host print transform. • IPDS spooled files are not supported by host print transform. • If the new drivers are used, the printer must support Printer Job Language (PJL). PJL is not supported by all PCL ASCII printers (for example, not supported by IBM 4029 and HPIII). • ASCII printers LAN-attached can be shared between different systems (for example, an AS/400 system and a PC print server). • Using a LAN-attached ASCII printer removes the limitations of an ASCII printer connected using a TCP/IP LPR-LPD connection (for example, default page format and page range to print). Note: If your ASCII printer supports PJL and is actually connected with a remote output queue (TCP/IP LPR-LPD), we recommend that you connect it directly to the AS/400 system with the PJL drivers. 1.4.6 Printers attached to PSF Direct PSF Direct support is provided by Print Services Facility/2 (PSF/2) and Print Services Facility/6000 (PSF/6000) (Figure 19). PSF Direct for OS/2 allows a maximum of 16 printers simultaneously. With PSF Direct attached printers, the control of the print remains on the AS/400 system, which means PSF Direct notifies the AS/400 system with any message (print completed, error messages, and so on). AS/400 Marknet XLs ASCII Printer INA card ASCII Printer ASCII Printer IBM NP Lan ASCI Printer HP JetDirect Chapter 1. Printing on the AS/400 system 21 Figure 19. Printers attached to PSF Direct Note these considerations: • PSF/400 is required on the AS/400 system. • PSF Direct allows the use of printer resident fonts. 
• PSF Direct supports all the IBM IPDS laser printers, the IBM 4230, 6400 IPDS impact printers, and any PCL or PPDS compatible ASCII printers. If the target printer is an ASCII printer, PSF Direct converts the IPDS data stream (received from PSF/400) into an ASCII data stream (in fact, it creates an image). 1.4.7 Printers attached to PSF/2 DPF The PSF Distributed Print Function (DPF) is provided by Print Services Facility/2 (PSF/2). PSF/2 DPF allows up to 10 printers simultaneously. With PSF/2 DPF attached printers, print control is done by PSF/2 (Figure 20). The AS/400 system is not notified of any printer related messages (print completed, error messages, and so on). PSF/400 transfers the spooled files to a queue on the PSF/2 system. When this transfer is done successfully, the PSF/2 returns an acknowledgment to the AS/400 system, and the spooled file is removed from the AS/400 output queue. Then PSF/2 takes control of the spooled file until it is printed. Figure 20. Printers attached to PSF/2 DPF AS/400 ASCII Printer IPDS Printer AFP(*YES) PSF Direct AS/400 ASCII Printer IPDS Printer AFP(*YES) PSF/2 DPF 22 IBM AS/400 Printing V Note these considerations: • PSF/400 is required on the AS/400 system. • There is a time delay due to double spooling. PSF/2 does not start to print the spooled file until it has been completely received from the AS/400 system. This is particularly noticeable for large spooled files. • PSF DPF does not use printer resident fonts, only fonts downloaded from the AS/400 system. • PSF DPF supports all the IBM IPDS laser printers and any PCL or PPDS compatible ASCII printers. If the target printer is an ASCII printer, PSF DPF converts the IPDS data stream (received from PSF/400) into an ASCII data stream (in fact, creates an image). • IBM IPDS impact printers are not supported (for example, IBM 4230 and IBM 6400). 1.5 Remote system printing Remote system printing (Figure 21) is particularly useful for customers who have networked systems for automatically routing spooled files to printers connected to other systems. Output queue parameters define the target system. Depending on the target system or printer, host print transform can be called to convert the spooled file into an ASCII printer data stream. Figure 21. Remote system printing Note these considerations: • If the spooled file is *AFPDS, *LINE, or *AFPDSLINE, PSF/400 is only needed on the target system. • Host print transform is only supported if the connection type parameter is set to *IP, *IPX, or *USRDFN. AS/400 Output Queue AS/400 NetWare4 Other ASCII Printer ASCII Printer ASCII Printer ASCII Printer ASCII Printer NetWare3 SCS or IPDS Printer Printer IPDS Printer PSF/2 S/390 LINE or IPDS Printer Chapter 1. Printing on the AS/400 system 23 • If host print transform is used, SCS, AFPDS, and USERASCII data streams are supported. • USERASCII must be in the ASCII printer data stream of the target printer (for example, PCL5 or PPDS). TIFF, BMP, GIF, and PostScript level 1 are supported if using the image print transform function. For more information on remote system printing, see Chapter 8, “Remote system printing” on page 171. 1.6 Printing SCS, IPDS, AFPDS, and USERASCII spooled files This topic discusses printing SCS, IPDS, AFPDS, and USERASCII spooled files to printers attached to the AS/400 system or on the network by using remote system printing. Note: For detailed information on the attachment methods, see 1.4, “AS/400 printer attachment methods” on page 15. 
For printing on the network, see 1.5, “Remote system printing” on page 22. 1.6.1 SCS spooled files You can print SCS spooled files on: • SCS or IPDS printers directly attached to a workstation controller, LAN, or IBM 5x94 (remote workstation controller): If the target printer is an IPDS printer configured with AFP(*YES), PSF/400 is required on the system. If the spooled file refers to an overlay (in the printer file), the target printer must be an IPDS printer configured with AFP(*YES). In this case, PSF/400 is required on the system. If the target printer is SCS or IPDS AFP(*NO), the overlay parameter is ignored. • ASCII printers by using an emulator or host print transform: If the spooled file refers to an overlay, this parameter is ignored. • PSF Direct attached printers: PSF/400 is always required with PSF Direct attached printers. • PSF/2 DPF printers: PSF/400 is always required with PSF/2 DPF attached printers. Host resident fonts must also be available on the AS/400 system because PSF/2 DPF does not use printer resident fonts. • Network with destination type OS400 or OS400V2: If the spooled file refers to an overlay, this parameter is passed to the remote AS/400 system. In this case, PSF/400 is only needed on the remote system. The overlay must be available on the target system and found in the library list. • Network with destination type S390: The SCS spooled file is converted to a form of LINE data. 24 IBM AS/400 Printing V If the spooled file refers to an overlay, this parameter is not passed to the S/390. • Network with destination type PSF2: The SCS spooled file must be converted to ASCII since PSF/2 does not support SCS data stream. This can be done by specifying Host Print Transform(*YES) in the remote output queue definition. If the spooled file refers to an overlay, this parameter is not passed to PSF/2. • Network with destination type OTHER: The SCS spooled file must be converted to ASCII since we mainly address an ASCII printer with a TCP/IP line printer daemon (LPD) attachment. This can be done by specifying Host Print Transform(*YES) in the remote output queue definition. If the spooled file refers to an overlay, this parameter is not passed to the remote system. 1.6.2 IPDS spooled files You can print AS/400-generated IPDS spooled files on: • IPDS printers directly attached to a workstation controller, LAN, or IBM 5x94 (remote workstation controller): If the target printer is an IPDS printer configured with AFP(*YES), PSF/400 is required on the system. If the spooled file refers to an overlay (in the printer file), the target printer must be an IPDS printer configured with AFP(*YES). In this case, PSF/400 is required on the system. If the target printer is IPDS AFP(*NO), the overlay parameter is ignored. • PSF Direct attached printers: PSF/400 is always required with PSF Direct attached printers. • PSF/2 DPF attached printers: PSF/400 is always required with PSF/2 DPF attached printers. Host resident fonts must also be available on the AS/400 system because PSF/2 DPF does not use printer resident fonts. • Network with destination type OS400 or OS400V2: If the spooled file refers to an overlay, this parameter is passed to the remote AS/400 system. In this case, PSF/400 is only needed on the remote system. The overlay must be available on the target system and found in the library list. • Network with destination type S390: The IPDS spooled file is converted to a form of LINE data only if no special device requirements are present (see the spooled file attributes). 
If special device requirements are present (normally they are with an IPDS spooled file), the spooled file cannot be transferred to the S/390. If the spooled file refers to an overlay, this parameter is not passed to the S/390. The following types of printing are not supported: Chapter 1. Printing on the AS/400 system 25 • Printing on a ASCII printers using an emulator or host print transform • Printing on a network with destination type PSF2 • Printing on a network with destination type OTHER 1.6.3 AFPDS spooled files You can print AFPDS spooled files on: • IPDS AFP(YES) printers directly attached to a workstation controller, LAN, or IBM 5x94 (remote workstation controller): PSF/400 is required on the system. • ASCII printers by using host print transform: PSF/400 is not required on the system. • PSF Direct attached printers: PSF/400 is always required with PSF Direct attached printers. • PSF/2 DPF attached printers: PSF/400 is always required with PSF/2 DPF attached printers. Host residents fonts must also be available on the AS/400 system because PSF/2 DPF does not use printer resident fonts. • Network with destination type OS400 or OS400V2: If the spooled file refers to AFP resources, this information is passed to the remote AS/400 system. In this case, PSF/400 is only needed on the remote system. The AFP resources must be available on the target system and found in the library list. • Network with destination type S390: If the spooled file refers to AFP resources, this information is passed to the remote System/390. The AFP resources must be available on the target system. • Network with destination type PSF2: If the spooled file refers to AFP resources, this information is passed to the remote PSF/2 system. The AFP resources must be available on the target system. • Network with destination type OTHER: The AFPDS spooled file must be converted to ASCII since we mainly address an ASCII printer with a TCP/IP line printer daemon (LPD) attachment. This can be done by specifying Host Print Transform(*YES) in the remote output queue definition. The ASCII printer must support one of the following data streams: PCL4/5 or PPDS levels 3 or 4. Printing on ASCII printers using an emulator is not supported. 1.6.4 USERASCII spooled files Spooled files with a device type *USERASCII can contain any type of ASCII printer data stream (for example, PCL5, PPDS, or PostScript). The writer program just passes the spooled file to the target printer. The spooled file is not checked for validity. 26 IBM AS/400 Printing V Note: The following considerations do not address using the image print transform function (V4R2) on the AS/400 system. For printing USERASCII spooled files with the image print transform function (V4R2), see 1.6.4, “USERASCII spooled files” on page 25. You can print *USERASCII spooled files on: • ASCII printers using an emulator or host print transform • A network with destination OS400 or OS400V2 • A network with destination PSF2 • A network with destination OTHER The following types of a printing are not supported: • Printing on SCS or IPDS printers attached to a workstation controller, LAN, or IBM 5x94 (remote workstation controller) • Printing on PSF Direct attached printers • Printing on PSF DPF attached printers 1.6.5 USERASCII spooled files with image print transform The image print transform function allows you to print USERASCII spooled files in the TIFF, GIF, BMP, or PostScript Level 1 format on IPDS AFP(*YES) printers or ASCII printers. 
For an overview of image print transform, see 1.3.4, “Image print transform” on page 14. For detailed information, see Chapter 7, “Image print transform” on page 161. You can print *USERASCII in TIFF, GIF, BMP, or PostScript Level 1 spooled files on: • IPDS AFP(*YES) printers attached to a workstation controller, LAN, or IBM 5x94 (remote workstation controller): PSF/400 is required on the system. • ASCII printers using host print transform. • Printing on PSF Direct attached printers: PSF/400 is always required with PSF Direct attached printers. • Printing on PSF DPF attached printers: PSF/400 is always required with PSF DPF attached printers. • A network with destination OS400 or OS400V2 • A network with destination PSF2 • A network with destination OTHER These types of printing are not supported: • Printing on SCS or IPDS AFP(*NO) printers attached to a workstation controller, LAN, or IBM 5x94 (remote workstation controller) • ASCII printers using an emulator • Printing on a network with destination S390 Chapter 1. Printing on the AS/400 system 27 1.7 Implementing a printing concept When designing any printing solution, you must have the correct printer types to fit the printing requirements. Consider the following list in order of priority: 1. Print criticality 2. Print output requirements 3. Printer file device type 4. Writer supporting spooled files data streams 5. Printer requirements 6. Type of printers 7. Printer attachment methods Note: We refer to each of these points as steps in the following sections. This section also discusses using PSF/400 and IPDS printers versus host print transform and ASCII printers, and how to enhance your output presentation. 1.7.1 Print criticality The importance of a given print application to the organization, or print criticality, influences the design of the printing solution, at least for that application. Print criticality can be a measure of the importance of the document or the print volumes, or a combination of the two. A low volume application, such as check printing, may be critical because of the precise need to control the print process. With most production applications—volumes over 60 impressions per minute, the individual documents may be less critical, but the performance and stability of the entire process is key. The higher the critical nature is of the print application, the more important the fundamentals are of the printing process. These include: • Precise control over the printing process • Assurance that what is directed to be printed is printed, with adequate print management to respond and resolve error situations • Control over performance factors 1.7.2 Print output requirements The print output requirements include which type of documents have to be printed and their contents. Documents can be simple lists. Some documents may require barcodes, overlays, logos (images), or different fonts. Also consider documents that are received from Client Access/400 or other systems. Examples of typical spooled files in an AS/400 environment are: • Simple lists • Documents including different fonts (for example, a Courier and an OCR font) • Documents with barcodes • Documents with overlays and page segments (logos, images) • OfficeVision/400 documents • PC documents (Lotus AmiPro or Freelance, MS Word) 1.7.3 Printer file device type According to the print output requirements that you define (step 2), the Printer file device type parameter (DEVTYPE) can be determined. 
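In CL terms, this choice is simply the DEVTYPE parameter of the printer file. As a minimal sketch (the file and library names are hypothetical):

/* Printer file for simple lists (SCS data stream)             */
CRTPRTF    FILE(MYLIB/LISTPRTF) DEVTYPE(*SCS) +
             PAGESIZE(66 132) LPI(6) CPI(10)

/* Printer file for documents that need AFP functions such as  */
/* barcodes, page segments, or host fonts (AFPDS data stream)  */
CRTPRTF    FILE(MYLIB/INVPRTF) DEVTYPE(*AFPDS) PAGESIZE(66 132)

The examples that follow show which device type values suit which document types.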
The device type parameter is used to create the spooled file in the desired data stream. For more information on data streams, see 1.2, “Data streams supported on the AS/400 system” on page 3. Considering the example of the typical spooled files in an AS/400 environment (step 2), the device type parameter can be: • SCS for simple lists: Simple lists are normally printed using one font (often the default font from the printer file or printer device). • IPDS or AFPDS for documents including different fonts (for example, Courier and an OCR font): Referencing a font can be done by using the FONT DDS (data description specification) keyword if the device type parameter is IPDS or AFPDS (not supported if SCS), or by using the FNTCHRSET (Font Character Set) DDS keyword. The FNTCHRSET keyword is only supported if the device type is AFPDS. • IPDS or AFPDS for documents with barcodes: Barcodes are created by using the BARCODE DDS keyword. This keyword is only supported if the device type is IPDS or AFPDS. • SCS, IPDS, or AFPDS for documents with overlays and page segments (logos, images): An overlay, with or without page segments, can be referenced in the printer file (FRONTOVL and BACKOVL parameters) if the device type is SCS, IPDS, or AFPDS. The DDS keywords OVERLAY and PAGSEG can only be used if the device type is AFPDS. • SCS for OfficeVision/400 documents: The device type for OfficeVision documents is always SCS. An overlay can be associated with an OfficeVision/400 document. It must be referenced in the printer file (FRONTOVL and BACKOVL parameters). • AFPDS or USERASCII for PC documents (Lotus AmiPro or Freelance, Microsoft Word): Using the network printing function from Client Access/400, PC application outputs can be directed to an AS/400 output queue. The target printer determines the data stream to use. Output from PC applications is supported in USERASCII (ASCII data stream determined by the printer driver used) or in AFPDS (in this case, the AFP driver is used). 1.7.4 Writer supporting printer file device type The print writer used to pass the spooled file to the printer can be one of the following types: • Print writer • Print Services Facility/400 (PSF/400) • Host print transform As you can see in Figure 22, each of these options supports different data streams and can make various data stream conversions. Figure 22. AS/400 print writer and data streams For detailed information on the printer writer, see 1.3, “Printer writer” on page 6. Depending on the print output requirements that you define (step 2) and the device type required for the different spooled files (step 3), you can determine the type of writer to use. Consider the following facts: • SCS is supported by all three options. • IPDS is supported by the print writer and PSF/400. • AFPDS is supported by PSF/400 and host print transform. • Since overlay and page segments are part of the requirements, only PSF/400 and host print transform can support them. PSF/400 supports an overlay referenced in the printer file with an SCS, IPDS, or AFPDS spooled file, and overlays and page segments referenced with the DDS keywords OVERLAY and PAGSEG when the spooled file is AFPDS. Host print transform supports an overlay referenced in the printer file only when the spooled file is AFPDS, and overlays and page segments referenced with the DDS keywords OVERLAY and PAGSEG when the spooled file is AFPDS. Note: Overlays referenced in the printer file with a spooled file in SCS are not supported by host print transform. • PC documents (Lotus AmiPro or Freelance, Microsoft Word) in AFPDS can be supported by PSF/400 or host print transform. If the documents are in USERASCII, they can only be supported by host print transform or the print writer and an emulator. From this analysis, you can conclude that Print Services Facility/400 can be used for all of the document types in the requirements, and that host print transform can also be used, with the exception of any overlay referenced in a printer file with an SCS spooled file.
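As a concrete illustration of the overlay referenced in the printer file, an overlay can be added to existing output with a printer file override and no application change. This is only a sketch; the overlay, library, and program names are hypothetical:

/* Add a front overlay to the output of an existing program        */
/* DEVTYPE(*AFPDS) keeps the spooled file usable with host print   */
/* transform as well as PSF/400 (see the note above)               */
OVRPRTF    FILE(QSYSPRT) DEVTYPE(*AFPDS) +
             FRONTOVL(MYLIB/INVOICE 0.0 0.0) BACKOVL(*NONE)
CALL       PGM(MYLIB/INVPGM)
DLTOVR     FILE(QSYSPRT)

The DDS keywords OVERLAY and PAGSEG achieve the same result from within an externally described printer file when the device type is AFPDS.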
1.7.5 Printer requirements The printer requirements help in selecting the correct printer types. The following information must be available: • Centralized, departmental, or end-user printing • Print volume • Type of forms (continuous, page) • Laser printer or impact printer (or both) • Print on other systems (remote system printing) For many AS/400 system environments, you can consider: • Centralized printing for some applications, high volume, and large spooled files. • End-user printing, low volume, some output from the same application producing large spooled files. For some end users, this mainly includes documents from PC applications. • Type of form is page, same format and paper desired for all the printers. • Laser printer, presentation quality requested. • One department uses PC applications (Office) intensively. From this information, you can conclude that you must have a laser printer for high volume and large spooled files (and possibly a backup printer) and laser printers for the end users. A PC print server can also be considered for one department. 1.7.6 Types of printers For step 4, writer supporting spooled files data streams, the conclusion is that Print Services Facility/400 can support all the print requirements and host print transform can support most of them. For step 5, printer requirements, the conclusion is that a laser printer for large volume, laser printers for end users, and a PC print server can be considered. Figure 23 shows the printer types supported according to the writer option. Figure 23. AS/400 print writer and printer types PSF/400 can support production IPDS printers with speeds from 110 to 1002 impressions per minute (Infoprint 2000, Infoprint 3000, and Infoprint 4000). Lower volume centralized or departmental print can be handled by Infoprint 70 (cut sheet), Infoprint 62 (continuous forms), and Infoprint 60 (cut sheet). For end-user printing, PSF/400 or host print transform can be used, as both support the AFPDS data stream. Because one department uses PC applications intensively, and to avoid unnecessary conversions, these spooled files can be passed as USERASCII to the AS/400 system or directed to a PC print server. A good choice for network deployment is shared network printers, such as Infoprint 20, Infoprint 21, Infoprint 32, and Infoprint 40. These printers support multiple concurrent print writer sessions across AS/400 and other network clients or servers. They can be defined to the AS/400 system as IPDS printers, as ASCII (PCL) printers, or as both.
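As a rough sketch of the IPDS side of such a definition (the device name, IP address, and description text are hypothetical; port 5001 is the port commonly used by IBM network printers for IPDS):

CRTDEVPRT  DEVD(NETPRT01) DEVCLS(*LAN) TYPE(*IPDS) MODEL(0) +
             LANATTACH(*IP) AFP(*YES) PORT(5001) FONT(011) +
             RMTLOCNAME('10.1.1.21') +
             TEXT('Shared network printer - IPDS/AFP mode')

The same physical printer can also be reached in PCL mode, for example through a remote output queue definition with TRANSFORM(*YES), as described for destination type OTHER in 1.6.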
Two device descriptions, one AFP and one ASCII, can be created for the same printer on the AS/400 system. Note: In V3R2, a remote output queue must be used if the printer is LAN attached because the PJL driver is not available. If a PC print server is used, this print server can be connected to an IBM Network Printer (used as an ASCII printer). The PC print server and the AS/400 system share the printer. If host print transform is used for the end-user printer, any ASCII laser printer can be used. The same printer can also be used with the PC print server. For considerations on PSF/400 and IPDS printers versus host print transform and ASCII printers, see 1.7.8.1, “PSF/400 IPDS printers versus HPT ASCII printers” on page 32. IPDS Printer AFP(*YES) ASCII Printer IPDS Printer AFP(*NO) SCS Printer IPDS IPDS ASCII Emulator ASCII Printer Print Writer ASCII SCS ASCII Print Writer Print Services Facility/400 Host Print Transform SCS AFPDS USERASCII LINE AFPDSLINE SCS IPDS AFPDS SCS AFPDS USERASCII SCS Print Writer 32 IBM AS/400 Printing V 1.7.7 Printer attachment methods On the AS/400 system, there are many different ways in which printers can be attached. For detailed information, see 1.4, “AS/400 printer attachment methods” on page 15. The LAN connection allows printer sharing for both IPDS and ASCII printers (both IPDS and ASCII printers can be LAN-attached). 1.7.8 What must be considered When deciding what printing solution to implement, consider: • PSF/400 and IPDS printers versus host print transform (HPT) and ASCII printers • How to enhance your output presentation 1.7.8.1 PSF/400 IPDS printers versus HPT ASCII printers Host print transform cannot be considered for high print volume and higher print speeds. Depending on the print criticality (see 1.7.1, “Print criticality” on page 27), using PSF/400 and IPDS printers is the recommended choice. In the discussion about Print Services Facility/400 and IPDS printers versus host print transform and ASCII printers for low print volume (end-user printing), consider the following points: • Performance: Performance considerations are magnified at higher print speeds. Where use of ASCII printers with host print transform may be acceptable at entry print speeds (6 to 20 impressions per minute), the transform workload and data stream inefficiencies will have a significant impact at higher print speeds. IBM IPDS printers currently extend to 1002 impressions per minute (IBM Infoprint 4000). Host print transform (HPT) uses more AS/400 resources, specifically when working with the AFPDS-to-ASCII transform. This is due to the AFP resources handling and remapping. When using AFP resources, PSF/400 uses resource retention on the printer. With this function, the AFP resources, overlays, page segments, and fonts remain on the printer from job to job and are only deleted when the writer is ended. Note: In V4R2, some IPDS printers can keep downloaded fonts even if the writer is ended and the printer is powered off. Host print transform clears the downloaded AFP resources at the end of each print job (that is, when you print three spooled files referencing the same overlay, the overlay is downloaded three times). This can be costly for communication lines and can cause poor performance. • Recoverability: PSF/400 has a two-way dialog with the IPDS printer. The printer can report positive acknowledgement or negative acknowledgment to PSF/400. 
When a spooled file is printed on an IPDS printer, the spooled file remains in the AS/400 output queue until the printer has finished printing it, and the last page is safely in the output bin. At this time, the printer sends a positive acknowledgement to PSF/400, and the spooled file is deleted from the output Chapter 1. Printing on the AS/400 system 33 queue. Even if the printer is powered off (normal recovery procedure for some end users...), the spooled file remains available on the AS/400 system. ASCII printers do not have any dialog with the AS/400 system, which means they cannot report back any information. When the transfer of the spooled file to the ASCII printer is done, the spooled file is deleted from the output queue. If for any reason the ASCII printer is powered off, the spooled file (or more than one) is (or are) lost. To circumvent this risk, the SAVE parameter can be set to “*YES” in the printer file. With this circumvention, extra work is necessary to clean up the output queue. • Fidelity: PSF/400 does not need special customization. The IPDS printer characteristics (paper loaded, resident fonts and codes pages, drawers and bins information, available IPDS towers, resolution, and so on) are passed from the printer to PSF/400 every time a print writer starts. With this information, PSF/400 can build the IPDS data stream according to the printer specifications. Thus, PSF/400 supports all printer file parameters. PSF/400 allows you to control what is done if it encounters certain formatting difficulties. With the FIDELITY(*CONTENT), PSF/400 tries to print as much as it can and sends a message to the operator if there are any problems. With FIDELITY(*ABSOLUTE), the writer holds the spooled file and does not print it if PSF/400 is unable to print it exactly as requested. Host print transform uses a manufacturer type and model table to convert SCS or AFPDS to ASCII. These tables are available on the OS/400 for many ASCII printers. Accordingly (for example, the fonts, drawers, and print positions used in the application, or to handle the unprintable border present on almost all ASCII printers), a customization of the transform table may be required. Customizing an ASCII printer may involve a trial-and-error process. For more information on customizing HPT tables, see 6.7, “Host print transform customization” on page 151. • Currency: Support and testing for IBM AS/400 printers is built into each OS/400 and PSF/400 release. This support includes new printer features and generally works with the printer as a native printer device, not as a printer emulating an older printer. Support is implemented in standard AS/400 interfaces such as printer files and DDS. ASCII printers supported by host print transform do not go through this development and integration process, resulting in certain functions or features being unsupported. Customization of the transform table may address this, but only if it is a function already supported by SCS and AFP print support. 1.7.8.2 Enhancing your output presentation Central to the implementation of a new print solution are changes in the presentation output. There are many different approaches to enhancing an application's printed output, including: 34 IBM AS/400 Printing V • Any application producing SCS output can be enhanced without application changes by: – Adding an overlay (for example, by specifying an overlay name in the FRONTOVL parameter of the printer file). For more information, see Chapter 2, “Advanced Function Presentation” on page 35. 
– Changing the complete document presentation (field positions, fonts, barcoding, copies, and so on) by using Advanced Print Utility (APU), part of PrintSuite for AS/400. For more information, see Chapter 3, “Enhancing your output” on page 67. – Changing the complete document presentation by using page and form definitions. For more information, see Chapter 3, “Enhancing your output” on page 67. • Any application currently producing SCS or IPDS output can be changed to AFPDS and can take advantage of the AFPDS DDS keywords. AFPDS DDS keywords, such as OVERLAY, PAGSEG, FNTCHRSET, BOX, and LINE, are part of the AS/400 printer file. Since using the printer file DDS is integrated with the application program, changes may be required to the application program. For more information, see Chapter 2, “Advanced Function Presentation” on page 35. © Copyright IBM Corp. 2000 35 Chapter 2. Advanced Function Presentation The Advanced Function Presentation (AFP) architecture has been supported on the AS/400 system since Version 2.0 Release 1.0. Significant new capabilities have been added with each new release, resulting in a comprehensive document and printing system. The architecture was formerly known as Advanced Function Printing, but its capabilities now include viewing, faxing, and archival/retrieval solutions (therefore, the change of name; AFP manages the presentation of information). This chapter provides an overview of AFP implementation on the AS/400 system and describes several different models used to produce AFP printing solutions. 2.1 Overview of AFP on the AS/400 system It is important to define some terms before we describe the AS/400 AFP model. We start by explaining what AFP is. 2.1.1 What AFP is Advanced Function Presentation is an architecture using a wide range of functions to provide capabilities such as print formatting, viewing, and archiving. Three components in the AFP architecture are: • AFP data stream (AFPDS) • AFP resources (overlay, page segment, fonts, formatting definitions) • Print management (Print Services Facility (PSF)) The AFP architecture may also be referred to as MO:DCA-P (Mixed Object Document Content Architecture for Presentation). Several data streams are supported in the AFP architecture: • AFPDS • LINE • AFPDSLINE (mixed data) Intelligent Printer Data Stream (IPDS) is not strictly part of the AFP architecture, but is closely associated with it. IPDS is the formatted, printer-specific data stream actually sent to the print device. 2.1.2 AS/400 AFP model Basically, whatever you print on the AS/400 system uses a printer file. Printer files determine how the system handles output from application programs. Printer files fit into one of two groups: 36 IBM AS/400 Printing V • Program-described printer files: These printer files do not have any field or record-level formatting. The attributes of the printer file are used to define how all the data in the spooled file is printed. Any positioning of information within the file has to be determined by the application program. Most of the printer files delivered with OS/400 and many vendor application packages use these simple printer files. An example is QPDSPLIB—the OS/400-supplied printer file used to define how pages of a library printout will appear. Although the font, print orientation, and other attributes may be modified by changing the printer file, the appearance of individual pages cannot be modified. 
• Externally-described printer files: These printer files have formatting defined using Data Description Specifications (DDS) external to the application program. Some of the attributes of the printer file apply to the entire data as before, while the DDS can override or enhance these options for individual records or fields (for example, a single field can be printed as a barcode). All the document elements of AFP (for example, overlays, page segments, fonts, barcodes, lines, and boxes) are supported by DDS keywords. Using these keywords to lay out pages is the standard, integrated method of defining application output on the AS/400 system. With Version 3.0, each of these keywords has been made dynamic. This means that both characteristics (for example, overlay name) and page placement (position) can be passed dynamically (as a program variable) from the application program. This enables pages of output to be precisely customized based on application data. Figure 24 shows how the printer file fits into the AFP printing process. Each step in the process is explained in the notes following the figure. Figure 24. Printer file model DDS Print program 1 4 2 5 PSF Printer File 3 7 6 Overlay Fonts Page and Form Definitions Page Segments Spool Chapter 2. Advanced Function Presentation 37 Now that you understand the basic AFP print process on the AS/400 system, let’s look at how certain AFP application enablers are used on the system. 2.1.3 APU print model Advanced Print Utility (APU) provides the capability to modify the appearance of an SCS spooled file without any application modifications. APU can be used when access or skills to modify application source code is not available. In addition, APU can be used when it is desirable to separate complex page formatting from the application program. The user can manipulate the data appearance on any AS/400 workstation or PC 5250 session. The collection of the data modification is saved in a new object (the APU Print Definition) containing the new formatting information. The print definition is used by the APU print engine to create a new spooled file. The print definition may be applied interactively, or as part of a Control Language (CL) program. It may also be applied automatically using the APU monitor function supplied with APU. This is described in the notes following Figure 25 on page 38. 1 The application program is invoked by the user to print data from the AS/400 system and to produce a spooled file. 2 The printer file parameters are used to format the data. Data Description Specifications (DDS) are optionally used to improve the appearance of the data. 3 The spooled file contains the data from the program with the appropriate formatting instructions as defined in the printer file. External resources, such as fonts or overlays, are not embedded in the spooled file. Only references to them are embedded. 4 The AFP resources are added to the print process at print time by PSF/400. 5 PSF/400 sends the print data and the resources to the printer. 6 PSF/400 manages all the printer tasks such as printer characteristics, resources management, and error recovery. 7 IPDS printers communicate with the system to provide information about the printer and the status of the print job. Notes 38 IBM AS/400 Printing V Figure 25. APU print model Print program 1 4 2 5 PSF Printer File 3 6 Spool Spool Spool Spool Spool Spool Spool Spool Overlay Fonts Page and Form Definitions Page Segments APU Monitor APU Print Engine APU Definitions Chapter 2. 
Advanced Function Presentation 39 Advanced Print Utility (APU) is one of the components of PrintSuite/400 with the following licensed program numbers: • 5798-AF2 for OS/400 V3R2 • 5798-AF3 for OS/400 V3R7 through V4R5 The PrintSuite/400 components can be ordered independently of each other. Note: APU and PrintSuite/400 are not available for OS/400 V3R1 or V3R6. 2.1.4 PFU print model Print Format Utility (PFU) (Figure 26 on page 40) is a part of AFP Utilities/400 (AFPU). PFU allows customers to print database file data as an AFP formatted report without any programming. A popular use of PFU is to easily define a multi-up label application using various graphical elements, barcodes, and a variety of fonts. Where overlays and page segments are AS/400 objects used for AFP printing, Print Format Definitions (PFDs) are members of specialized database files created with AFPU. With PFDs, you can define record layouts containing variable data from a database file and page layouts containing fixed data (text, boxes, lines, barcodes, graphics, and page segments). The AFP Utilities/400 licensed program is required on each system used to define or print with PFDs. PFU is a part of the AFP Utilities/400 and cannot be ordered separately. AFP Utilities/400 has the order number 5769-AF1 for OS/400 V4. 1 The application program produces an SCS spooled file on the output queue. 2 Any output queue may be used. However, the monitor cannot capture the spooled file if a print writer is attached to this output queue. 3 The users have to define which output queues are monitored. The monitor supervises all entries in the monitored output queues and invokes the APU print engine as soon as the spooled file entries match the print definition requirement. 4 At this time, the information contained in the APU print definition is used by the print engine to write a new AFP spooled file. 5 The new spooled files are placed in an output queue according to the monitor definition. This process is explained in more detail in 2.4.4, “Advanced Print Utility (APU) monitor enhancement” on page 52. 6 The AFP resources are added to the print process at print time by PSF/400. It then sends the print data and the resources to the printer. Notes 40 IBM AS/400 Printing V Figure 26. Print Format Utility print model 1 4 2 PSF 3 Overlay Fonts Page and Form Definitions Page Segments Spool Data Base PFD Definition Print Format Utility 1 After the PFD definition is created, you can invoke the print process manually in PFU or use the Print PFD Data (PRTPFDDTA) command, which is part of AFP Utilities/400. 2 PFU extracts the database data using the PFD definition and provides an AFPDS spooled file. 3 After the AFPDS spooled file is placed in the output queue, the regular print process applies. 4 The AFP resources are added to the print process at print time by PSF/400. It then sends the print data and the resources to the printer. Notes Chapter 2. Advanced Function Presentation 41 2.1.5 Page and form definitions print model Page and form definitions are standard AFP resources that separate page formatting from application program logic. Page and form definitions are developed in a source programming language that determines how the existing fields and lines of application output will be changed and composed into full AFP pages. With Version 3.0 Release 2.0 and Version 3.7 Release 7.0 and later, page and form definitions can be specified directly in the printer file. 
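For example, once the compiled page definition and form definition objects exist, they can be named directly in the printer file. The object and file names below are hypothetical:

/* Produce line data and let PSF/400 format it with a page         */
/* definition and a form definition                                 */
CHGPRTF    FILE(MYLIB/RPTPRTF) DEVTYPE(*LINE) +
             PAGDFN(MYLIB/P1INVC) FORMDF(MYLIB/F1INVC)

The same parameters can also be set when the printer file is created or overridden.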
A new compiler, Page Printer Formatting Aid (PPFA)—one of the four AFP PrintSuite products, is available to compile page and form definition source modules into AS/400 objects. Page and form definition object modules can also be transferred from other systems or be created with PC design tools. Figure 27 illustrates how page and form definitions change the standard AS/400 printing process. Figure 27. Page and form definition print model 1 4 PSF 3 Overlay Fonts Page and Form Definitions Page Segments Line Data Print program Printer File FormDef/PageDef Parameter 2 1 The application print program uses a printer file similar to all other AS/400 print processes. 2 The DEVTYPE parameter (DEVTYPE *LINE) and the names of the page definition and form definition have to be set at the printer file level. 3 A spooled file containing line data is produced (this spooled file cannot be displayed). 4 PSF performs the formatting using the page definition and form definition and sends the IPDS data stream to the printer with the AFP resources when needed. Notes 42 IBM AS/400 Printing V 2.1.6 AFP toolbox print model The AFP toolbox (Figure 28) is part of PrintSuite/400. It is a collection of application program interfaces (APIs) for programmers. AFP toolbox allows developers to produce an AFP data stream while programming in the ILE C, COBOL, or RPG languages. Figure 28. AFP Toolbox print model 2.2 AFP resources AFP resources are elements that PSF can use at print time. The resources are referenced in the spool, not included in the spooled file themselves. The following resources are part of the AFP architecture: 1 PSF 3 Overlay Fonts Page and Form Definitions Page Segments Spool PRTAFPDTA Program using Toolbox APIs 2 Data 1 The application program writes an AFPDS data stream in a physical file. 2 The PRTAFPDTA command places the AFPDS as a spooled file in the output queue. 3 The AFP resources are added to the print process at print time by PSF/400. It then sends the print data and the resources to the printer. Notes Chapter 2. Advanced Function Presentation 43 • Overlays: A collection of predefined data such as lines, text, boxes, barcodes, images, or graphics. All of these elements build an electronic form that can be merged with the application data at print time. Some elements of the overlay, such as images (in this case, page segments) and graphics, are not in the overlay, but are an external resource of the overlay. • Page segments: Objects that contain images or text information. Page segments can be referenced in an overlay or can be referenced directly from an application. Page segments and all other AFP resources are compatible across system platforms with AFP support. • Fonts: A set of graphic characters of a given size and style. There are different types of font objects on the AS/400 system. Most applications can use fonts with the AS/400 system as printer-resident fonts (Font ID), a code page and character set, or as a coded font. See Chapter 4, “Fonts” on page 89, for detailed information. • Form definitions: AFP resources; specify how the printer controls the processing of a sheet of paper. A form definition can be specified in the printer file. More information about form definitions is available in Chapter 3, “Enhancing your output” on page 67. • Page definitions: AFP resources that contain a set of formatting controls to specify how you want data positioned on the page. 
This includes controls for the number of lines per printed sheet, font selection, print direction, and mapping fields in the data to positions on the paper. A page definition can be specified at the printer file level. 2.2.1 Creating AFP resources The overlay design method is different from one product to another. For AFP overlays, there are overlay generators on each platform. AFP Utilities/400 on the AS/400 system or the IBM AFP Printer Driver are the most popular methods. All AFP overlays are compatible across the different platforms and can be used on the AS/400 system. Several software products with a graphical interface are available and provide What You See Is What You Get (WYSIWYG) design of the different AFP resources. 2.2.1.1 Creating overlays and page segments with AFP Utilities/400 AFP Utilities/400 allows you to create overlays and page segments. You can also print data from a database file as an AFP formatted report (using the Print Format Utility). • The Overlay Utility uses the standard OS/400 interface, and allows you to create an overlay. The Overlay design function includes text, barcode, lines, boxes, shading, page segments, and graphics. • The Resource Management Utility enables you to create page segments. Most page segments are images from a PC program or from a scanning process. Several steps must be performed before a page segment object is available for the print process. Figure 29 on page 44 shows the process with the image in Image Object Content Architecture (IOCA) (part of the AFP architecture) format using AFPU. 44 IBM AS/400 Printing V Figure 29. Image process with IOCA image Page Segment AFPU IOCA Image Converter Image File Tiff, PCX, etc. Scanner 8 5 7 6 4 3 2 1 1 Scan an image with a PC-based program. Common image formats are TIFF, GIF, and PCX. 2 Scanned image may be edited with appropriate software to provide better results. 3 An image processing program with support for the IOCA image format is required. Many image processing programs can read many different formats and convert the image to another format. Another way is to place the image in a PC application and use the AFP driver to create a page segment, thereby bypassing step 6. For more information, see 5.4, “Creating a page segment” on page 126. 4 The image must be in IOCA format. 5 Send or copy the image to the shared folder or network drive. 6 Option 21 of AFPU allows you to create page segments of different sizes and orientations directly from the IOCA image. 7 A page segment object is now available in a library. 8 The page segment can be referenced in the DDS printer file. Notes Chapter 2. Advanced Function Presentation 45 2.2.1.2 Creating an overlay or page segment with the AFP driver The AFP driver allows you to create AFP resources, overlays, and page segments from any graphical PC application such as Lotus WordPro, 123, or Freelance, or Microsoft Word. For more information, see 5.4, “Creating a page segment” on page 126. 2.2.2 OEM products There are many non-IBM choices for form creation for AFP. These range from products that provide for form, font, and image editing to composition systems (for example, DOC/1 and Custom Statement Formatter) that include a form editor as part of the overall product. 
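Whichever tool is used to create them, the resulting overlays and page segments are ordinary OS/400 objects (object types *OVL and *PAGSEG), so they can be located and managed like any other object. A small sketch with a hypothetical library name:

WRKOBJ     OBJ(MYLIB/*ALL) OBJTYPE(*OVL)      /* list the overlays      */
WRKOBJ     OBJ(MYLIB/*ALL) OBJTYPE(*PAGSEG)   /* list the page segments */

Only references to these resources are stored in the spooled file; PSF/400 retrieves the objects at print time, as noted for the printer file model in 2.1.2.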
2.3 AFP Utilities/400 V4R2 enhancements The following new enhancements are provided in Version 4.0 Release 2.0: • View Electronic Form on the PC (Overlay Utility) • Omit Back Side Page Layout (Print Format Utility) • Element Repeat (Print Format Utility) • Form Definition (Print Format Utility) • Tutorial • Printer Type Enhancement • Host Outline Font Support 2.3.1 View electronic form on PC (Overlay Utility) The Overlay Utility can now dynamically call the Client Access/400 (CA/400) AFP Viewer to view electronic forms on a PC window as they are being designed. The Overlay Utility creates a temporary overlay that can be accessed by the AFP Viewer. This provides a WYSIWYG view of the overlay to the user. The workstation must be a PC attached to the AS/400 system, running Client Access for Windows 95/NT V3R1M3 or later. The Client Access AFP Workbench Viewer must be installed. The user ID specified in the Client Access configuration to access the AS/400 system must be the same as the user ID used to sign on to the AS/400 session or have all object authority. If not, message CPF2189: “Not authorized to object...” is returned. Figure 30 on page 46 shows the Overlay Utility within AFP Utilities/400. A box and two page segments are placed in the overlay. 46 IBM AS/400 Printing V Figure 30. Overlay utility from AFPU When the *VIEW command is typed in the Control field at the top of the display, the AFP Viewer is invoked as soon as you press the Enter key. Figure 31 shows the AFP viewer display and the overlay. Figure 31. AFP viewer window displaying the overlay Design Overlay Columns: 1- 74 Control . . *VIEW Source overlay . . . . . VIEW *...+....1....+....2....+....3....+....4....+....5....+....6....+....7.... 001 002 003 004 005 006 *B001 --------------------+ 007 : : 008 : : 009 : : 010 : : 011 : : 012 : : 013 +-------------------------+ 014 015 016 017 *S002 *S003 More... F3=Exit F6=Text F9=Line F10=Box F11=Bar code F21=Element edit F22=Block edit F24=More keys Chapter 2. Advanced Function Presentation 47 Note: The AFP viewer cannot display a barcode in Bar Code Object Content Architecture (BCOCA) format. The AFP Utilities/400 can produce a barcode in two different ways: • Barcode for IPDS printer with BCOCA support • Barcode for IPDS printer without BCOCA support, using Presentation Text Object Content Architecture (PTOCA) support. That is, the barcode lines are drawn as text. If you want to display a barcode with the AFP Viewer, you can change the printer type in the overlay specifications to a printer type that does not support BCOCA. The online help for the printer type field provides a list of which printer types do and do not support BCOCA. 2.3.2 Print Format Utility ‘Omit Back Side Page Layout’ This option allows you to specify a back side overlay to be printed without the page layout and database data. Effectively, a blank page is inserted into the application data, and the back side overlay is printed on this page (Figure 32). Figure 32. PFU omit back side page layout 2.3.3 Element repeat Element repeat provides a function to duplicate elements multiple times by pressing a function key and specifying the number of repetitions. The distance from the first element to the next one must be defined for both the across and down directions. 2.3.4 Form definition A form definition can be selected to print a print format definition. This allows a user with a continuous forms printer to specify the form definition that AFPU uses. 
AFPU uses the form length and width specified in the PFD definition. Define Printout Specifications Type choices, press Enter. Copies . . . . . . . . . . . . . . 1 1-255 Print fidelity . . . . . . . . . . *CONTENT *CONTENT, *ABSOLUTE Print quality . . . . . . . . . . *STD *STD, *DRAFT, *NLQ Duplex . . . . . . . . . . . . . . Y Y=Yes, N=No Omit back side page layout . . . . Y Y=Yes, N=No Form type . . . . . . . . . . . . *STD Character value, *STD Source drawer . . . . . . . . . . 1 1-255, *E1, *CUT Front side overlay: Overlay . . . . . . . . . . . . *NONE Name, *NONE, F4 for list Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Offset across . . . . . . . . . .00 0.00-22.75 Offset down . . . . . . . . . . .00 0.00-22.75 Back side overlay: Overlay . . . . . . . . . . . . BACKOVL Name, *NONE, F4 for list Library . . . . . . . . . . . QGPL Name, *LIBL, *CURLIB Offset across . . . . . . . . . .00 0.00-22.75 Offset down . . . . . . . . . . .00 0.00-22.75 More... F3=Exit F4=Prompt F5=Refresh F12=Cancel 48 IBM AS/400 Printing V 2.3.5 Tutorial The tutorial is a collection of examples such as overlays, print format definitions, and database files. The new examples can serve as a tutorial for beginning and experienced users. To print the tutorial, type STRAFPU and press Enter. Then, from the AFP Utilities/400 menu, select option 14, and press Enter twice. 2.3.6 Printer type The printer type list has been updated with new printer types. This allows the user to choose the printer or the resolution of the printer. 2.3.7 Host outline font support AFPU was able, in the past, to use a resident printer outline font. Support for host-resident outline fonts is now part of AFPU. The user can select an outline font stored on the AS/400 system. The font is downloaded to the printer at print time. See 4.5.1, “Downloading host-resident outline fonts” on page 100, for more information. Figure 33 shows an additional field for the point size selection. The user can use the Prompt key to show the available font list. Figure 33. Change Source Overlay Font display You can select the outline font. No point size information is available for this type of font. The size is determined for you on the Change Source Overlay Font display. The same outline font is used for all sizes. You can now reduce the number of resources needed for an overlay or job (Figure 34). Change Source Overlay Font Font number . . . . . . . . : 1 Font type . . . . . . . . . : 3 Code page and font character set Type choices, press Enter. Code page . . . . . . . . . T1V10037 Name, F4 for list Font character set . . . . . CZH200 Name, F4 for list Point size . . . . . . . . . 1 0.1-999.9, *NONE Text 'description' . . . . . HELVETICA LATIN1-roman med F12=Cancel Chapter 2. Advanced Function Presentation 49 Figure 34. Select Font Character Set display 2.4 Advanced Print Utility (APU) enhancements Advanced Print Utility (APU) provides an easy way to modify the appearance of spooled data. You can display the spooled data and move the cursor to define an action on this data portion. The new function is now part of APU and versions of APU at V3R2, V3R7, and V4R1 can be refreshed with the same functions. The customer must re-order APU and will receive the refresh version free of charge. The most important function is a new monitor that provides an excellent integration of APU. 
The following new functions are discussed in this chapter: • Duplex • Multiple text • Outline font • APU monitor enhancement 2.4.1 Duplex In the first version, APU was able to place an overlay at the back of the paper sheet. This allowed the user to take advantage of the duplex option of the printer to place a constant electronic form. Variable data could not be printed on the back side of the paper. The new APU duplex option (Figure 35 on page 50) allows you to print data at the front and at the back of the paper sheet to take full advantage of the printer duplex option. Select Font Character Set Position to . . . . . . . Starting characters Type option, press Enter. 1=Select Font Character Opt Set Library Text C02079B0 QFNTCPL PROPTR EMUL 6 CPI ROMAN BOLD ULTRA-EXP 9-PT C02079G0 QFNTCPL PROPTR EMUL 9 PT ROMAN BOLD ULTRA-EXP 9-PT C02079L0 QFNTCPL PROPTR EMUL 5 CPI SMALL ROMAN BOLD ULTRA-EXP 4-PT CZH200 QFNTOLNLA1 HELVETICA LATIN1-roman med CZH300 QFNTOLNLA1 HELVETICA LATIN1-italic med CZH400 QFNTOLNLA1 HELVETICA LATIN1-roman bold CZH500 QFNTOLNLA1 HELVETICA LATIN1-italic bold CZN200 QFNTOLNLA1 TIMES NEW ROMAN LATIN1-roman med CZN300 QFNTOLNLA1 TIMES NEW ROMAN LATIN1-italic med CZN400 QFNTOLNLA1 TIMES NEW ROMAN LATIN1-roman bold CZN500 QFNTOLNLA1 TIMES NEW ROMAN LATIN1-italic bold F5=Refresh F12=Cancel 50 IBM AS/400 Printing V Figure 35. APU duplex display Consider these points: • If duplex printing is enabled, the Back Overlay field must contain the value *NONE because it cannot print a constant back overlay. • If more than one copy (original page) is required in a page format, duplex printing is not possible because there are never two consecutive pages of the same “copy”. 2.4.2 Multiple Text Mapping APU allows you to define or select a part of a line in the spooled file and map it as a field. You can change the attribute of this mapped area and define a position. APU calculates the actual position automatically. You can change the value to define a new print position. Multiple Text Mapping allows you to place the same mapped data up to four times. An example is if your customer document includes a five-line address. You can print the address a first time, and also print it (or a part) again on the same sheet of paper four subsequent times. The Edit Text Mapping display was modified and shows which entry is actually the Multiple Text entry (Figure 36). SET PAGE LAYOUT OPTIONS Print Definition. . .: SAMPLE Page Format. . . . : *DEFAULT Library. . . . . . . : MYLIB Copy. . . . . . . . : *ORIGINAL Type choices, press Enter. Input drawer. . . . . *DEFAULT *DEFAULT, 2, 3, 4 Default line increment *PRTDEF *INCH *PRTDEF, *INPUT, Value Default Column inc. . . *PRTDEF *INCH *PRTDEF, *INPUT, Va lue Page length. . . . , . "PRTDEF "INCH *PRTDEF, *INPUT, Value Page width . . . . *PRTDEF *INCH *PRTDEF, *INPUT, Value Top margin (down). . "PRTDEF *INCH *PRTDEF, 0, Value Left margin (across)'. *PRTDEF *INCH *PRTDEF, 0, Value Page orientation... *PRTDEF *PRTDEF, *INPUT, 0, 90... Duplex printing.... 1=Yes, 2=Tumble Back Overlay..... *NONE *NONE, Name, F4 for list Position across... *INCH 0, Value Position down.... *INCH 0, Value F3=Exit F4=Prompt F12=Cancel F22=Set Units Chapter 2. Advanced Function Presentation 51 Figure 36. Multiple Text (Part 1 of 2) Figure 37 shows the second target. All attributes, position, font, rotation, and color may be different from one target to the other one. Figure 37. 
APU Multiple Text (Part 2 of 2) Note these restrictions: • The Length field may only be changed on the first target and is protected when the second, third, or fourth target is shown. • The F15=Repeat function key is not enabled if more than one target is specified. • The F22=Set Units function key is only enabled when the first target is shown and is hidden when the second, third, or fourth target is shown. • The F16=Delete function key deletes the entire mapping when the first target is shown. When pressed at the second, third, or fourth target, the additional target is removed, but at least the first mapping is still there. Edit Text Mapping Type Choices, press Enter. From Row / Column : 20 / 15 Mapping . . . : 1 / 2 Length . . . . . : 8 Position across . : 15 *COL Value Position down . . : 20 *ROW Value Font Family . . . : *PRTDEF *PRTDEF, Value F4 for list Point Size. . . : *CALC, Value Bold . . . . . : 1=Yes Italic. . . . . : 1=Yes Rotation. . . . . : *DEFAULT *DEFAULT, 0, 90,180, 270 Color . . . . . . : *PRTDEF *PRTDEF, Value F4 for list F4=Prompt F12=Cancel F16=Delete F22=Set units More... Type Choices, press Enter. From Row / Column : 20 / 15 Mapping . . . : 2 / 2 Length . . . . . : 8 Position across . : 31 *COL Value Position down . . : 67 *ROW Value Font Family . . . : *PRTDEF *PRTDEF, Value F4 for list Point Size. . . : *CALC, Value Bold . . . . . : 1=Yes Italic. . . . . : 1=Yes Rotation. . . . . : *DEFAULT *DEFAULT, 0, 90, 180, 270 Color . . . . . . : *PRTDEF *PRTDEF, Value F4 for list Bottom F4=Prompt F12=Cancel F16=Remove additional target 52 IBM AS/400 Printing V 2.4.3 Outline font support The AS/400 system can download an outline font to IPDS printers. The new version of APU takes advantage of this technology and simplifies the font handling. After the outline fonts are installed (see 4.4, “How fonts are installed” on page 96), the font database must be updated using the following command: CALL QAPU/QYPUSYNC Now you can select an outline font from the Work with Font display (Figure 38). Figure 38. AFP outline fonts in APU 2.4.4 Advanced Print Utility (APU) monitor enhancement The APU monitor (Figure 39) is part of APU and provides a good way to integrate the APU print definition in your environment. The first version of the monitor was limited in its capabilities. The new monitor provides a major enhancement of APU with a lot of new functions and removes restrictions such as: • Spooled file name and APU print definition had to be the same. • The SCS spooled file was set in the hold status only. • All APU spooled files were placed in one unique output queue. An APU print definition is required to use the monitor. An example of how to provide a print definition is presented in 3.2.3, “Creating the print definition” on page 72. The user can now define which elements are relevant for the spooled file selection and what happens to the original SCS spool after APU processing. They can also take more control of the processing themselves. All of these parameters are grouped in an Action. When the monitor finds an action that corresponds with “Selection for input spooled file” (first action sequence), all other sequences from the same action are applied. The action sequences are: Work with Fonts Domain . . . . . . . . : *ALL *USR, *SYS, *ALL Type Options, press Enter. 1=Add 2=Change 4=Delete 5=Details Font Opt Font family Size Style char. 
set Code page Domai TIMES NEW ROMAN 30 Bold-Italic C0N500T0 *DEFAULT *SYS TIMES NEW ROMAN 36 Normal C0N200Z0 *DEFAULT *SYS TIMES NEW ROMAN 36 Italic C0N300Z0 *DEFAULT *SYS TIMES NEW ROMAN 36 Bold C0N400Z0 *DEFAULT *SYS TIMES NEW ROMAN 36 Bold-Italic C0N500Z0 *DEFAULT *SYS TIMES NEW ROMAN Outl *V Normal CZN200 *DEFAULT *SYS TIMES NEW ROMAN Outl *V Italic CZN300 *DEFAULT *SYS TIMES NEW ROMAN Outl *V Bold CZN400 *DEFAULT *SYS TIMES NEW ROMAN Outl *V Bold-Italic CZN500 *DEFAULT *SYS F3=Exit F5=Refresh F12=Cancel Chapter 2. Advanced Function Presentation 53 • Selection for input spooled file • Action for input spooled file • Actions for output spooled file Figure 39. APU monitor 2.4.4.1 Monitor example Imagine the following customer environment: Three different output types are provided in three different output queues (OUTQs). Two printers are available, and we want to set the monitor with the following requirements: • System output (QSYSPRT) must not use an APU print definition. • All jobs in OUTQ1 must be sent to PRT01. • All jobs in OUTQ2 and OUTQ3 must be sent to PRT02. • Application jobs APP01 and APP02 must be sent with a print definition “SAMPLE” applied. Spool Spool Spool Spool Spool Spool Spool Spool 1 Action Input selection Input action Output action Action 1 3 4 5 Action 2 Action 3 Action... 2 6 1 The monitor is invoked each time a spooled file arrives in a monitored output queue or if the spooled file status in a monitored queue changes to *RDY. Spooled files with other status codes are not processed. 2 The monitor checks the input selection from each action rule in a sequential manner. 3 As soon as a spooled file matches the action input selection, the input and output actions are performed. The following actions are ignored. The examples later in this chapter describe how you can create monitor actions. 4 The input action is applied after the selection matches a spooled file. The action can be different according to whether APU can complete the job successfully. 5 The user can define up to 16 output actions. This allows you, for example, to use several different APU print definitions for the same spooled file. 6 One or more spooled files are placed in one or more output queues. Notes 54 IBM AS/400 Printing V • The application's original spooled files must be placed in the OUTQ “SAVE”. • The original QSYSPRT spooled files must be deleted. Figure 40 shows the original spooled files before monitoring. The numbers in the figure are used to identify the spool and actions across the different figures of this example. Figure 40. APU monitor example: Before processing • Monitor actions example In the example, we define two groups of spooled files: the application spooled files and the QSYSPRT spooled files. Only the application spooled files need an APU print definition. In this case, we want to define the actions for the application spooled files first and then the action for the QSYSPRT spooled files. We can say that all spooled files that are not eligible for APU are moved following the QSYSPRT spooled file actions. Figure 40 shows which parameters must be defined for each action in the order of the action. The monitor uses the Input selection parameters of the first action to identify whether the spool and selection match. 
If the input selection parameters do not match the spooled file, the monitor takes the next PRT01 PRT02 OUTQ1 OUTQ2 OUTQ3 SAVE A B B 1 2 4 C C B A 3 QSYSPRT (QSYSPRT) = A APPLICATION (APP01) = B APPLICATION (APP02) = C 1 All QSYSPRT spooled files from OUTQ1 must be moved to OUTQ PRT01. 2 All QSYSPRT spooled files from all other OUTQs must be moved to OUTQ PRT02. 3 A print definition is applied to all application spooled files coming into OUTQ1. A new APU spooled file (result of the APU processing) is placed in the output queue PRT01. The original SCS spooled file is moved into OUTQ SAVE. 4 A print definition is applied to all application spooled files coming into all other OUTQs. A new APU spooled file (a result of the APU processing) is placed in the output queue PRT02 for each original spooled file. The original SCS spooled file is moved into OUTQ SAVE. Notes Chapter 2. Advanced Function Presentation 55 action. As soon as the input selection parameters match the spooled file, all action sequences, such as “Input action” and “Output actions” proceed. The numbers in Table 1 correspond with Figure 40. Table 1. APU monitor: Action example Many other options are possible for each action. You can decide, for example, to delete the original spooled files after processing or hold the spooled files. These options are described later in this section. • Example for output queue after processing In Figure 41 on page 56, you can see that the two QSYSPRT spooled files (A) are in the correct output queues, and all original application spooled files are in output queue SAVE. The new AFPDS spooled files (outcome from APU processing) are placed in the output queues PRT01 and PRT02, depending on where the original was. Action Input selection Input action Output action Action for spool 3 File = APP* OUTQ = Outq1 Success = *outq OUTQ = SAVE Failure = *hold Prtdef = Sample OUTQ = PRT01 Action for spool 4 File = APP* OUTQ = *all Success = *outq OUTQ = SAVE Failure = *hold Prtdef = Sample OUTQ = PRT02 Action for spool 1 File =*all OUTQ = Outq1 Success = *outq OUTQ = PRT01 Failure = *hold Prtdef = *none Action for spool 2 File = *all OUTQ = *all Success = *outq OUTQ = PRT02 Failure = *hold Prtdef = *none 3 Action for the application spooled files in OUTQ1 4 Action for all other application spooled files in all monitored OUTQs 1 Action for all other spooled files in OUTQ1 2 Action for all other spooled files in all other OUTQs Notes 56 IBM AS/400 Printing V Figure 41. APU monitor example: After processing If processing for one spooled file fails, the original spooled file stays in the output queue in *HOLD status following the FAILURE parameter. 2.4.4.2 Using the APU monitor The following sections can help you set up the APU monitor in your environment. Several configuration steps are needed: 1. Specify the queues to be monitored. 2. Configure the APU monitor. 3. Start the APU monitor. 4. Stop the APU monitor. A minimum of one action must be defined for the monitor. All *DEFAULT parameters can be used. This action provides compatibility with the first monitor. The APU main menu is shown in Figure 42. PRT01 PRT02 OUTQ1 OUTQ2 OUTQ3 SAVE 1 2 4 C B 3 QSYSPRT (QSYSPRT) = A APPLICATION (APP01) = B APPLICATION (APP02) = C B B C B A B C A C B 3 4 1 The QSYSPRT spooled file from OUTQ1 is in the output queue PRT01. 2 All QSYSPRT spooled files from the other OUTQs are in the output queue PTR02. 3 The original application SCS spooled files from OUTQ1 are in the output queue SAVE. 
New AFPDS spooled files have been placed in the output queue PRT01. This new spooled file is the result from APU after applying the print definition. 4 All other original application SCS spooled files from all other OUTQs are placed in the output queue SAVE. New AFPDS spooled files have been placed in the output queue PRT02. These new spooled files are the result from APU after applying the print definition. Notes Chapter 2. Advanced Function Presentation 57 Figure 42. APU main menu 2.4.4.3 Specifying the queues to be monitored The first task is to define which OUTQs must be monitored. The Work with APU Monitor window is shown in Figure 43. Now you can add or remove OUTQs in the list. You need to add only the queue where the spooled file action is performed with an APU print definition. If a spooled file comes in other OUTQs, no action is performed from the APU monitor. After all queues are added, you need to configure the APU monitor actions. Figure 43. Work with APU Monitor 2.4.4.4 Configuring the APU monitor action This section describes each part of a monitor action. Each action has the following three parts: APU IBM Advanced Print Utility Select one of the following: Build and Test APU Print Definitions 1. Work with Print Definitions 2. Work with Spooled Files Run APU in Batch Mode 3. Work with APU Monitor 4. Start APU Monitor 5. End APU Monitor Configure APU 6. Set APU Defaults 7. Work with Fonts 8. Configure APU Monitor Action Selection or command ===> Work with APU Monitor APU Monitor status . : Active The output queues in the list are currently monitored by APU Type options, press Enter. 1=Add 4=Remove Output Opt queue Library Text __ OUTQ1 QGPL Input OUTQ1 OUTQ2 QGPL Input OUTQ2 OUTQ3 QGPL Input OUTQ3 F3=Exit F5=Refresh F12=Cancel 58 IBM AS/400 Printing V • Selection for input spooled file • Action for input spooled file • Action for output spooled file The Configure APU Monitor Action display (Figure 44) allows you to create, change, copy, and delete actions. Each action is performed in the sequence shown on the display by the APU print engine. Note: If you want the monitor to work in a similar manner to the first version, a minimum of one action must be defined for the monitor. All *DEFAULT parameters can be used. This action only provides compatibility with the first monitor. You must use Option 1 (Add), but you do not need to define an entry. You must give a sequence number (for example, 10) and text (for example, “Action for compatibility mode”). Press Enter, and the action is created. Figure 44. Configure APU Monitor Action display The F22 key can be used to renumber the entries automatically. The renumbering uses an increment of 10 unless the number of records is greater than 999. In this case, the increment is calculated depending on the number of records. At run time, the monitor retrieves the SCS spooled file attributes and tries to find a matching entry. The monitor evaluates the entries in the order of the user-entered sequence numbers. As soon as the monitor finds a match, it processes the spooled file according to the rest of the action information. If it does not find a match in the table, the spooled file cannot be processed, a message is sent to the monitor's job log, and the spooled file stays in the OUTQ. 2.4.4.5 Creating an action group entry As soon you create or modify an action, the screen shown in Figure 45 appears. You can select one or more action entries. The print engine performs all three entries for each action. 
Configure APU Monitor Action Type options, press Enter. 1=Create 2=Change 3=Copy 4=Delete Opt Sequence Text _1 10 Qsysprt spool in OUTQ1 20 Qsysprt spool in all other OUTQ's 30 QPJOB spool in OUTQ1 40 QPJOB spool in all other OUTQ's 50 All other spool in OUTQ1 60 All other spool in all other OUTQ's F3=Exit F5=Refresh F12=Cancel F22=Renumber Sequence Chapter 2. Advanced Function Presentation 59 Figure 45. Selecting one or more action entries 2.4.4.6 Defining selection criteria for the input spooled file The first display (Figure 46) is used to define selection criteria for the input spooled file. In other words, this display is used to select the SCS spooled file that is processed as input. From this display, the user can decide which spooled file attributes the monitor should use to match an SCS spooled file. When the APU Monitor is running, it looks for a file or files with the attributes that are provided on this display. If APU finds a match between the attributes you enter here and an input spooled file, it processes both entries: Action for Input Spooled File and Action for Output Spooled File. Figure 46. Define Selection for Input Spooled File display Create Action Entry Type choices, press Enter. Sequence . . . . . . . 10 Number Text . . . . . . . . . QSYSPRT spool in OUTQ1 Type options, press Enter. 1=Select Opt Function 1 Define selection for input spooled file 1 Define action for input spooled file 1 Define action for output spooled file F12=Cancel Define Selection for Input Spooled File Sequence . . . . . . : 30 Text . . . . . . . . : QPJOB spool in OUTQ1 Type choices, press Enter. File . . . . . . . . . QPJOB* Name, Generic*, *ALL Output queue . . . . . OUTQ1 Name, Generic*, *ALL Library . . . . . . . *LIBL Name, *LIBL User . . . . . . . . . *ALL User, Generic*, *ALL User Data . . . . . . . *ALL User Data, Generic*, *ALL Form Type . . . . . . . *ALL Form Type, Generic*, *ALL Program . . . . . . . . *ALL Name, Generic*, *ALL Library . . . . . . . Name, *LIBL F12=Cancel 60 IBM AS/400 Printing V The following values can be used by APU to select the input spooled file: • Spooled file name: Can be a specific name, a generic name, or *ALL. • Output queue: Can be a specific output queue, a generic name, or *ALL. • User: Can be a specific user, a generic set, or *ALL. • User Data: Can be a specific entry in the user data field, generic data, or *ALL. • Form Type: Can be a specific form, a generic form, or *ALL. • Program name: Can be a specific program, a generic program, or *ALL. 2.4.4.7 Defining the action for an input spooled file With the next entry (Figure 47), a user can define the action for an input spooled file. This allows a user to tell the monitor what to do with the original SCS spooled file after the monitor processes the spooled file. The user can give instructions to hold, delete, do nothing, or move the SCS spooled file to another output queue. These instructions can be defined differently depending on whether it is a successful completion or a failed completion from the processing. Figure 47. Define Action for Input Spooled File display APU moves the input spooled file to the output queue defined in the Success or Failure fields, depending on the result. It places the file in one of the four status conditions that were previously shown. 2.4.4.8 Defining action for output spooled file example The user can enter information on two displays (which make up an action group) that describes the tasks to be performed by the print engine. 
The user can define between one and 16 entries for the output spooled file, so it is possible to run several print definitions for one unique SCS spooled file. We can take the first example of the APU monitor and add the following additional requirements. Define Action for Input Spooled File Sequence . . . . . . : 30 Text . . . . . . . . : QPJOB spool in OUTQ1 Type choices for input spooled file after successful or failed processing respectively, press Enter. Success . . . . . . . . *OUTQ *NONE, *HOLD, *DELETE, *OUTQ Output queue . . . . OUT1 Name Library . . . . . . *LIBL Name, *LIBL Failure . . . . . . . . *HOLD *NONE, *HOLD, *DELETE, *OUTQ Output queue . . . . Name Library . . . . . . Name, *LIBL F12=Cancel Chapter 2. Advanced Function Presentation 61 Imagine that there is a second location (Paris). Now we must identify which document is for the local system and which one is for the other location. This is possible with the conditional option in the print definition. The user must define two different print definitions. Each uses conditional processing to select which document is in the new spooled file (each print definition produces one spooled file). For the monitor, the user must define two actions for output spooled files. Each action refers to one of the print definitions. At run time, the print engine runs both print definitions with a different output queue for each. Table 2 shows the same example with the additional output actions. Table 2. Action example Figure 48 on page 62 shows how the actions have been executed from the monitor. Due to the conditional processing of the print definition, the application spooled file has been split between the local and Paris output queues. The white spooled file represents that only the location dependent data is present. Action Input selection Input action Output action 1/2 Ouput action 2/2 Action for spool 3 File = APP* OUTQ = Outq1 Success = *outq OUTQ = SAVE Failure = *hold Prtdef = Sample OUTQ = PRT01 Prtdef = Sample2 OUTQ = REMLOC 5 Action for spool 4 File = APP* OUTQ = *all Success = *outq OUTQ = SAVE Failure = *hold Prtdef = Sample OUTQ = PRT02 Prtdef = Sample2 OUTQ = REMLOC 6 Action for spool 1 File =*all OUTQ = Outq1 Success = *outq OUTQ = PRT01 Failure = *hold Prtdef = *none Action for spool 2 File = *all OUTQ = *all Success = *outq OUTQ = PRT02 Failure = *hold Prtdef = *none 3 Action for the application spooled files in OUTQ1. An additional output action sequence is added. 5 A second print definition is applied with a different output queue. 4 Action for all other application spooled files in all monitored OUTQs. 6 An additional output section sequence is added. A second print definition is applied with a different output queue. 1 Action for all other spooled files in OUTQ1. 2 Action for all other spooled files in all other OUTQs. Note: If an empty or incorrect output action is provided, the action for the Input SCS spooled file follows the failed procedure. Notes 62 IBM AS/400 Printing V Figure 48. Spooled file location after processing 2.4.4.9 Defining an action for the output spooled file On the Define Action for Output Spooled File display (Figure 49), specify the name, library, and user-defined parameters for the program to be called by APU before, during, or after processing. PRT01 PARIS PRT02 OUTQ1 OUTQ2 OUTQ3 SAVE 1 5 4 C B 3 QSYSPRT (QSYSPRT) = A APPLICATION (APP01) = B APPLICATION (APP02) = C B B C B A B C B C B 3 6 B C C B A 2 4 1 The QSYSPRT spooled files from OUTQ1 are in PRT01 OUTQ. 
2 All QSYSPRT spooled files from the other OUTQs are in PRT02 OUTQ. 3 All original application spooled files from OUT1 are placed in OUTQ SAVE after processing. A new AFPS spooled file has been placed in PRT01 for each spooled file formatted with the print definition “SAMPLE”. 5 A second AFPDS spooled file formatted with the print definition “SAMPLE2” has been placed in the output queue “REMLOC” for each spooled file. 4 All other original application spooled files from all other OUTQs are placed in OUTQ SAVE after processing. A new AFPDS spooled file has been placed in PRT02 for each spooled file formatted with the print definition “SAMPLE”. 6 A second AFPDS spooled file formatted with the print definition “SAMPLE2” has been placed in the output queue “REMLOC” for each spooled file. Notes Chapter 2. Advanced Function Presentation 63 Figure 49. Define Action for Output Spooled File display The parameters are explained here: • User exit before: The User exit before field contains the name, library, and user-defined parameters for the program to be called by the print engine before it starts to initialize the APU environment. • Print definition: These lines contain values for the library where the print definition is stored and for the run option. The following values can be entered for the run option. If you specify *NONE on the print definition field, any value you place here is ignored. – *NORMAL: This is the default value that instructs the print engine to perform all print engine phases. If this is the first or only action group defined in this entry, *NORMAL is the only valid value for that field. Therefore, on the first action group, this field may not be changed. A complete overview of the print engine phases is provided in 2.4.5, “Print engine” on page 66. – *NOCOPY: If the user wants to apply different print definitions with the same spooled file, this value instructs the print engine to skip the CPYSPL phase, or to reuse the already prepared input spooled file database instead. All other phases are performed normally. This value is only valid if specified in the second or later action group. – *REPRINT: If the user wants to apply the same print definition multiple times to the same spooled file, this value instructs the print engine to skip the CPYSPL, EXTMID, and GENAFP phases. It re-uses the already prepared output AFPDS database instead. This value is only valid if it is specified in the second or later action group. Note: It is important to understand the run option. The run option allows you to reduce the number of processing steps. This can influence the performance. Define Action for Output Spooled File Sequence . . . . . . : 30 Text . . . . . . . . : QPJOB spool in OUTQ1 Action . . . . . . . : 1 / 1 Panel . . . . . . . . : 1 / 2 Type choices, press Enter. User exit before . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Print Definition . . . SAMPLE Name, *SPOOLFILE, *NONE Library . . . . . . . *PRTDEFLIB Name, *PRTDEFLIB, *LIBL Run option . . . . . *NORMAL *NORMAL, *NOCOPY, *REPRI User exit middle . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Output device . . . . . *JOB Name, *JOB Output queue . . . . . PRT01 Name, *DEV, *SPOOLFILE Library . . . . . . . 
*LIBL Name, *LIBL F12=Cancel F15=Next action 64 IBM AS/400 Printing V • User exit middle: This field contains the name, library, and user-defined parameters for the program to be called by the print engine after the input spooled file has been copied to the database. • Output device: Specify the name of the device on which the spooled file is to be printed. The value *JOB causes APU to place the output spooled file in the output queue of the current device. • Output queue: Contains the name of the output queue where the spooled file is to be placed. *SPOOLFILE tells APU to place the output file in the same output queue where the input spooled file was found. *DEV has APU place the file into the output queue of the device specified in the Output device field. Figure 50. Define Action for Output Spooled File display The display shown in Figure 50 is used to specify what is to be done after processing a file: • File: The File field is the name of the output spooled file. *PRTDEF is used if you want the output spooled file to have the same name as the print definition. *SPOOLFILE is used if you want the output spooled file to have the same name as the input spooled file. • User data: The user data field specifies the character string that is attached to the output file. *PRTDEF tells APU to set the value of this field to the name of the processed print definition. *SPOOLFILE tells APU to set this character string value to the data string of the input spooled file. • Form Type: The Form Type field names the form type of the output spooled file. *PRTDEF tells APU to set the form type to the name of the processed print definition. *SPOOLFILE sets the form type of the output file to the form of the input file. • Hold: The Hold field holds a value specifying the status that the output spooled file is to have. *NO sets the value to READY; *YES sets the value to HELD. Define Action for Output Spooled File Sequence . . . . . . : 30 Text . . . . . . . . : QPJOB spool in OUTQ1 Action . . . . . . . : 1 / 1 Panel . . . . . . . . : 2 / 2 Type choices, press Enter. File . . . . . . . . . *SPOOLFILE Name, *PRTDEF, *SPOOLFILE User Data . . . . . . . *SPOOLFILE User Data, *PRTDEF, *SPOOLFILE Form Type . . . . . . . *SPOOLFILE Form Type, *PRTDEF, *SPOOLFILE Hold . . . . . . . . . *NO *YES, *NO Save . . . . . . . . . *NO *YES, *NO, SPOOLFILE Output bin . . . . . . *DEVD 1-65536, *DEVD, *SPOOLFILE User exit after . . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value F12=Cancel F15=Next action Chapter 2. Advanced Function Presentation 65 • Save: The Save field specifies what happens to the output spooled file. *NO does not save the file; *YES saves the file. *SPOOLFILE performs the same action to the output spooled file as was done to the input spooled file. • Output bin: The Output bin field identifies the output bin of the printer. *DEVD sends the output to the bin that is specified as the printer device default. *SPOOLFILE is used to specify the output bin of the input spooled file. • User exit after: The User exit after field contains the name, library, and user-defined parameter for the program to be called by APU after the output spooled file has been created. 2.4.4.10 Applying a print definition manually or with a command Option 2 of the APU main menu allows you to apply a print definition to a spooled file manually. All parameters are adapted following the new monitor capability. The APYPRTDEF command has the same capability. 
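For batch or scripted use, the same work can be done by calling APYPRTDEF directly, for example from a CL program or a job scheduler entry. The sketch below applies one print definition to a single spooled file; the values mirror the Apply Print Definition prompt shown later in Figure 67, but the keyword names themselves are assumptions, so prompt the command with F4 to confirm them on your system.

   /* Apply print definition ENHANCE in library APUDEF to spooled file    */
   /* ENHANCE, number 3, of job 023810/CEDRIC/PRT_ORDER.                  */
   /* Keyword names are illustrative only; verify them with F4.           */
   APYPRTDEF  FILE(ENHANCE) JOB(023810/CEDRIC/PRT_ORDER) SPLNBR(3) +
              PRTDEF(APUDEF/ENHANCE) RUNOPT(*NORMAL)

Run this way, the command produces the same AFPDS spooled file as applying the print definition interactively with option 2.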
2.4.4.11 Starting the APU monitor
The display shown in Figure 51 allows you to start one monitor job and shows how many monitor jobs are already active.

 APU                     IBM Advanced Print Utility
 Select one of the following:
      Build and Test APU Print Definitions
  _________________________________________________________________________
 |                            Start APU Monitor                            |
 |                                                                          |
 |  Number of active monitor jobs . . . . . . . . . . . . . . . . . :    0 |
 |  Number of monitor jobs in job queue(s) . . . . . . . . . . . . . :   0 |
 |                                                                          |
 |  Type choices, press Enter.                                              |
 |                                                                          |
 |  Job description . . . . . . .  QYPUJOBD   Name                          |
 |    Library . . . . . . . . . .  *LIBL      Name, *LIBL, *CURLIB          |
 |__________________________________________________________________________|
 F12=Cancel
 ===>

Figure 51. Start APU monitor display

Type the names of the job description and the library where it is stored. Then, press Enter to start the monitor. After you press Enter, you return to the main menu. A message telling you that the APU monitor is started is shown at the bottom of the display.

2.4.4.12 Stopping the APU monitor
To stop the APU monitor, return to the APU main menu, and select option 5 (End APU Monitor).

Note these considerations:
• The maximum number of entries is 9999.
• The maximum number of output action groups per entry is 16.

2.4.5 Print engine
The following steps refer to the phases of the APU print engine. For each phase, the mnemonic and run-time status indicator are listed for the current release of the engine, which uses four phases, and for the new release, which uses eight phases and indicates them in the same way at run time.

1. Call the user exit program "before":
   CURRENT: not available
   NEW: EXTBEF (*** --- --- --- --- --- --- ---)
2. Set up the internal environment:
   CURRENT: INZENV (*** --- --- ---)
   NEW: INZENV (=== *** --- --- --- --- --- ---)
3. Create an internal spooled file database using the Copy Spooled File (CPYSPLF) command:
   CURRENT: CPYSPL (--- *** --- ---)
   NEW: CPYSPL (=== === *** --- --- --- --- ---)
4. Call the user exit program "middle":
   CURRENT: not available
   NEW: EXTMID (=== === === *** --- --- --- ---)
5. Process the input and create an AFPDS output database:
   CURRENT: GENAFP (--- --- *** ---)
   NEW: GENAFP (=== === === === *** --- --- ---)
6. Convert the database to a spooled file using the Print AFP Data (PRTAFPDTA) command:
   CURRENT: PRTAFP (--- --- --- ***)
   NEW: PRTAFP (=== === === === === *** --- ---)
7. Call the user exit program "after":
   CURRENT: not available
   NEW: EXTAFT (=== === === === === === *** ---)
8. Perform a post-processing action on the SCS input spooled file:
   CURRENT: not available
   NEW: INPACT (=== === === === === === === ***)

Chapter 3. Enhancing your output
This chapter demonstrates how you can transform a standard AS/400 spooled file and enhance the output without any application modifications. The following print applications are used in this chapter:
• Advanced Print Utility (APU)
• Page Printer Formatting Aid (PPFA)
APU and PPFA are part of the AFP PrintSuite for AS/400 family of print products that enable existing applications to take advantage of Advanced Function Presentation (AFP). In this chapter, we show a simple example of output enhancement. Both APU and PPFA can provide far more complex formatting.
Figure 52 shows typical output from a standard AS/400 SCS spooled file. Using plain paper and the Courier font results in a dull document appearance. Figure 52.
Output from a standard SCS spooled file 68 IBM AS/400 Printing V 3.1 How your print output could look Taking advantage of a laser printer and either APU or PPFA, we can produce a more attractive document (Figure 53). Figure 53. Enhanced output Only a few changes have been made to enhance the page presentation: • A Gothic font was used instead of Courier 10. • A Helvetica 14-point font was used for the invoice number. • An overlay was used comprised of: – Lines – Boxes – Logo Chapter 3. Enhancing your output 69 3.2 Using Advanced Print Utility (APU) APU provides the easiest way to modify the presentation output of an existing application, without changing that application. You can display your spooled file on the screen and modify the appearance of the data. No DDS changes or programming is required. You only need to know your application output and be familiar with the AS/400 system. This section describes a simple example with APU. 3.2.1 APU environment APU produces AFPDS spooled files from your original SCS spooled file. These may be printed on IPDS printers, and on ASCII printers using host print transform. For more information, see Chapter 6, “Host print transform” on page 137. Table 3 shows which components are required. Table 3. Component requirement for APU 3.2.2 Setting up APU After APU and all the required components are installed, you must set the APU default parameters and synchronize the font database. 3.2.2.1 Fonts APU does not use printer resident fonts. Instead, it uses AS/400-resident fonts (character sets) in either raster or outline format. See Chapter 4, “Fonts” on page Components IPDS printer HPT attached printer 1 APU YES YES 2 PSF/400 YES NO 3 IBM AFP Font Collection Recommended Recommended 4 IBM AFP Printer Driver Recommended Recommended 1 APU is part of PrintSuite/400 and must be installed on each system to create or apply an APU print definition. 2 PSF/400 is required using an IPDS printer configured as AFP=*YES. PSF/400 is not required for ASCII printer using host print transform (HPT). 3 APU uses downloaded fonts in AFP Raster or Outline format. The QFNTCPL library is supplied with OS/400 but contains fonts in 240-pel resolution. If the printer uses 300-pel resolution, font substitution occurs. The IBM AFP Font Collection contains additional fonts in both resolutions (see 4.3, “Which fonts are available” on page 93). Note: Pel is an abbreviation for picture element or pixel 4 APU provides you with the ability to draw lines and boxes. For greater functionality, you can use a tool, such as the AFP driver, to create an electronic form (overlay). For more information about the AFP driver, see Chapter 5, “The IBM AFP Printer Driver” on page 117. Notes 70 IBM AS/400 Printing V 89, for additional information about AFP fonts and how PSF/400 provides font management. Follow this process for using fonts: 1. After all your font libraries are installed, you must add them to your library list by using the ADDLIBLE command. Alternatively, you can change the QUSRLIBL system value to reference them. 2. The following command synchronizes the font database: CALL QAPU/QYPUSYNC 3. This identifies the system fonts to APU. When the database is synchronized, type the following command to access the APU menu: GO QAPU/APU The display shown in Figure 54 appears. Figure 54. APU main menu 4. Display the font list using option 7 (Work with Fonts) on the APU menu. Figure 55 shows the Work with Fonts display. 
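Taken together, steps 1 through 3 amount to only a few CL commands. This is a minimal sketch; QFNTCPL is the no-charge compatibility font library, while QFNT01 stands in for whichever chargeable font libraries are installed on your system.

   ADDLIBLE LIB(QFNTCPL)   /* 240-pel compatibility fonts shipped with OS/400  */
   ADDLIBLE LIB(QFNT01)    /* example of a purchased font library              */
   CALL QAPU/QYPUSYNC      /* synchronize the APU font database                */
   GO QAPU/APU             /* open the APU main menu                           */

Alternatively, add the font libraries to the QUSRLIBL system value so that they are in the library list of every job.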
Note that the Domain parameter must be changed to *ALL if you want to see all the fonts that are available to APU. APU IBM Advanced Print Utility Select one of the following: Build and Test APU Print Definitions 1. Work with Print Definitions 2. Work with Spooled Files Run APU in Batch Mode 3. Work with APU Monitor 4. Start APU Monitor 5. End APU Monitor Configure APU 6. Set APU Defaults 7. Work with Fonts 8. Configure APU Monitor Action Selection or command ===> F3=Exit F4=Prompt F9=Retrieve F12=Cancel F16=System main menu F23=Set initial menu Chapter 3. Enhancing your output 71 Figure 55. APU font list 3.2.2.2 APU default setup To setup the APU default, follow this process: 1. The Set APU Defaults display is shown when you select option 6 on the main APU menu. These defaults are used to set the print definition attributes at creation time. You cannot create a print definition if no defaults are provided. 2. Before you set or change any parameters, check that all your font resource libraries are in your library list. Otherwise, you will not have all the fonts or code pages available for selection. An example is shown in Figure 56. Figure 56. APU default Work with Fonts Domain . . . . . . . . : *ALL *USR, *SYS, *ALL Type Options, press Enter. 1=Add 2=Change 4=Delete 5=Details Font Opt Font family Size Style char. set Code page Domai GOTHIC UPPER 6 Normal C0L00GSC T1000893 *SYS GOTHIC UPPER 6 Normal C0L00GUC *DEFAULT *SYS GOTHIC UPPER 8 Normal C0L0GU15 *DEFAULT *SYS GOTHIC UPPER 10 Normal C0L0GU12 *DEFAULT *SYS GOTHIC UPPER 12 Normal C0L0GU10 T1L00FMT *SYS GOTHIC13 9 Normal C0D0GT13 *DEFAULT *SYS HELVETICA 6 Normal C0H20060 *DEFAULT *SYS HELVETICA 6 Italic C0H30060 *DEFAULT *SYS HELVETICA 6 Bold C0H40060 *DEFAULT *SYS HELVETICA 6 Bold-Italic C0H50060 *DEFAULT *SYS HELVETICA 7 Normal C0H20070 *DEFAULT *SYS HELVETICA 7 Italic C0H30070 *DEFAULT *SYS F3=Exit F5=Refresh F12=Cancel Set APU Defaults Type choices, press Enter. Unit of measure . . . . *INCH *INCH, *CM, *ROWCOL, *UNITS Decimal point character . . or , Font family . . . . . . COURIER COMP Value F4 for List Color . . . . . . . . . *DEFAULT *DEFAULT, Value F4 for List Definition library . . APUDEF Name Code Page . . . . . . . T1V10037 Name F4 for List Addl. resource libs. . AFPRSC Name Name Name Name Job description . . . . QYPUJOBD Name Library . . . . . . . *LIBL Name, *LIBL F3=Exit F4=Prompt F12=Cancel 72 IBM AS/400 Printing V 3. Choose a Font family as the default font by pressing F4 for a list. 4. We recommend that you create a separate library in which to store your print definitions. In the example, this is called APUDEF. 5. The code page depends on the language of your country. The system value QCHRID gives the character ID and code page of your system. The “Printer Resident to Host Resident Code Page Mapping” table in Appendix D of Printer Device Programming, SC41-5713, provides a conversion table between the system code page and the AFP code page. The system code page in this example is 37, and the AFP code page needed for APU is T1V10037. 6. In the Additional resource libraries parameter, type the name of the library or libraries containing your overlays and page segments. APU can only use the resources placed in these libraries. 3.2.3 Creating the print definition To create the print definition, follow these steps: 1. Select option 1 from the APU menu to access the display as shown in Figure 57. Figure 57. Work with Print Definitions display 2. Create a print definition by selecting option 1. 
The Create a Print Definition display is shown in Figure 58. 3. Type a name and descriptive text. Press Enter to finish creating the print definition. Work with Print Definitions Library . . . . . . . . APUDEF Name, *CURLIB Type options, press Enter. 1=Create 2=Change 3=Copy 4=Delete 5=Display contents 6=Print contents 7=Rename 10=Define 12=Work with Opt Name Text 1_ _________ F3=Exit F5=Refresh F12=Cancel Chapter 3. Enhancing your output 73 Figure 58. Create a Print Definition 4. After the print definition is created, the Work with Print Definition display is shown again. Type option 10 to access “Define a Print Definition”. 5. Select a sample spooled file by taking the appropriate option. This must be an SCS spooled file of the type you want to modify. 6. Select Set Print Definition Attributes (Figure 59). Figure 59. Set Print Definition Attributes (Part 1 of 2) Note: The *INPUT value means that APU can read the SCS spooled file attributes. Many spooled files do not have the expected attributes. For example, if the width is 132 and length 66, APU interprets this as landscape orientation. You can set the required page size information in each supported value (inches, cm, and so on). If you change the Unit of Measure, APU can recalculate the values in the appropriate units. In this example, we have overwritten the page length and width values with length=70 and width=82. 7. Press Page Down or Page Forward on your keyboard to access the display shown in Figure 60 on page 74. Create a Print Definition Type choices, press Enter. Print Definition . . . ENHANCE Name Library . . . . . . . APUDEF Name, *CURLIB Multiple page Formats . *NO *YES, *NO Text . . . . . . . . . Sample APU Print Definition to enhance output F12=Cancel Set Print Definition Attributes Print Definition . . : ENHANCE Library . . . . . . : APUDEF Type choices, press Enter. Unit of Measure . . . . *CM *INCH, *CM, *ROWCOL, *UNITS Default line increment *INPUT *UNITS *INPUT, Value Default column inc. . . *INPUT *UNITS *INPUT, Value Page length . . . . . . 70 *ROW *INPUT, Value Page width . . . . . . 82 *COL *INPUT, Value Top margin (down) . . . 0 *ROW 0, Value Left margin (across) . 0 *COL 0, Value Page orientation . . . *INPUT *INPUT, 0, 90, 180, 270 Apply field attributes 1=Yes F3=Exit F12=Cancel F22=Set Units 74 IBM AS/400 Printing V Figure 60. Set Print Definition Attributes (Part 2 of 2) 8. In this example, we pressed F4 next to the Default font family field and selected a monospaced font. Press Enter to complete the “Define a Print Definition” part. The Gothic font is the default font in your Print Definition. You can select different fonts for parts of the document, which are described in the “Map Text” step later in this section. 9. Press Enter to complete, and press F3 to save and exit. 3.2.4 Working with the print definition This section shows an example where the appearance of the spooled file data is modified, and some new elements are added. The following steps are described in this section: • Work with Copies • Define the Copy Follow these steps: 1. On the Work with Print Definitions display, type 12 next to your Print Definition. The display shown in Figure 61 appears. Set Print Definition Attributes Print Definition . . : ENHANCE Library . . . . . . : APUDEF Type choices, press Enter. Default font family . . GOTHIC COMP *APUDFT, Value F4 for List Point size . . . . . 12 *CALC, Value Bold . . . . . . . . 1=Yes Italic . . . . . . . 1=Yes Default Color . . . . . *APUDFT *APUDFT, Value F4 for List Addl. resource libs. 
. OVERLAY Name Name Name Name F3=Exit F4=Prompt F12=Cancel Chapter 3. Enhancing your output 75 Figure 61. Work with Copies APU provides the *ORIGINAL copy. In the following section, you can modify the presentation of the data in this *ORIGINAL copy. You can then select option 3 if you want to duplicate the data (create a second copy), followed by option 10 to modify the data appearance of the second copy. Typically you might create the original copy to your exact requirements, and then make a copy of the Copy. Change some characteristic slightly (for example, have a different input drawer selected) so the second copy is printed on a different paper type (punched hole or colored paper, for example). Do not make a copy of a Copy until you are completely satisfied with the appearance of your *ORIGINAL copy. Otherwise, you may make the same changes multiple times. 2. After you select option 10, several functions are available. Select the functions shown in Figure 62 on page 76. Work with Copies Print Definition . . : ENHANCE Page Format . . . . . : *DEFAULT Library . . . . . . : APUDEF Type options, press Enter. 1=Create 2=Change 3=Copy 4=Delete 7=Rename 10=Define Opt Name Text 10 *ORIGINAL Original (first copy) F3=Exit F5=Refresh F12=Cancel 76 IBM AS/400 Printing V Figure 62. Define a Copy Note: Only some APU options are described here. More information about functions, such as data mapping and conditional processing, are available in the AS/400 APU User's Guide, S544-5351, or in AS/400 Guide to AFP and PSF, S544-5319. APU can also add such elements as boxes, lines, constant text and barcodes, page segments, and overlays. Only the options in bold are partially described in the following section. The following sections show the displays for all selected options, and do not show the Define a Copy display after each step. 3. Press Enter, and the first selected option display is shown. APU permits a different format for each copy. Change the drawer parameter on the Set Page Layout Options display to select another paper source for one of your copies (see Figure 63). Define a Copy Print Definition . . : ENHANCE Page Format . . . . . : *DEFAULT Library . . . . . . : APUDEF Copy . . . . . . . . : *ORIGINAL Type options, press Enter. 1=Select Opt Function Select a sample spooled file 1 Set page layout options 1 Define field mapping Define constants Define boxes Define page segments 1 Define overlays F3=Exit F12=Cancel Chapter 3. Enhancing your output 77 Figure 63. Set Page Layout Options 4. After you press Enter, the next option is shown. The Define Field Mapping display (Figure 64) shows the content of our SCS spooled file. APU maintains the correct line spacing of the spooled file. Figure 64. Define Field Mapping 5. In this example, we change the font of the INVOICE NR. field. Place the cursor on the I of INVOICE, and press PF14. The rest of the line appears in reverse Set Page Layout Options Print Definition . . . ENHANCE Page Format . . . . . : *DEFAULT Library . . . . . . . APUDEF Copy . . . . . . . . : *ORIGINAL Type choices, press Enter. Input drawer . . . . . 2 *DEFAULT, 1, 2, 3, 4 Default line increment *PRTDEF *CM *PRTDEF, *INPUT, Value Default Column inc. . . *PRTDEF *CM *PRTDEF, *INPUT, Value Page length . . . . . . *PRTDEF *CM *PRTDEF, *INPUT, Value Page width . . . . . . *PRTDEF *CM *PRTDEF, *INPUT, Value Top margin (down) . . . *PRTDEF *CM *PRTDEF, 0, Value Left margin (across) . *PRTDEF *CM *PRTDEF, 0, Value Page orientation . . . *PRTDEF *PRTDEF, *INPUT, 0, 90... Duplex printing . . 
. . 1=Yes, 2=Tumble Back overlay . . . . . *NONE *NONE, Name F4 for list Position across . . . *CM 0, Value Position down . . . . *CM 0, Value F3=Exit F4=Prompt F12=Cancel F22=Set Units Define Field Mapping Spooled file . . . . : ENHANCE Page/Line . . . . . . : 1/1 Control . . . . . . . . Columns . . . . . . . : 1 - 78 *...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+... Cedric & Marc Ltd, 64 Dream Avenue Doerfli CO 80301 Phone 012 008 1988 Fax 006 001 1992 INVOICE NR. 123456 Kaffee H. Goodlooks Customer Nr. 778899 No return Avenue 12 More... F3=Exit F11=Hide mapping F12=Cancel F13=27x132 Mode F14=Start field F15=End field F16=Delete range F24=More keys 78 IBM AS/400 Printing V video. Move the cursor to the place where your invoice number stops and press PF15. 6. Select the function to Map as Text in the pop-up window (not shown). The Map Text display appears as shown in Figure 65. Figure 65. Map Text 7. Move your cursor to the Font family field, and press the PF4 key. Select a font from the list (Helvetica 14 point was used for this example). Press Enter to return to the Define Field Mapping display. You do not need to define field mappings for the complete page. APU will print all the remaining data according to the default values in the Print Definition Attributes. You can add fixed elements such as lines, boxes (no shading), constant text, and barcodes in APU. No additional software is required. Page segments and overlays must be created with an appropriate tool, such as the AFP driver, or by using AFP Utilities/400. See Chapter 2, “Advanced Function Presentation” on page 35, for information on using these resources and Chapter 5, “The IBM AFP Printer Driver” on page 117, to create an overlay with the AFP driver. 8. You can place overlays using the display shown in Figure 66. Map Text Type choices, press Enter. From Row / Column : 5 / 9 Mapping . . . . . : 1 / 1 Length . . . . . . 31 Position across . . 2.032 *CM Value Position down . . . 2.117 *CM Value Font family . . . . *PRTDEF *PRTDEF, Value F4 for list Point size . . . *CALC, Value Bold . . . . . . 1=Yes Italic . . . . . 1=Yes Rotation . . . . . *DEFAULT *DEFAULT, 0, 90, 180, 270 Color . . . . . . . *PRTDEF *PRTDEF, Value F4 for list More... F4=Prompt F12=Cancel F22=Set Units Chapter 3. Enhancing your output 79 Figure 66. Define Overlay Positioning: Initial display 9. Select 1, and enter the overlay name. Press F4 as a prompt, if required. 10.Press PF12 to complete your Print Definition, and type 1 to save it on the confirmation display. 3.2.5 Testing the print definition To test the print definition, follow this process: 1. To test the new Print definition with our spooled file interactively, go to the APU main menu, and select option 2 (Work with Spooled Files). 2. A list of SCS spooled files is displayed. Select an appropriate spooled file (one with data in the same layout as your sample spooled file) by typing 1 next to it. You can change the output queue and the user to narrow the search if necessary. The display is shown in Figure 67. Figure 67. Apply Print Definition (APYPRTDEF) Define Overlay Positioning Print Definition . . : ENHANCE Page Format . . . . . : *DEFAULT Library . . . . . . : APUDEF Copy . . . . . . . . : *ORIGINAL Type options, press Enter. 1=Create 2=Change 3=Copy 4=Delete Position Position Unit of Opt across down measure Overlay 1 *CM (There are no overlay positioning defined) F3=Exit F5=Refresh F12=Cancel Apply Print Definition (APYPRTDEF) Type choices, press Enter. 
Input Spooled File . . . . . . . > ENHANCE Name Job name . . . . . . . . . . . . > PRT_ORDER Name, * User . . . . . . . . . . . . . > CEDRIC Name Number . . . . . . . . . . . . > 023810 000000-999999 Spooled file number . . . . . . > 3 1-9999, *ONLY, *LAST Print Definition . . . . . . . . *SPOOLFILE Name, *NONE, *SPOOLFILE Library Name . . . . . . . . . *PRTDEFLIB Name, *PRTDEFLIB, *LIBL Run option . . . . . . . . . . . *NORMAL *NORMAL, *NOCOPY, *REPRINT Post processing SUCCESS: Input Spooled File . . . . . . *HOLD *HOLD, *NONE, *DELETE, *OUTQ Output queue . . . . . . . . . Name Library Name . . . . . . . . Name, *LIBL Post processing FAILURE: Input Spooled File . . . . . . *HOLD *HOLD, *NONE, *DELETE, *OUTQ Output queue . . . . . . . . . Name Library Name . . . . . . . . Name, *LIBL F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display . F24=More keys 80 IBM AS/400 Printing V 3. Type the print definition name, and press Enter. The new AFP spooled file is created and the SCS spooled file is held on the output queue with a status of HLD. The creation of the AFP spooled file is indicated by a series of lines and asterisks at the bottom of the display (moving from left to right). The particular phases are described in 2.4.5, “Print engine” on page 66. 3.2.6 Printing using the APU monitor Once you test your APU print definitions interactively, you may want to use them in batch mode for production printing. The APU monitor is a facility included with APU that can monitor output queues, apply print definitions, and send the resulting AFP spooled files to other output queues, all without intervention. It is an extremely powerful part of APU, and its capabilities are extensive. Two versions of the APU monitor are available. The latest monitor is set up using option 8 on the main APU menu. 3.2.6.1 Printing using the first version of the monitor Follow these steps: 1. Type 3 on the APU menu and add the output queue to be monitored. Use an output queue without a printer writer started to it if possible. 2. Type 4 on the APU menu to start the monitor. 3. The print definition name must match the spooled file name. 4. The spooled file must be in RDY status. 5. The spooled file must be of type *SCS. 3.2.6.2 Printing with the new version of the monitor The new monitor can be used in a similar manner. You only have to create an Action entry with default attributes using option 8 from the APU main menu. You do not need to define or change the three functions shown in Figure 68. The default values are defined to be compatible with the first monitor version. For more information about the new monitor capabilities, see Chapter 2, “Advanced Function Presentation” on page 35. To create an Action entry for the APU monitor, use option 8 on the APU main menu. The display is shown in Figure 68. Chapter 3. Enhancing your output 81 Figure 68. APU with monitor After your action entry is created, you need to add the output queue and start the monitor as already described for the first monitor. As soon as a spooled file with the same name as your print definition arrives on the monitored queue, the APU spooled file is automatically produced. 3.3 Using the Page Printer Formatting Aid Page Printer Formatting Aid (PPFA) is a compiler for standard AFP print formatting resources called page definitions and form definitions. It compiles the source modules for these resources into AS/400 objects. 
AFP page and form definitions are source and object level compatible, so these modules can be interchanged between AFP systems and applications. Note: Do not confuse an APU print definition with the two AFP resources mentioned here. Page and form definitions are commonly referred to as “pagedefs” and “formdefs”. Table 4. Components required for this example Components IPDS printer 1 PPFA YES 2 PSF/400 YES 3 Font Collection Recommended 4 AFP Utilities/400 Optional 5 AFP Printer Driver Optional Create action entry Type choices, press Enter. Sequence . . . . . . . 10 Number Text . . . . . . . . . Use default parameter Type options, press Enter. 1=Select Opt Function Define selection for input spooled file Define action for input spooled file Define action for output spooled file F12=Cancel 82 IBM AS/400 Printing V 3.3.1 Creating a source physical file for form and page definitions You must first create a source physical file to contain the source code: 1. Create a source physical file to contain the code: CRTSRCPF FILE(MYLIB/PPFASRC1) MBR(ENHANCE) 2. Invoke the Source Entry Utility (SEU) to edit the new member (you can also use WRKMBRPDM or other commands, according to your preferences): STRSEU SRCFILE(MYLIB/PPFASRC1) SRCMBR(ENHANCE) 3. Code the form definition and page definition as shown here: 0001.00 /*-----------------------------------------------------*/ 0002.00 /* ENHANCE OUR OUTPUT WITH */ 0003.00 /* PAGE PRINTER FORMATTING AID */ 0010.00 /*-----------------------------------------------------*/ 0011.00 FORMDEF FORMJH 1 0012.00 REPLACE YES ; 0013.00 0014.00 0015.00 COPYGROUP FORM1 ; 2 0016.00 SUBGROUP COPIES 1 OVERLAY SMPOVL1 0017.00 SUBGROUP COPIES 1 OVERLAY SMPOVL2 0012.00 0013.00 0014.00 3 0015.00 SETUNITS 1 INCH 1 INCH 0016.00 0017.00 0018.00 0019.00 0020.00 1 PPFA is a part of the AFP PrintSuite/400 and does not require any additional components to create page and form definitions. The PPFA resources are usable on most other AFP systems with a Print Services Facility (PSF) installed. PSF/2 does not support Page Definitions. 2 PSF/400 is required to print with PPFA resources. The IPDS printer must be configured with AFP=*YES. See 3.3.4, “Considerations” on page 88, for more information about the restrictions of spooled files processed with PPFA resources. 3 PPFA uses downloaded fonts in AFP Raster or Outline format. The QFNTCPL library is delivered with OS/400 but contains fonts in 240-pel resolution. If the printer uses 300-pel resolution, font substitution occurs and the presentation of the data may not be as desired. See 4.6, “Font substitution” on page 101, for more information about font substitution. 4 AFP Utilities/400 can be used to create overlays that are referenced by PPFA. 5 The AFP Printer Driver can also be used to create overlays. For more information about the AFP driver, see Chapter 5, “The IBM AFP Printer Driver” on page 117. Notes Chapter 3. Enhancing your output 83 In the same source file, we can code the page definition source shown in the following examples: 0021.00 0022.00 PAGEDEF PAGEJH 0023.00 WIDTH 8.0 IN 1 0024.00 HEIGHT 11.0 IN 0025.00 DIRECTION ACROSS 2 0026.00 REPLACE YES; 0027.00 FONT NORM CR12; 3 0028.00 FONT BIG H200D0; 0029.00 0030.00 PRINTLINE 0031.00 0032.00 POSITION 0.8 IN 0.166 1 0033.00 FONT NORM 2 0034.00 REPEAT 7 3 0035.00 0036.00 PRINTLINE 1 0038.00 POSITION 0.8 IN 1.5 2 0039.00 FONT BIG 3 0041.00 1 FORMJH is the name of the form definition. REPLACE *YES is used to tell PPFA to replace the form definition if it already exists. 
2 We use the Copy Group statement to provide two copies. Note that each copy has a different overlay. 3 We set the general parameter using the SETUNITS command. Notes 1 You must define the paper size in inches, millimeters, or units. 2 This specifies portrait orientation. 3 Define the font using a coded font name. “CR12” is used for most of the data. “H200D0” is used for the “INVOICE NR” in our invoice example. Notes 1 After the first input line is recognized, provide the information where the data will start to print. Inches or millimeters can be used. 2 Define the font to be used for the data using the coded font name. 3 The same formatting is used for the next seven lines of your data. Notes 84 IBM AS/400 Printing V 0042.00 PRINTLINE 0044.00 POSITION 0.8 IN 1.666 1 0045.00 FONT NORM 2 0046.00 REPEAT 38 3 This is a simple example of using page and form definitions for document formatting. The intent is to illustrate the concept, not to define the capabilities. These AFP resources are capable of producing sophisticated document output, including program logic based on the line data field content. 3.3.2 Compiling the form and page definitions Before you can use the page and form definition, you must create the AFP resource by compiling the source code. The following commands are used for this purpose: CVTPPFASRC Create a page definition and form definition. CRTFORMDF Create a form definition object. CTRPAGDFN Create a page definition object. 3.3.2.1 Creating the AFP resources You must create a physical file in which to place the compiled objects. In the following example, we use: • A physical file FORMDEF to receive the AFP form definition as a member. • A physical file PAGEDEF to receive the AFP page definition as a member. Add the QPPFA library in your library list. Otherwise, you cannot find the CVTPPFASRC command. Type CVTPPFASRC on the command line, and press PF4 (Prompt) to see the display like the example shown in Figure 69. 1 This keyword is used to define the next print line after the repetition. You must define the entire page of your existing data in the spooled file. 2 Define the position for INVOICENR. No keyword REPEAT is required if only one line or line portion is defined. 3 Now you can use the Helvetica 14 point (pt) font defined on line 0028.00. Notes 1 Define the next print position of the data. 2 Use the font NORM for the rest of the data. 3 REPEAT 38 indicates that the next 38 lines have the same formatting. Note: Chapter 3. Enhancing your output 85 Figure 69. Creating the AFP resources 3.3.2.2 Creating the form and page definition objects You need to invoke the CRTFORMDF command to create the AS/400 form definition. Type CRTFORMDF on the command line, and press PF4 to see a display like the example shown in Figure 70. Figure 70. Creating the AS/400 *FORMDF object Convert PPFA Source (CVTPPFASRC) Type choices, press Enter. File . . . . . . . . . . . . . . > PPFASRC1 Name Library . . . . . . . . . . . > MYLIB Name, *LIBL, *CURLIB Member . . . . . . . . . . . . . > ENHANCE Name Form definition file . . . . . . > FORMDEF Name, *NONE Library . . . . . . . . . . . > MYLIB Name, *LIBL, *CURLIB Page definition file . . . . . . > PAGEDEF Name, *NONE Library . . . . . . . . . . . MYLIB Name, *LIBL, *CURLIB Listing output . . . . . . . . . > *NONE *PRINT, *NONE Source listing options . . . . . *SRC, *NOSRC, *SECLVL... Bottom F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys You can create one or both objects at the same time. 
This example shows that both objects are compiled and placed in two different physical files. The member resulting from the compilation has the name defined in the source code (FORMJH for the form definition and PAGEJH for the page definition). A prefix is added at the front of the name depending on the type of object, F1 and P1. Note Create Form Definition (CRTFORMDF) Type choices, press Enter. Form definition . . . . . . . . > F1FORMJH Name Library . . . . . . . . . . . > MYLIB Name, *CURLIB File . . . . . . . . . . . . . . > FORMDEF Name Library . . . . . . . . . . . > MYLIB Name, *LIBL, *CURLIB Member . . . . . . . . . . . . . *FORMDF Name, *FORMDF Text 'description' . . . . . . . > 'New form definition sample' Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 86 IBM AS/400 Printing V Notice we add the F1 prefix to the name of the form definition so it picks up the correct member name. To create the page definition, type CRTPAGDFN on the command line and press PF4 to see a display like the example shown in Figure 71. Figure 71. Creating the AS/400 *PAGDFN object 3.3.3 Printing with the form and page definitions You can only use page and form definitions with LINE data or AFPDS. Line data is similar to SCS data but does not contain any SCS formatting information. You must change or override your application printer file to specify DEVTYPE=*LINE and add the form definition and page definition names. Note: DEVTYPE=*LINE is not supported for externally described printer files (DDS support). Figure 72 shows you how to change the device type in the printer file. Create Page Definition (CRTPAGDFN) Type choices, press Enter. Page definition . . . . . . . . P1PAGEJH Name Library . . . . . . . . . . . MYLIB Name, *CURLIB File . . . . . . . . . . . . . . PAGDEF Name Library . . . . . . . . . . . MYLIB Name, *LIBL, *CURLIB Member . . . . . . . . . . . . . *PAGDFN Name, *PAGDFN Text 'description' . . . . . . . Sample page definition Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 3. Enhancing your output 87 Figure 72. Change Printer File (CHGPRTF) (Part 1 of 2) Press the Page Down or Page Forward key to move to the next display, shown in Figure 73. Figure 73. Change Printer File (CHGPRTF) (Part 2 of 2) Complete your printer file modification by pressing Enter. Now you can invoke your print program. A spooled file is placed in the output queue. Because the spooled file contains only line data, you cannot display or send this spooled file. Change Printer File (CHGPRTF) Type choices, press Enter. File . . . . . . . . . . . . . . > ENHANCE Name, generic*, *ALL Library . . . . . . . . . . . *LIBL Name, *LIBL, *ALL, *ALLUSR... Device: Printer . . . . . . . . . . . *SAME Name, *SAME, *JOB, *SYSVAL Printer device type . . . . . . > *LINE *SAME, *SCS, *IPDS, *LINE... Page size: Length--lines per page . . . . *SAME .001-255.000, *SAME Width--positions per line . . *SAME .001-378.000, *SAME Measurement method . . . . . . *SAME *SAME, *ROWCOL, *UOM Lines per inch . . . . . . . . . *SAME *SAME, 6, 3, 4, 7.5, 7,5... Characters per inch . . . . . . *SAME *SAME, 10, 5, 12, 13.3, 13... Overflow line number . . . . . . *SAME 1-255, *SAME Record format level check . . . *SAME *SAME, *YES, *NO Text 'description' . . . . . . . *SAME More... F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys Change Printer File (CHGPRTF) Type choices, press Enter. 
Decimal format . . . . . . . . . *SAME *SAME, *FILE, *JOB Font character set: Character set . . . . . . . . *SAME Name, *SAME, *FONT Library . . . . . . . . . . Name, *LIBL, *CURLIB Code page . . . . . . . . . . Name Library . . . . . . . . . . Name, *LIBL, *CURLIB Point size . . . . . . . . . . 000.1-999.9, *NONE Coded font: Coded font . . . . . . . . . . *SAME Name, *SAME, *FNTCHRSET Library . . . . . . . . . . Name, *LIBL, *CURLIB Point size . . . . . . . . . . 000.1-999.9, *NONE Table Reference Characters . . . *SAME *SAME, *YES, *NO Page definition . . . . . . . . P1PAGEJH Name, *SAME, *NONE Library . . . . . . . . . . . MYLIB Name, *LIBL, *CURLIB Form definition . . . . . . . . F1FORMJH Name, *SAME, *NONE, *DEVD Library . . . . . . . . . . . MYLIB Name, *LIBL, *CURLIB More... F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys 88 IBM AS/400 Printing V 3.3.4 Considerations Creating and using page and form definitions provides powerful formatting capabilities. However, you need to consider some characteristics of this approach to formatting: • Once a page or form definition is created, the run-time support is provided by PSF/400. The compiler (PPFA) is only required on the development system. • The example in this chapter uses only a few of the capabilities of PPFA. Creating more complex applications requires greater AFP skills. • Several Business Partner products are available to design page and form definitions (see 2.2.2, “OEM products” on page 45). These products provide a graphical design interface for these resources (there is no graphical interface available on the AS/400 system). • Using LINE data with page and form definitions for formatting has the following restrictions: – You cannot copy, display, or send a spooled file produced with LINE data. – You cannot use the AFP Viewer (see 5.6.3.1, “Using the AFP Viewer” on page 132) to display the spooled file. – Spooled files formatted with page and form definitions cannot be converted to an ASCII printer data stream by the host print transform. 3.4 APU versus PPFA Table 5 shows a comparison between APU and PPFA. Table 5. APU and PPFA comparison Function APU PPFA Display the input spooled file YES NO Can draw line and box without overlay YES NO Can place overlays and page segments YES YES Display spooled file output on a terminal YES NO Support Outline Font (download) YES YES Font Character Set and Code Page support YES NO Coded font NO YES Program must be installed on each system YES NO Display SPLF with AFP viewer YES NO Print spooled file with host print transform YES NO Requires PSF/400 Only for IPDS printers YES Spooled file can be sent using SNDNETSPLF YES NO DBCS enabled NO YES © Copyright IBM Corp. 2000 89 Chapter 4. Fonts This chapter describes how fonts are specified in AFP applications and the new font enhancements for OS/400 at Version 3 Release 2 and higher. For an introduction to AS/400 font terminology, refer to Appendix D of AS/400 Printer Device Programming or Chapter 6 of AS/400 Guide to AFP and PSF, S544-5319. 4.1 Where fonts are stored Fonts are stored either in the printer (printer-resident) or on the AS/400 system (host-resident). In the latter case, they are automatically downloaded to the printer by PSF/400 when required. 4.1.1 Printer-resident fonts These fonts are usually held in the printer's non-volatile memory (NVRAM) or on a hard disk, but older printers may keep them on diskette, font cards, cartridges, or even daisy wheels. 
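As described in the next paragraphs, a printer-resident font is requested from the application side by its Font Global Identifier (FGID), most often through the FONT parameter of the printer file. A minimal sketch follows, assuming that the resident Courier (FGID 011) and Times New Roman (FGID 5687) fonts mentioned below are actually available in the target printer; INVOICE is a hypothetical printer file name.

   OVRPRTF FILE(QSYSPRT) FONT(11)            /* resident Courier, FGID 011        */
   CHGPRTF FILE(QGPL/INVOICE) FONT(5687 9)   /* scalable resident Times New Roman */
                                             /* at 9 point (hypothetical file)    */

If the selected FGID is not present in the printer, OS/400 substitutes another font, as described in 4.6, "Font substitution".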
Printer-resident fonts are selected by a Font Global Identifier (FGID). For example, one version of Courier may be 011, while a version of Times New Roman is 5687. Listing printer-resident fonts is usually done by selecting a test option from the printer menus or referring to the printer operating guide. Be sure that you are viewing the font list for the appropriate data stream (for example, IPDS fonts are not available to PC applications). Similarly, AFP applications cannot make direct use of PCL or PostScript fonts. Ensure that you select the decimal value of the FGID, not the hex value that may also be listed. A page from the IPDS font list from an IBM Network Printer 17 is shown in Figure 74 on page 90. 90 IBM AS/400 Printing V Figure 74. IPDS font listing from an IBM Network Printer 17 In terms of performance, printer-resident fonts are the preferred option. Some printers, such as the IBM Advanced Function Common Control Unit (AFCCU) printers and the IBM Network Printer range, have scalable resident fonts, which means that large (or small) characters can be printed without loss of shape or clarity. The scaling process is carried out by the printer, avoiding any host processing overhead. The downside is that different printers may give different results printing the same data due to the printers' different font capabilities. OS/400 usually invokes font substitution to ensure that something is printed, but it may not be what you were expecting! An even worse situation is when a substituted font becomes the corporate standard. A new printer may subsequently print the correct font but appear to the customer to be printing the “wrong” font. 4.1.2 Host-resident fonts Host-resident fonts are stored in AS/400 system libraries shipped with the operating system. For example, QFNTCPL is a library of 240-pel IBM Compatibility fonts. There is a range of chargeable fonts available in both 240- and 300-pel density. Providing no font substitution takes place (and this can be controlled), the fidelity of the typeface and positioning of characters is guaranteed from printer to printer. The character sets and code pages are classified Chapter 4. Fonts 91 according to their characteristics so that you can soon learn to change to a different font simply by altering a single letter in the character set name, for example: C0H20080 Helvetica Latin1-Roman Medium 8-point C0H20090 Helvetica Latin1-Roman Medium 9-point C0H30090 Helvetica Latin1-Italic Medium 9-point C0H50090 Helvetica Latin1-Italic Bold 9-point Holding font resources centrally is desirable from a change management and security point of view. However, these fonts must be downloaded at the beginning of a job. This may have implications on performance; there may be a delay before the first page of the job is printed. The font resources usually remain resident in the printer for subsequent jobs and eliminate this problem. Other considerations are that host-resident raster fonts take up large amounts of disk space. Take care to include them in the users' library list (for interactive jobs) or the job's own library list (for batch jobs). Techniques you can use to minimize these issues are described in 4.5.1, “Downloading host-resident outline fonts” on page 100, and in 4.9, “Using a resource library list” on page 107. 
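As a sketch only, restoring a font product from tape and then checking which character sets it delivered might look like the following; the product identifier and device name are examples and must match your own media and tape unit.

   RSTLICPGM LICPGM(5763FNT) DEV(TAP01)              /* restore the font product from tape */
   WRKFNTRSC FNTRSC(QFNT01/*ALL) OBJATR(FNTCHRSET)   /* list the restored character sets   */

Option 5 (Display attributes) against any of the listed character sets shows, among other things, the pel density.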
Host-resident fonts are selected by specifying a font character set and code page (FNTCHRSET and CDEPAG), for example: C0S0CR10 Courier Roman 10-point T1V10285 United Kingdom They may also be selected by using a coded font (CDEFNT), which is a specific combination of character sets and code pages, for example: X0CR07 Courier Roman 10-point (UK) X0CR07 is a reference to the same character set and code page combination previously shown. Most IBM font products are placed in libraries beginning with “QFNTxxx”, so use the following command to quickly locate them and to look for all font resources on your system: WRKLIB *ALL/QFNT* WRKOBJ OBJ(*ALL/*ALL) OBJTYPE(*FNTRSC) When you have located the font resources, you can use the Work with Font Resources (WRKFNTRSC) command. For example, the following command locates all font character set names in the QFNTCPL library: WRKFNTRSC FNTRSC(QFNTCPL/*ALL) OBJATR(FNTCHRSET) The pel density can also be determined by selecting option 5 (Display attributes). 4.2 How fonts are selected Fonts can be selected for an entire spooled file or for individual lines or fields within a spooled file. The usual place to specify the font for an entire spooled file is in the printer file. This is achieved by using the CRTPRTF, CHGPRTF, or OVRPRTF command. The relevant keywords for these commands are: • FONT Font Global Identifier (FGID). The point size (if more than one point size is available) may also be specified. 92 IBM AS/400 Printing V • FNTCHRSET Font Character Set. This can only be specified when the printer file also has DEVTYPE(*AFPDS), and is used only when printing to IPDS printers configured with AFP=*YES. A code page must also be specified. • CDEFNT Coded Font. This can only be specified when the printer file also has DEVTYPE(*AFPDS) and is used only when printing to IPDS printers configured with AFP=*YES. Varying font selection by field for line is normally done within an externally-described printer file using Data Description Specifications (DDS). The parameters previously listed are also available as DDS keywords. Unlike a standard printer file, multiple fonts may be specified together with functions such as BARCODE to print characters as a barcode and CHRSIZ to scale characters (although using an outline font and varying point size is a superior method; see 4.5.2, “Why use an outline font” on page 100). The data stream that OS/400 produces when converting and printing a spooled file is determined by the DEVTYPE parameter in the printer file. This also has an effect on font selection as shown in Table 6. Table 6. Font and data stream support Note that externally-described printer files (created using DDS) may have multiple fonts selected for different records to be printed. A printer file created with device type *SCS is restricted to the selected font for the entire spooled file. Table 6 does not include device types *USERASCII or *AFPDSLINE. The application is entirely responsible for creating the data stream in the former case. *USERASCII simply tells OS/400 to send the data to the printer “as is”. There might well be ASCII transparency commands to change fonts within the data. For AFPDSLINE, fonts are normally selected in a page definition associated with the line data. 4.2.1 Characters per inch (CPI) The CPI parameter may be specified when the FGID is not known, or where use of a particular fixed-pitch is more important than a font type style. This is usually used for SCS printers. 
For IPDS printers (and most SCS printers), the selected font has an implied CPI value. For monospaced fonts, you can rely on a fixed pitch for your font. For proportionally-spaced and typographic fonts, the characters per inch varies DEVTYPE parameter Font ID Font character set Coded font *SCS Yes No No *IPDS Yes No No *AFPDS Yes Yes1 Yes1 *LINE Yes Yes1 2 Yes1 2 1. Printer must be configured as *IPDS, AFP=*YES. 2. When used in a page definition applied to the line data. Chapter 4. Fonts 93 depending on the characters in your data. However, if FONT(*CPI) is specified, a particular monospaced font is used as shown in Table 7. If these default fonts are not available, a printer-resident font is substituted (see 4.6, “Font substitution” on page 101). Table 7. CPI to font relationship System-supplied printer files use FONT(*CPI). This ensures that the appearance of the output is similar regardless of the printer that is used. When you specify PAGRTT(*COR) on the printer file, the following font substitution occurs: • 12-pitch fonts are replaced with 15-pitch fonts (FGID 222). • 15-pitch fonts are replaced with 20-pitch fonts (FGID 281). • All other fonts are replaced with a 13.3 pitch font (FGID 204) with the exception of the 4028 printer, which uses a 15-pitch font (FGID 222). Vertical spacing (specified by the LPI parameter) is 70 percent of the normal spacing. 4.3 Which fonts are available Be aware that there are no 300-pel fonts supplied as a default with OS/400. Therefore, it is usually necessary to buy some font libraries unless you rely on using printer-resident fonts only. Purchased font libraries are usually restored to a QFNTxx library name using the Restore Licensed Program (RSTLICPGM) command. 4.3.1 Fonts supplied at no charge There are two types of fonts that are supplied at no charge. These fonts are: • AFP compatibility fonts: The QFNTCPL library is installed with OS/400. Therefore, the product number is the same as the operating system. The library contains the 240-pel compatibility fonts, for example Courier, Gothic, Orator, Prestige, and Proprinter Emulation. These fonts were available on older IBM printing devices, therefore the term “compatibility”, and are required for Facsimile Support/400 (Fax/400). Fax/400 emulates an IPDS printer and CPI Default font ID Name 5 245 Courier Bold Double Wide 10 011 Courier 12 087 Letter Gothic 13.3 * 204 Matrix Gothic 15 222 Gothic 16.7 400 Gothic 18 1 252 Courier 20 1 281 Gothic Text * These values are valid only for DBCS printers 94 IBM AS/400 Printing V uses only 240-pel fonts. They are mostly fixed-pitch fonts measured in characters per inch (5, 8.55, 10, 12, 13.3, 15, 17.1, 18, 20, 27 cpi), but a few are mixed-pitch characters that are approximately 12 cpi. • 300-pel euro symbol support: Support for the euro currency symbol was added at V4R3 for both 240-pel and 300-pel printer resolutions. 4.3.2 240-pel fonts available at a charge The following fonts are available, but for a charge: • 5763-FNT Advanced Function Printing Fonts/400: This product has been largely superseded by the IBM AFP Font Collection (see Table 8) but may still be a requirement if you specifically need one or more of the Sonoran font families. For new customers, purchasing the AFP Font Collection is a much better value and a preferred strategy for the reasons explained in 4.5.2, “Why use an outline font” on page 100. Table 8. 
5763-FNT Advanced Function Printing Fonts/400 You may also notice QFNT00, which does not contain any fonts, but contains product control information such as the message file, copyright notices, and modification level. • 5763-FN1 Advanced Function Printing DBCS Fonts/400: These fonts are downloadable double-byte character set raster fonts. See Table 9 for an overview of these fonts. Library Feature code Family QFNT01 5051 Sonoran Serif QFNT02 5052 Sonoran Serif Headliner QFNT03 5053 Sonoran Sans Serif QFNT04 5054 Sonoran Sans Serif Headliner QFNT05 5055 Sonoran Sans Serif Condensed QFNT06 5056 Sonoran Sans Serif Expanded QFNT07 5057 Monotype Garamond QFNT08 5058 Century Schoolbook QFNT09 5059 Pi and Specials QFNT10 5060 ITC Souvenir QFNT11 5061 ITC Avant Garde Gothic QFNT12 5062 Math and Science QFNT13 5063 DATA1 QFNT14 5064 APL2 QFNT15 5065 OCR A and OCR B Chapter 4. Fonts 95 Table 9. 5730-FNI Advanced Function Printing DBCS fonts/400 • 5648-B45 AFP Font collection version 2: This is the latest version of the IBM AFP Font Collection. This includes a comprehensive set of 240-pel and 300-pel fonts, and outline font character sets and coded fonts. Support for the euro currency symbol and new languages (Thai and Lao) is also included. 4.3.3 300-pel fonts available at a charge IBM AFP Font Collection (5648-B45) is the standard font set for the AS/400 system. This is also the consolidated AFP font product, available across all of the major IBM system platforms. IBM AFP Font Collection consists of the Expanded Core fonts and the Compatibility fonts. The Expanded Core Fonts include such standard font families as Helvetica, Times New Roman, and Courier. These fonts are provided in the 240-pel, 300-pel, and outline format. The fonts come in over 48 language groups. An optional feature of IBM AFP Font Collection is Type Transformer and Utilities for Windows. This is a comprehensive “workbench” for creating and customizing fonts. The core utility provides for the conversion of any Adobe Type 1 font to an AFP font. Since TrueType fonts can be easily converted to Adobe Type 1 format, virtually any PC-based font can be converted to an AFP font. Additional utilities provide for editing individual characters within a font as well as customizing font code pages and coded fonts. Note: Type Transformer was only available initially under OS/2. In June 2000, the Windows version became available. This version is implemented with an extensive graphical interface and interactive management of font upload operations. More information on Type Transformer and Utilities for Windows can be found in 4.11, “Creating AFP fonts with Type Transformer” on page 110. In addition to Type Transformer and Utilities for Windows, a number of DBCS font sets are also available as optional features of IBM AFP Font Collection. Library Feature code Family QFNT61 5071 Japanese QFNT62 5072 Korean QFNT63 5073 Traditional Chinese QFNT64 5074 Simplified Chinese QFNT65 5075 Thai • At OS/400 V4R5, the IBM AFP Font Collection CD is shipped with new orders of Print Services Facility/400 (PSF/400). • There are several older font families, such as Sonoran, that are not part of the IBM AFP Font Collection. These are available in the 300-pel format via Programming Request for Price Quotation (PRPQ). Notes 96 IBM AS/400 Printing V 4.4 How fonts are installed Font libraries supplied on tape media are simply restored to the system by using the RSTLICPGM command or option 11 from the LICPGM menu (GO LICPGM). 
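For example, assuming the font libraries arrive on a tape mounted in device TAP01 (the device name and product number here are illustrative), the restore could look like this:
RSTLICPGM LICPGM(5763FN1) DEV(TAP01)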
The IBM AFP Font Collection may be ordered on tape or CD-ROM media. • Tape media: There are over forty libraries on tape media, but it is unlikely that all will be required. Choose the appropriate libraries for your country or language. For many customers, the most preferred ones are: QFNTCDEPAG Expanded code pages QFNTCFOLA1 Latin 1 outline character sets. These are the equivalent of the IBM Core Interchange fonts, but in outline format. QFNTCPL 240-pel Compatibility fonts. This should already be on the system. It includes some code pages. QFNT300CPL 300-pel versions of the Compatibility fonts. QFNT240LA1 Latin 1 character sets, 240 pel. These may be regarded as the equivalent of the IBM Core Interchange Font set. QFNT300LA1 Latin 1 character sets, 300 pel. These may be regarded as the equivalent of the IBM Core Interchange Font set. Raster fonts such as these take up a fairly significant amount of disk space, so only add the ones you need. • CD-ROM media: If you order the AFP Font Collection on CD-ROM media (as is the case if you order the optional tools such as Type Transformer), you need to transfer the fonts to the AS/400 system in one of two ways depending on whether your system has a CD-ROM drive installed (RISC systems only). Be sure to order the AS/400 version labelled “Fonts for OS/400”. To load the fonts from the system CD-ROM drive (usually named OPT01 or similar), follow these steps: 1. Mount the CD-ROM in the drive, and make the drive ready. 2. Identify which font library you want to restore using the booklet and the Program Directory listing. For example, the 300-pel Latin 1 font character set is in CD-ROM library LA1300, and the suggested host library name is QFNT300LA1. 3. Restore your selected library using the following command (or a similar one): RSTLIB SAVLIB(LA1300) DEV(OPT01) OPTFILE('/LA1300') RSTLIB(QFNT300LA1) Note: If you have any trouble locating file names on the CD (for example, because of missing documentation), use the GO OPTICAL menu to locate them. The Work with Optical Directories (WRKOPTDIR) is the most useful because you can determine the volume ID of the CD-ROM as well as directories and file names from this one command. If you do not have a system CD-ROM drive, you must manually transfer the fonts as follows: 1. Refer to the booklet and the Program Directory listing to locate the CD-ROM directory containing the required fonts. Chapter 4. Fonts 97 2. Upload these fonts using PC Support/400 or Client Access/400 to a suitable shared folder on the AS/400 system (for example, one called “FONTS”). 3. Create a physical data file on the AS/400 system: CRTPF FILE(MYLIB/FONTFILE) RCDLEN(8192) TEXT('Physical file for temporarily receiving fonts') LVLCHK(*NO) 4. Move each required font to the physical file: CPYFRMPCD FROMFLR(FONTS) TOFILE(MYLIB/FONTFILE) FROMDOC(C0H20000) TRNTBL(*NONE) TRNFMT(*NOTEXT) 5. Create each individual font resource using the Create Font Resource (CRTFNTRSC) command as follows: CRTFNTRSC FNTRSC(QFNT300LA1/C0H40000) FILE(MYLIB/FONTFILE) TEXT('Helvetica 10-point Bold') The last two steps can be automated with CL coding. Otherwise, they must be repeated for each and every font resource. 4.4.1 Making the fonts available Printer writer jobs need to find the requested fonts. This applies to interactive and batch jobs and for spooled files sent from other systems. 
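A quick way to see which libraries actually contain a particular character set before deciding how to make it available is to search for the object by name (the character set shown is only an example):
WRKOBJ OBJ(*ALL/C0H20080) OBJTYPE(*FNTRSC)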
The preferred way to make the requested fonts available is to specify the required libraries in the PSF configuration object assigned to the printer device description (see 4.9, “Using a resource library list” on page 107). An alternative method is to add the required libraries to the system library list or the user library list. These are held as system values. You can view or change them using the following commands:
WRKSYSVAL QSYSLIBL
WRKSYSVAL QUSRLIBL
Another alternative is to store the font resources in any of the special font libraries (QFNT01 to QFNT19), provided that these libraries do not already exist on your system. They are normally reserved for other chargeable font character sets, that is, AS/400 font licensed program products (see 4.3.2, “240-pel fonts available at a charge” on page 94). Since 300-pel and 600-pel printers have become the standard, there is much less likelihood that these older 240-pel fonts are needed. These particular libraries are appended to the system portion of the user's library list when a print job is submitted interactively. They are a useful means of ensuring that the required fonts are always available. They do not show up in a display of the user's library list.
For batch jobs, ensure that the font libraries are in the job's library list. Some print enabling applications, such as AFP Utilities (5769-AF1) or Advanced Print Utility (APU, 5798-AF3), need access to those font libraries when a new overlay or APU print definition is being developed. You could add the font libraries before calling the utility menu using the EDTLIBL or ADDLIBLE commands.
Note: There is a no-charge utility available to assist in loading your IBM AFP Font Collection (5648-B45) fonts into the special (QFNT01 to QFNT19) libraries. It can be found by clicking Downloads at the AS/400 printing Web site at: http://www.ibm.com/printers/as400 You need to download the file from the Web to your PC and then use FTP to upload it to your AS/400 system. This package includes two AS/400 commands to help you load and print sample fonts from the IBM AFP Font Collection (5648-113 or 5648-B45) software:
• LOADFNTC: Load Font Collection
• PRTFNTC: Print Font Collection Samples
Both commands provide help text describing each command's parameters. All font objects are restored into 10 libraries on your system as shown in Table 10.
Table 10. Restored font objects
Failure to add font libraries before submitting a print job is a common problem. The symptoms are usually a PQTxxx error message in the QSYSOPR message log and a message similar to the following example:
Character set C0A05580 could not be found or has a pel density (resolution) incompatible with the device.
In this case, the message indicates either that the character set is not present on the system, or that an object of that name exists but at the wrong pel density for the device. To correct the problem, add the library in which the object is located to the library list, or change the printer file/DDS to explicitly reference the object/library combination. The latter may be preferable for performance reasons: the operating system does not have to search through many objects in many libraries to locate the required resource.
Note: The previous example illustrates that if you have both 240- and 300-pel versions of a character set, they have the same object name and must, therefore, be stored in separate libraries. See Figure 75 for an example.
Description AS/400 library AFP Font Collection Code Pages QFNT01 AFP Font Collection Compatibility Coded Fonts QFNT02 AFP Font Collection 240-pel Compatibility Fonts QFNT02 AFP Font Collection 300-pel Compatibility Fonts QFNT03 AFP Font Collection 240-pel Fonts QFNT04 AFP Font Collection 300-pel Fonts QFNT05 AFP Font Collection Outline Fonts QFNT06 AFP Font Collection Coded Fonts QFNT07 AFP Font Collection Coded Fonts (4 chars) QFNT08 AFP Font Collection Outline Coded Fonts QFNT09 AFP Font Collection Outline Codes Fonts (4 chars) QFNT10 Chapter 4. Fonts 99 Figure 75. Fonts with the same object name, different libraries One of the tasks the PSF/400 printer writer job performs is to determine the printer resolution. Therefore, in the preceding example, only the second C0N20000 font resource object is selected for printing to a 300-pel printer even if the QFNT240LA1 library was higher in the library list. 4.5 Outline fonts Traditionally, fonts are stored as raster fonts. Each font is stored as a bit pattern (bitmap) for each and every character in a character set and once for every size and weight/posture (medium, bold, italic, bold italic). The bitmapped font is resolution-specific, so these large storage requirements are repeated for fonts of different pel densities. For host-resident raster fonts, there is a delay in printing while the raster sets are downloaded to the printer. An outline font is an alternative means of storing a font. Each character is stored only once for each weight or posture. The stored outline font is defined using vector mathematics to describe its shape. This means the font may be drawn by the printer at a wide range of point sizes (1 to 999 points). Outline fonts may also be referred to as scalable or vector fonts (Figure 76). Figure 76. Representation of a raster font and an outline font Previously, AFP outline fonts were only found as printer-resident fonts. Now products, such as the AFP Font Collection, contain host-resident AFP outline Work with Objects Type options, press Enter. 2=Edit authority 3=Copy 4=Delete 5=Display authority 7=Rename 8=Display description 13=Change description Opt Object Type Library Attribute Text C0N20000 *FNTRSC QFNT240LA1 FNTCHRSET Latin1-Times New Roman-Roman C0N20000 *FNTRSC QFNT300LA1 FNTCHRSET TIMES NEW ROMAN LATIN 1-ROMAN Parameters for options 5, 7 and 13 or command ===> F3=Exit F4=Prompt F5=Refresh F9=Retrieve F11=Display names and types F12=Cancel F16=Repeat position to F17=Position to F24=More keys 100 IBM AS/400 Printing V fonts and tools, such as the OS/2 Type Transformer can produce AFP outlines from Adobe Type 1 PC fonts. 4.5.1 Downloading host-resident outline fonts Version 3.0 Release 2.0 and Version 3.0 Release 7.0 introduced limited support (through program temporary fixes (PTFs)) for downloading AFP outline fonts by PSF/400. This was restricted to certain coded fonts in the IBM AFP Font Collection product. At Version 4.0 Release 2.0, scaling information for downloaded outline fonts is added in the printer file and in DDS. This is similar to the current point size parameter on the FONT keyword, except that the range is from 0.1 to 999 points. A printer that supports outline font download is required, such as an IBM AFCCU printer. Although printers, such as IBM Network Printers, use resident outline fonts, they cannot receive downloaded outline fonts. 4.5.2 Why use an outline font Outline fonts are extremely efficient in performance terms. 
One outline font can replace a range of raster font point sizes, thereby reducing font download time, the number of raster fonts that must be kept at the host, and the font storage requirements at the printer. They are also easy to specify (for example, an 18-point host-resident font) to be downloaded. Consider this example: OVRPRTF FILE(QSYSPRT) DEVTYPE(*AFPDS) FNTCHRSET(MYLIB/CZH200 T1V10285 18.0) You can also specify the same printer-resident font to be invoked at the printer: OVRPRTF FILE(QSYSPRT) FONT(2304 18.0) Note: You do not need to specify the data-stream device type to use a printer-resident font. Outline fonts are resolution-independent. Therefore, as printers become capable of printing at higher resolutions, the application investment in using outline fonts is maintained. Because it is the device that rasterizes the outline font sent to it, you can use the same AFP outline font sent to a printer (at 240, 300, or 600 dpi) or sent to a display at various resolutions. The fonts may be placed in a single library because they no longer have a resolution attribute. Migrating existing raster fonts may be achieved either by obtaining the equivalent IBM AFP outline font or purchasing an equivalent Type 1 scalable font (PC-based) from a font vendor and converting it to an AFP outline font using IBM Type Transformer. If the application uses older font families such as Sonoran Serif or Sonoran Sans Serif, these are similar to Times New Roman and Helvetica, but they are not identical. The reason for this is that the Sonoran fonts and other fonts were hand-tuned for best quality on 240-pel printers and cannot be converted to outline font technology. This is why IBM recommends the adoption of the strategic Expanded Core fonts (Times New Roman, Helvetica, and others). These fonts are available as host raster and outline fonts, and commonly as printer-resident outline fonts. This is particularly the case for new applications. Note that Helvetica is an equivalent of Arial, widely used in PC applications. A practical example for using outline fonts is to print large characters (for example, at 720-point (approximately 10 inches high)) in retail store applications Chapter 4. Fonts 101 or on packing carton delivery slips. Prior to using outline fonts, the two principle means of printing large characters were to use graphic symbols sets or scale printer-resident fonts using the CHRSIZ DDS keyword. For a discussion of the pros and cons of using these methods, see A.4.1, “Using GDDM fonts” on page 283. For details of their use, please refer to AS/400 Printing III, GG24-4028. Note that CHRSIZ is not supported on newer printers, such as the IBM AFCCU printers, because of the trend towards outline fonts. 4.5.3 Scalable fonts for MULTIUP and COR When the applicable PTF is applied and the QPRTVALS data area is set up as described in 10.5.4, “Using the QPRTVALS data area” on page 217, the AS/400 system will always select a scalable font for printing with MULTIUP or COR (multiple-up and Computer Output Reduction) or when the spooled file attributes specify FONT(*CPI). This only applies for IBM AFCCU printers. If the font identifier in the printer device description is between 300 and 511 (inclusive), this font is selected and scaled to an appropriate point size. If the font in the device description is not between 300 and 511, the AS/400 system uses font 304. 
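To check which font identifier a printer device description currently specifies, you can display the description (DSPDEVD is a standard command; the device name is an example):
DSPDEVD DEVD(PRTNP17)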
Font 304 is a scalable Gothic font that is supported by these printers for almost all single-byte character set (SBCS) code pages except Arabic, Cyrillic, Greek, Hebrew, Latin 2/3/4/5, and Symbols. Another recommended font is 416, a scalable Courier Roman font that is supported for almost all SBCS code pages except Japanese Katakana. To activate this function, ensure the printer writer is ended and then type:
CHGDTAARA DTAARA(QUSRSYS/QPRTVALS (6 1)) VALUE('Y')
This places the character Y in the sixth byte of the QPRTVALS data area. To change the default to something other than 304, enter:
CHGDEVPRT DEVD(printername) FONT(416 12.0)
See Table 11 for information on PTF support for scalable fonts with MULTIUP and COR.
Table 11. PTF support for scalable fonts with MULTIUP and COR
Version/release   PTF
V3R1              SF43120
V3R2              SF43431
V3R6              SF42712
V3R7              SF44664
V4R1 and above    Base operating system
4.6 Font substitution
PSF/400 uses font substitution tables to perform font substitution. Font substitution may also be referred to as font mapping and takes one of the following forms:
• Font ID to Font ID: This occurs when the requested font is not available on the printer but a similar one is available. This is printer-resident to printer-resident font substitution, which is the most common type of substitution.
• Font ID to Font Character Set: This occurs when the target printer has no resident fonts, or when resident fonts are disabled by the Create/Change Print Services Facility Configuration (CRTPSFCFG/CHGPSFCFG) command or an equivalent WRKAFP2 command. See 4.8, “Disabling resident font support” on page 106, for details of disabling printer-resident fonts. One reason for there being no printer-resident fonts might be when the device is actually a process emulating a printer (for example, Facsimile Support/400 or the Distributed Print Facility (DPF) of Print Services Facility/2 (PSF/2)). Some older IBM printers also did not have resident fonts (for example, the 3900-1, 3825, and 3835).
• Font Character Set to Font ID: This substitution of a host-resident font for a requested printer-resident font occurs only when one of the following situations is true:
– The host font character set was not found and the printer supports resident fonts.
– The printer does not accept downloaded fonts (most impact printers).
Reasons for not finding the host font character set include: not being authorized to use that font, the font not being in the user's library list, or the font existing at a different resolution than that of the printer.
Note: A code page may be substituted in the same way as a font character set with the exception that a code page is resolution-independent. Therefore, resolution differences do not give rise to a code page substitution.
The particular fonts substituted in the previous cases are documented in Appendix D of AS/400 Printer Device Programming. OfficeVision/400 has its own table of substituted fonts, which is documented in Setting Up Printing in an OfficeVision/400 Environment, SH21-0511.
4.6.1 Suppressing font substitution messages
Normally font substitution is logged in the job log, and a message, such as the following example, is sent to the message queue defined in the printer device description (usually QSYSOPR):
PQT2072 Font substitution was performed
At Version 4.0 Release 2.0, these messages may be suppressed, if desired, using the FNTSUBMSG keyword on the CRTPSFCFG or CHGPSFCFG command. The default is *YES to continue generating these messages as at present.
Otherwise, you can block the messages as follows: CHGPSFCFG PSFCFG(NP17) FNTSUBMSG(*NO) Messages indicating that font substitution failed are not blocked. Chapter 4. Fonts 103 4.7 Font table customization Until recently, customers had to accept the system's internal font mapping. In addition, the use of applications, such as OfficeVision/400 (where only font IDs are specified), restricted the customer's choice of fonts. Version 3.0 Release 7.0 introduced the ability to create your own font mapping tables. These are searched before the existing system tables. This facility applies only to AFP printers, and PSF/400 is required. Tables may be created to control any or all of the following examples: • Host-resident font character set to printer-resident font ID mapping • Printer-resident font ID to host-resident font character set mapping • Host-resident to printer-resident code page mapping • Printer-resident to host-resident code page mapping Since there are several commands associated with font table customization, type the following command: GO CMDFNTTBL The display appears as shown in Figure 77. Figure 77. Work with Font Tables menu on a V3R7 system 4.7.1 Creating the font tables It is first necessary to create one or more font tables, and then add, alter, or delete entries from them. Only one of each of the four font substitution cases previously described may be created using the Create Font Table (CRTFNTTBL) command. They are assigned a system-supplied name as follows: *PHFCS Printer to Host-resident Font Character Set This creates a table named QPHFCS in the QUSRSYS library, object type *FNTTBL. CMDFNTTBL Work with Font Tables Select one of the following: Commands 1. Add Font Table Entry ADDFNTTBLE 2. Change Font Table Entry CHGFNTTBLE 3. Create Font Table CRTFNTTBL 4. Delete Font Table DLTFNTTBL 5. Display Font Table DSPFNTTBL 6. Remove Font Table Entry RMVFNTTBLE Related Command Menus 7. AFP Commands CMDAFP 8. Font Resource Commands CMDFNTRSC 9. PSF Configuration Commands CMDPSFCFG Bottom Selection or command ===> F3=Exit F4=Prompt F9=Retrieve F12=Cancel F16=Major menu (C) COPYRIGHT IBM CORP. 1980, 1996. 104 IBM AS/400 Printing V *PHCP Printer to Host-resident Code Page This creates a table named QPHCP in the QUSRSYS library, object type *FNTTBL. *HPFCS Host to Printer-resident Font Character Set This creates a table named QHPFCS in the QUSRSYS library, object type *FNTTBL. *HPCP Host to Printer-resident Code Page This creates a table named QHPCP in the QUSRSYS library, object type *FNTTBL. 4.7.2 Adding a font table entry As an example, if you want to use a host-resident font with OfficeVision/400, you must either use a printer that does not support resident fonts (these tend to be larger system printers such as the 3820 and 3835) or switch off printer-resident font support using the CHGPSFCFG command (WRKAFP2 command at Version 3.0 Release 1.0 and Version 3.0 Release 6.0). Your specified font ID is then substituted to a host-resident font according to the font tables documented in Section D.5 of AS/400 Printer Device Programming. This may not be an exact substitution (the table identifies these exceptions), or you may want to use a custom-supplied host font. To do this, you need to add an entry to the QPHFCS font table. Suppose you are using FGID 75 (Courier 12 cpi) in your OfficeVision/400 documents. This is normally substituted to C0S0CR12, which is not an exact match. 
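If the QPHFCS table does not already exist on your system, create it first with the Create Font Table command (a minimal sketch; prompt the command with F4 to confirm the parameter name on your release):
CRTFNTTBL FNTTBL(*PHFCS)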
If you have the Core Interchange Fonts installed on your system, you can substitute C04200B0 instead as shown in Figure 78. Figure 78. Adding a different printer-resident to host-resident font substitution Note: The WIDTH keyword in the previous command refers to the characters per inch value (12 in our example) divided into 1440. These values for the common Add Font Table Entry (ADDFNTTBLE) Type choices, press Enter. Font table . . . . . . . . . . . > *PHFCS *PHFCS, *HPFCS, *PHCP, *HPCP Printer to host font: Printer font: Identifier . . . . . . . . . . > 75 1-65535 Width . . . . . . . . . . . . > 120 1-32767, *NONE, *PTSIZE Attributes . . . . . . . . . . *NONE *NONE, *BOLD, *ITALIC... Graphic character set . . . . *SYSVAL Number, *SYSVAL Point size . . . . . . . . . . *WIDTH 1.0-999.9, *WIDTH, *NONE Host font: Font character set . . . . . . > C04200B0 Name Type . . . . . . . . . . . . . *RASTER *RASTER, *OUTLINE Bottom F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys Chapter 4. Fonts 105 cpi sizes (10, 12, 15, etc.) may be listed on the printer IPDS font listing (see the example in Figure 74 on page 90). You must also ensure that any writers to printers configured as *IPDS, AFP=*YES are ended before attempting to change the font tables. Otherwise, you receive messages similar to: Cannot allocate object QPHFCS in library QUSRSYS Font table QPHFCS in library QUSRSYS not changed If this occurs, use the following command to locate which writers are still active: WRKOBJLCK OBJ(QUSRSYS/QPHFCS) OBJTYPE(*FNTTBL) You can then determine whether to end the writers immediately, or defer the font table changes to a later time. Successful addition of the font table is reported by: Font table entry added to font table QPHFCS This may be checked using the DSPFNTTBL command, which causes the following display, as shown in Figure 79, to appear. Figure 79. Displaying entries in a customized font table A final point to note in the case of OfficeVision/400 is that you are probably still restricted to monospaced (fixed-pitch) host-resident fonts because the alignment of tabs and columns is incorrect if typographic fonts (variable-spaced) are used. 4.7.3 Other font table commands The remaining commands are self-explanatory: • Change Font Table Entry • Delete Font Table • Remove Font Table Entry Display Font Table Font Table . : QPHFCS Text . . . . : Printer to host-resident mapping table Printer Graphic Host Font Character Point Font Identifier Width Attribute Set size Character Type 75 120 *NONE 1269 *WIDTH C04200B0 *RASTER Bottom Press Enter to continue. F3=Exit F12=Cancel 106 IBM AS/400 Printing V 4.7.4 Customer-defined font ranges If your operating system is prior to Version 3.0 Release 7.0, there is an alternative method to customizing font tables. This uses a new function added to Version 3.0 Release 1.0 and upwards through PTFs for customer-defined printer-resident fonts. This works as explained here: 1. Disable the printer's resident font support as described in 4.8, “Disabling resident font support” on page 106. 2. Identify up to five host-resident font character sets that you want to use. 3. Rename these to C0USERF1 through C0USERF5 using the Rename Object (RNMOBJ) command. 4. Specify any or all of the font IDs, 65501 to 65505, in your application. These are mapped to the character sets in the range C0USERF1 to C0USERF5. One use of this feature might be in OfficeVision/400 where you have a host-resident font that is actually a barcode set or a signature. 
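For example, assuming a signature character set named SIGCHR in library MYLIB (both names are hypothetical), step 3 could be carried out as follows:
RNMOBJ OBJ(MYLIB/SIGCHR) OBJTYPE(*FNTRSC) NEWOBJ(C0USERF1)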
You can use the preceding procedure to refer to it by a font ID. This support is enabled by the PTFs shown in Table 12.
Table 12. PTF details for customer-defined font ranges
Version/Release   APAR      PTF       Cum-pak
V3R1              SA54431   SF31920   6198
V3R2              SA54431   SF32128   6233
V3R6              SF55079   SF39367   -
V3R7 and later    Base operating system
4.8 Disabling resident font support
You must disable resident font support on the printer. Otherwise, normal font substitution will occur (printer-resident font to printer-resident font, as described in 4.6, “Font substitution” on page 101). To do this, follow these steps:
1. Ensure the printer writer is ended:
ENDWTR PRTNP17 *IMMED
2. Use the WRKAFP2 command (V3R1 and V3R6) to display or print the current status of the data area created by WRKAFP2:
WRKAFP2 DEVD(PRTNP17) PRINTONLY(*YES)
Re-running the command resets all parameters to their default value unless you explicitly re-define them again.
3. Issue the WRKAFP2 command with the Disable Resident Fonts keyword enabled plus any special settings you may have already:
WRKAFP2 DEVD(PRTNP17) DRF(*YES)...
4. For V3R2, V3R7, and later, use the CRTPSFCFG or CHGPSFCFG commands instead of WRKAFP2. These may be changed without affecting other settings:
CHGPSFCFG PSFCFG(PRTNP17) RESFONT(*NO)
Note that the earlier WRKAFP2 command uses the prompt “disable resident fonts?”, while the CHGPSFCFG command asks “resident font support?”. Therefore, the yes or no response has the opposite meaning on the two commands.
4.9 Using a resource library list
A PSF configuration object allows you to specify which particular libraries are searched for AFP resources (including fonts). This might be for reasons of:
• Security: The libraries that are searched can be restricted.
• Performance: Searching fewer libraries is faster.
• Device resolution issues: AFP resources created at different pel densities can be placed in appropriate libraries. For example, there is no point in searching 240-pel font libraries when using a 300-pel printer.
The PSFCFG object may define a user resource library, a device resource library, or both. The former is searched first, but may have *NONE specified, which means that only the device resource library is searched. The relevant keywords are User Resource Library (USRRSCLIBL) and Device Resource Library (DEVRSCLIBL).
The example in Figure 80 shows how the command might be used with a printer that only supports 300-pel fonts from the AS/400 system. The user resource library is set to *JOBLIBL, meaning the job's current library list is searched for any AFP resources referenced. The device resource library list names three libraries, the first two containing 300-pel fonts, and the last library possibly containing AFP resources in 300-pel format and unique to that printer.
Figure 80. Part of a CHGPSFCFG command
Change PSF Configuration (CHGPSFCFG) Type choices, press Enter. PSF configuration . . . . . . . PSFCFG > NP17 Library . . . . . . . . . . . > QGPL User resource library list . . . USRRSCLIBL *JOBLIBL Device resource library list . . DEVRSCLIBL > QFNT300CPL > QFNT300LA1 + for more values > AFPRSCLIB IPDS pass through . . . . . . . IPDSPASTHR *NO Activate release timer . . . . . ACTRLSTMR *NORDYF Release timer . . . . . . . . . RLSTMR *NOMAX Restart timer . . . . . . . . . RESTRTMR *IMMED SNA retry count . . . . . . . . RETRY 2 Delay time between SNA retries RETRYDLY 0 Text 'description' . . . . . . . TEXT > 'PSFCFG object for IBM Networ Printer 17' More..
F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys 108 IBM AS/400 Printing V If all the resources used for the print jobs are contained in a few libraries, consider setting USRRSCLIBL to *NONE so only the device resource library is searched. The PSF configuration object could be specified in one or multiple printer device descriptions, as shown in Figure 81. Figure 81. Change Device Desc (Printer) (CHGDEVPRT) 4.10 Font capturing PSF/400 can download fonts to certain IPDS printers when they are configured as *IPDS, AFP=*YES in their device description. Since Version 3.0 Release 1.0, these fonts are stored across job boundaries on the basis that the next job is likely to use them. This is known as font caching. Once the PSF/400 writer is ended, all AFP resources in the printer (including fonts) are deleted. With Version 4.0 Release 2.0, a printer can hold these fonts after the writer is ended, if so desired. This also applies if the printer is subsequently powered off. Printing performance is improved because the fonts no longer need to be downloaded. This is especially beneficial to users of double-byte fonts because these fonts are large in size. This process is known as font capturing. Two steps are necessary to implement font capturing: 1. Mark the desired font resources as eligible for capture. 2. Define the printer to be capable of font capturing. 4.10.1 Font resources eligible for capture Not all fonts contain the necessary information in the internal structured fields to permit them to be uniquely identified. If fonts have this information, they are said to be marked. Examples of tools that can mark fonts are APSRMARK (contained within PSF/MVS) and Type Transformer (available as an option within the IBM Font Collection). Version 4.0 Release 2.0 of OS/400 also has this function. Change Device Desc (Printer) (CHGDEVPRT) Type choices, press Enter. User-defined object: Object . . . . . . . . . . . . > NP17 Name, *SAME, *NONE Library . . . . . . . . . . > QGPL Name, *LIBL, *CURLIB Object type . . . . . . . . . > *PSFCFG *DTAARA, *DTAQ, *FILE... Data transform program . . . . . *NONE Name, *SAME, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB User-defined driver program . . *NONE Name, *SAME, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Text 'description' . . . . . . . '9.28.252.110' Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 4. Fonts 109 However, if the font does not have the required structured fields present, these tools have no effect. Details of the fonts that may be captured are: • Outline Fonts (single and double-byte): These include AFP outline fonts shipped with the IBM AFP Font Collection and are already marked. If Type Transformer is used to create new outline fonts, the option to mark them is user-selectable. • Raster Fonts (single byte) Some of the newer fonts in the IBM Font Collection are marked. Earlier fonts may not be marked if they do not contain the necessary information as described above. If the user attempts to mark these fonts, a warning message is issued. • Raster Fonts (double-byte): These fonts contain the necessary information to enable them to be marked. Note: A raster font is actually built from two font resources: the font character set and the code page. Therefore, both of these resources must be marked if they are to be eligible for capture. 
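For example, marking both halves of a raster font might look like the following sketch (the CHGFNTRSC command is described in the next section; the resource names and library are illustrative):
CHGFNTRSC FNTRSC(QFNTCPL/C0S0CR10) FNTCAPTURE(*YES)
CHGFNTRSC FNTRSC(QFNTCPL/T1V10285) FNTCAPTURE(*YES)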
4.10.2 Marking a font resource An OS/400 font resource may be a font character set, code page, or coded font. These are OS/400 objects with the attribute of FNTCHRSET, CDEPAG, or CDEFNT. Note that a coded font cannot be marked for capture. Use the WRKFNTRSC command to quickly locate font resources on your system and DSPFNTRSCA to identify whether they have been marked (FNTCAPTURE *YES). Displaying the font attributes also tells you the pel density of the font character set as shown in Figure 82. Figure 82. Displaying attributes of a font resource on a V4R2 system Remember that FNTCAPTURE *YES means that the font is eligible for capture, not that it has been captured by the printer. When creating OS/400 font resources from resources sent from other systems, use the CRTFNTRSC command. This command and the new CHGFNTRSC command now allow a user to mark the font as eligible for capture. This is done by entering the following command: Display Font Resource Attributes System: ALICEH02 Font Resource . . . . . . . . . . . : C0H200A0 Library . . . . . . . . . . . . . : QFNT300LA1 Object attribute . . . . . . . . . . : FNTCHRSET Pel Density . . . . . . . . . . . . : 300 Font Capture . . . . . . . . . . . . : *YES Date . . . . . . . . . . . . . . . . : 12/16/94 Time . . . . . . . . . . . . . . . . : 00:00:00.00 Text . . . . . . . . . . . . . . . . : HELVETICA LATIN1-ROMAN MED 11-PT Press Enter to continue. F3=Exit F12=Cancel 110 IBM AS/400 Printing V CHGFNTRSC FNTRSC(QFNTCPL/C0D0GT13) FNTCAPTURE(*YES) This causes the current date and time stamp to be added to the font, which is what PSF/400 uses to track whether the font in the printer is truly the same as the one being referenced in the spooled file (just having the same object and library name is not enough). The default for FNTCAPTURE is *NO. The CRTFNTRSC command has an additional keyword *FILE. This tells PSF/400 to use the font capture information stored within the font. If no information is found, then *NO is assumed. This allows users to mark fonts on other systems (for example, using APSRMARK) and then send them for use on the AS/400 system. 4.10.3 Defining the printer for font capture In addition to defining the font resources, the user must define the printer as being capable of font capturing. This is done by modifying the printer's PSF Configuration Object. The keyword is FNTCAPTURE and the options are *YES or *NO. This permits the user to selectively define which printers will support font capturing. At the time this redbook was written, only the IBM AFCCU printers are capable of using the font capture facility. 4.10.4 Considerations for font capture Note: You must be authorized to use a font resource, regardless of whether it has been captured in the printer. This is because some fonts might be security sensitive (for example, a Magnetic Ink Character Recognition (MICR) font used for printing checks or a font representing someone's signature). Therefore, exercise caution when marking such fonts, because many printers today can be accessed from more than one system. Captured fonts remain in the printer indefinitely unless overwritten by later font capture instructions. The host printer writer cannot alter this condition. If a concern exists about font resources from user libraries being captured and “polluting” the printer font resources, there are several actions you can take to guard against this: • Change the USRRSCLIBL parameter in the PSFCFG object to *NONE. This means that user libraries are not searched for resources. 
• Run the CHGFNTRSC command against any fonts in user libraries specifying FNTCAPTURE(*NO). • Suppress font capturing altogether by setting FNTCAPTURE to *NO in the PSFCFG object. 4.11 Creating AFP fonts with Type Transformer Type Transformer is a Windows-based PC tool that can be used to create AFP fonts for the AS/400 system. All the source Type 1 fonts used to build the AFP Font collection are supplied or you can use your own Type 1 fonts. Here is an example of building single-byte fonts and moving them to the AS/400 host. Chapter 4. Fonts 111 A single byte AFP font can be created in five steps: 1. Select the Output Font resolution. Valid options are any combination of AFP Outline, 240-pel raster, and 300-pel raster. In the following example, AFP Outline and 300-pel raster fonts are created. 2. Select the icon to choose the Type 1 (Typefaces) to be converted to AFP Fonts (Figure 83). Any directory that has valid Adobe Type 1 outline fonts (*.pfb extension) will have its typeface displayed. You can create Adobe Type 1 outline fonts from TrueType fonts with the FontLab editor supplied in this package. Figure 83. Type Transformer: T1 icon selection Highlight the typefaces, and click OK. You can choose one or several typefaces to convert as long as the *.pfb files reside in the same directory (Figure 84 on page 112). 112 IBM AS/400 Printing V Figure 84. Selecting multiple typefaces 3. If you are creating raster fonts or coded fonts, select the icon to choose the point sizes to be used (Figure 85). Figure 85. Selecting a point size You can select one or multiple point sizes by highlighting the point sizes to be used and clicking Add (Figure 86). There is an option to create fractional point sizes. If you choose this option, you are required to complete the character set name or the coded font name. More information is provided in the Type Transformer User’s Guide that comes online with this product. Chapter 4. Fonts 113 Figure 86. Choosing multiple point sizes 4. Choose a filter by clicking the icon to reduce unnecessary characters (Figure 87). This is an optional step, but it may help keep the size of the AFP font to a minimum. More information on character lists can be found in the Type Transformer User’s Guide. Figure 87. Choosing a filter Select the character list, and click Open (Figure 88 on page 114). 114 IBM AS/400 Printing V Figure 88. Select Character Lists 5. Start the job by clicking the icon (Figure 89). Figure 89. Starting the job Give the job a name (up to eight characters) and a description. Select the type of reports to generate, and click Transform (Figure 90). Chapter 4. Fonts 115 Figure 90. Start Job There are several additional options that you can use to customize the AFP output fonts: • You can define coded fonts using the icon. • You can rename the coded fonts using the icon. • You can customize output typeface names using the icon. • You can customize character set font names using the icon. Once the font conversion job is complete, store the fonts on the AS/400 system using the icon (Figure 91). Figure 91. Storing converted fonts You can select the output fonts to store from the window shown in Figure 92 on page 116. Choose the platform, highlight the font objects to store, and click Store. 116 IBM AS/400 Printing V Figure 92. Select store destination Personal Communications V4.3 (or higher) or Client Access/400 and Object Rexx for Windows is required to use this store function. Select the session ID where your AS/400 system is logged on. 
Provide the system name, select the output libraries, and provide a user ID and a password. Type Transformer stores the font resources on your AS/400 host (Figure 93). Figure 93. Storing fonts on the AS/400 host © Copyright IBM Corp. 2000 117 Chapter 5. The IBM AFP Printer Driver The IBM Advanced Function Presentation Printer (AFP) Driver is a printer driver used to produce AFP output from PC applications. This means it can be used for printing PC documents on high-speed AFP system printers, produce electronic forms using your favorite PC application, and even create signatures and logos from existing or newly-scanned sources. The driver is included with Client Access/400 or may be downloaded from the World Wide Web free of charge. 5.1 Overview The AFP Printer Driver is supported in the following environments: • Windows 3.1 • Windows for Workgroups 3.11 • WIN OS/2 • Windows 95 • Windows 98 • Windows 2000 • Windows NT The AFP drivers are similar to standard PC drivers in that they are small in size, fit on a standard diskette, and are installed in the normal manner (for example, through the Windows Control Panel). They differ from normal printer drivers in that the output is Advanced Function Presentation Data Stream (AFPDS) instead of the more usual Printer Control Language (PCL), Personal Printer Data Stream (PPDS), PostScript, and others. You “print” the output to a port or file the same as any Windows printer drivers. 5.1.1 Why use the AFP Printer Driver The AFP Printer Driver offers a variety of functions to optimize your output: • Overlays: Creating overlays (electronic forms) with the AFP Printer Driver means you can use your existing PC application to design a form and are limited only by the capabilities of that application. You can use advanced desktop processing features, such as curved boxes and shading together with basic functions such as text alignment and spell checking. Company letterhead, terms and conditions, or an invoice layout are common examples. • Page segments: If you already have your company or client's logos in PC format, you can include these in overlays, or perhaps create them as a separate AFP resource called a page segment. Signatures are another candidate. Captured at a PC-attached scanner, they can be imported into a PC application and then “printed” as an AFP page segment. Individual page segments representing user's signatures can then be printed along with the letterhead overlay. • AFP documents: Using the AFP Printer Driver with Client Access/400 network printing, you can send your PC documents for printing on a high-speed AS/400 AFP system printer instead of overloading your desktop PC printer. 118 IBM AS/400 Printing V 5.2 Installing the AFP Printer Driver The following instructions use Client Access/400 for Windows 95/NT V3R1M2 as an example. They assume that Client Access/400 is already installed without the AFP Printer Driver installed. 1. This procedure requires that you re-boot your PC. End all other applications before beginning this process. 2. Open the Client Access folder and then the Accessories folder. 3. Double-click the Selective Setup icon. 4. At the Install Client Access - Component Selection window, select the Printer drivers checkbox (Figure 94), and click Change. Figure 94. Client Access/400 Component Selection display 5. Select the AFP printer driver (Figure 95). Figure 95. Installing the Client Access/400 printer drivers Chapter 5. The IBM AFP Printer Driver 119 6. Click Continue. 7. Click Next. 8. 
When you are satisfied with the settings, click Next. The installation begins, taking a few moments to load the driver. 9. At this point, you may choose to view the README.TXT file. 10.Select the option to re-boot your PC at this time. 11.When your PC has restarted, a Welcome to Client Access window is displayed. Close this window, and open the Printers folder. 12.Click Add Printer. 13.Select the Local printer radio button (Figure 96). Figure 96. Selecting a local printer driver 14.At the Manufacturer and Printer window, select IBM and an AFP driver that is appropriate for your environment (Figure 97). Figure 97. Manufacturer and printer window 120 IBM AS/400 Printing V The available drivers and their uses are: • IBM AFP 144: Generic AFP driver for impact printers. Use this driver for creating AFPDS output at approximately 144 dpi. This is used only for printing to IPDS impact printers such as certain IBM 6400, 4247, and 4230 models. • IBM AFP 600: Generic AFP driver for any IPDS laser printer at 300 or 600 dpi. • IBM AFP 3160: Creates 240-pel output for the 3160 Model 1 printer. • IBM AFP 240 (Microsoft): Generic 240 dpi AFP driver for 32-bit Windows systems (Windows 95). • IBM AFP 240 (Windows 3.x drivers): Generic 240 dpi AFP driver for 16-bit Windows systems such as Windows 3.1, Windows for Workgroups, and WIN-OS/2. • IBM AFP 300 (Microsoft): Generic 300 dpi AFP driver for 32-bit Windows systems (Windows 95). • IBM AFP 300 (Windows 3.x drivers): Generic 300 dpi AFP driver for 16-bit Windows systems such as Windows 3.1, Windows for Workgroups, and WIN-OS/2. • IBM AFP xxxx (Microsoft): Printer-specific AFP driver for 32-bit Windows systems (Windows 95). • IBM AFP xxxx (Windows 3.x drivers): Printer-specific AFP driver for 16-bit Windows systems such as Windows 3.1, Windows for Workgroups, and WIN-OS/2. • IBM AFP Facsimile Support/400: Specific driver for use with Facsimile Support/400. This AFP driver is used for faxing PC documents with the Facsimile Support/400 program product. It has support for Image only. • IBM AFP WPM/2: Specific driver for ImagePlus Workstation Program/2. This AFP driver is used for producing AFP output from the IWPM/2 product. It also has support for Image only. You can install more than one AFP print driver just as you can install multiple printer drivers for one physical printer (for example, PCL and Postscript drivers). 15.The next window shows a list of the ports on your PC. Select FILE (Figure 98) for creating overlays and page segments or a printer port to print documents. Chapter 5. The IBM AFP Printer Driver 121 Figure 98. Connecting the printer driver to a port 16.Leave the next window with the defaults for printer name and “no” for the default Windows printer. 17.Change the invitation to print a test page to No. 18.Click Finish. Then a new printer icon is created (Figure 99). Figure 99. Completed Add Printer process The driver is now ready for use with your PC applications. To learn how to do this, refer to 5.3, “Creating an overlay” on page 122, or 5.4, “Creating a page segment” on page 126. 5.2.1 Installation from the World Wide Web The latest version of the AFP Printer Driver may be obtained from the World Wide Web at: http://www.printers.ibm.com/afpdr.html However, only the version supplied with the IBM program products are licensed and, therefore, supported by IBM. Copies of the driver from the Web are supplied “as is”. 
122 IBM AS/400 Printing V To add the AFP Printer Driver, download the installation program to a convenient directory (C:\TEMP, for example). Then perform the following steps: 1. Create a destination directory to receive the files (C:\AFPDRVR in this example). Ensure the directory name has no more than eight letters in its title, or the driver files will not be unpacked. 2. Click Start->Run and enter the string shown in Figure 100. Be sure to include the “/D” option to create any sub-directories that may be needed. You can specify the destination directory to be a diskette or a network drive if required. 3. After you unpack the files, the procedure to install the driver is the same as already described (from step 11 in 5.2, “Installing the AFP Printer Driver” on page 118). However, in this case, at the Manufacturer and Printer window, you need to select Have Disk. Then indicate the drive and directory where you unpacked the files. Figure 100. Running the AFP Printer Driver installation program 5.3 Creating an overlay The following steps show you how to set up the driver for producing overlays (electronic forms). You can perform this process globally for Windows 95 or when you select the driver from your PC application (through the Properties button). 1. Open the Printers window, and select the AFP Printer Driver icon. Right-click, and select Properties. 2. Select the Details tab, and then select Setup. The display shown in Figure 101 appears. Chapter 5. The IBM AFP Printer Driver 123 Figure 101. AFP Printer Driver Setup a. If the Code page box is empty, click Defaults, and the T1001004 code page (Personal Computer: Desktop Publishing) is added. b. Change Paper Size as required (for example, A4 or Letter). c. Check that the Image Resolution dialog box matches that of your target printer. If you are using a specific driver for your printer model, this box is grayed out as shown in Figure 101. Note: Most desktop laser printers, including the IBM Network Printers, use AFP resources at 300 dpi even if they subsequently print the output at 600 dpi. If you are unsure of your printer's resolution, refer to the tables in Appendix E, “Printer summary” on page 313. d. Leave the Orientation dialog box at its default (Portrait), unless you want to print documents in landscape. This setting is overridden by the application's Page Setup (or similar), and is only used for applications that do not have page setup control such as Microsoft Paintbrush. e. Leave the Compressed Images parameter selected. If you use an AFP Print Driver for an older IBM printer, such as an IBM 3820, this parameter is not selectable. 3. Click Options... to display the window shown in Figure 102 on page 124. 124 IBM AS/400 Printing V Figure 102. AFP Printer Driver setup: Options—Overlay 4. Change the Output Type to Overlay (not Medium overlay). a. Select the Clip limits option, leave the Clip Method as Offset plus size, and change Top and Left to “0” (Figure 103). b. The values for Height and Width are in proportion to the paper size you defined earlier but may be changed if required. Figure 103. AFP Printer Driver setup options: Clip limits c. Select OK to save these settings and return to the Options window. 5. For now, no changes are required to the Fonts window. This is discussed further in 5.5, “Text versus image” on page 129. 6. No changes are required to the Images window. 7. Click OK to save these settings and return to the main page of the AFP Printer Driver Setup. 8. Click OK to close the Setup window. 9. 
Click OK to close the Printer Properties window. Chapter 5. The IBM AFP Printer Driver 125 To use the AFP Printer Driver with your PC application, simply select the print command (sometimes a separate printer setup is available). 10.Select the required AFP Printer Driver. 11.Click Properties to check or change your output type or if you want to change any of the settings. 12.When you confirm the print operation, a Print To File dialog box is shown with a default directory location. If you have shared folders support (that is, your AS/400 disks are mapped to your PC as local disks), you can print directly to a convenient shared folder as shown in Figure 104. You can give the file any name you want, but a good convention is to use the suffix .OLY (for Overlay). Figure 104. Print to File on Shared Folder Note: In this example, we assume that the i:\ drive is assigned to QDLS. If you do not have shared folders support, you need to file transfer the AFP file using another method such as Client Access/400 file transfer or File Transfer Protocol (FTP) if your PC and AS/400 system are using TCP/IP. The latter method is described in 5.6.2, “File transfer of AFP resources using FTP” on page 130. 13.As a one-time step, you must create a physical file on the AS/400 system to receive the resource. Use the following command: CRTPF FILE(SIMON/UPLOAD) RCDLEN(32766) TEXT('File for transfer for AFP resources') LVLCHK(*NO) 14.Copy your AFP file from the folder into the physical file as shown here: CPYFRMPCD FROMFLR(ITSODIR) TOFILE(SIMON/UPLOAD) FROMDOC(INVOICE.OLY) TRNTBL(*NONE) In the preceding example, we copied an overlay (INVOICE.OLY) created using the AFP Printer Driver from a shared folder (ITSODIR) to a physical file (UPLOAD). If you used Client Access file transfer or FTP, the object is already in the physical file. 15.Create the OS/400 AFP resource: CRTOVL OLY(SIMON/INVOICE) FILE(SIMON/UPLOAD) MBR(UPLOAD) TEXT('Coffee Shop Invoice') 126 IBM AS/400 Printing V This is now an AFP resource that may be used with your applications as described in Chapter 3, “Enhancing your output” on page 67. It is an OS/400 object with an object type of *OVL. Note: Steps 13, 14, and 15 have been automated into an OVERLAY command provided with the AS/400 Programming Sampler available from the AS/400 printing Web site, which is: http://www.ibm.com/printers Select Resources for AS/400, and click Downloads/freetools. 5.4 Creating a page segment The following steps show you how to set up the driver for producing page segments (AFP images such as graphics, logos, and signatures). You can perform this process globally for Windows 95 or when you select the driver from your PC application (through the Properties button). 1. Follow the process described in 5.3, “Creating an overlay” on page 122, up to step 3 (“Click on Options”). 2. Change the output type to Page Segment. 3. The only advanced options available are Clip limits and Images. Select the Clip limits dialog box, and leave the Clip Method as Offset plus size. The next step depends on the image you want to create as a page segment. For an image that occupies most or all of the page, leave the Top/Left and Width/Height settings at their defaults. If you are producing a company logo or signature, which typically occupies a small area of the page, you can: a. Place your logo in the top left-hand area of the page. b. In the AFP Printer Driver Setup, change Paper Size to User Defined (for example, 2 inches wide by 1.5 inches deep). See Figure 105. Figure 105. 
Changing the User Defined Paper Size This reduces the amount of surrounding white space you capture with the page segment and makes positioning it easier. See Figure 106 for an example. Chapter 5. The IBM AFP Printer Driver 127 Figure 106. Clip art logo in a PC application Alternatively, you can use this method: a. At the Advanced Clip Limits window, enter the coordinates of the top left-hand corner of the logo (or area you want to capture) in Top and Left. b. Change Width and Height as required (for example, 2 inches by 1.5 inches). This latter method does not work with some newer applications such as Lotus Freelance. A third method is to import the logo into Microsoft Paintbrush (Windows 3.x only) and select the Partial option from the print command (that is, print only the area of your drawing that you specify). To use the AFP driver with your PC application, simply select the print command (sometimes a separate printer setup is available). 4. Select the required AFP Printer Driver. 5. Click Properties to check or change your output type or if you want to change any of the settings. 6. When you confirm the print operation, a “Print to File” dialog box is shown with a default directory location. If you have shared folders support (that is, your AS/400 disks are mapped to your PC as local disks), you can print directly to a convenient shared folder as shown in Figure 107 on page 128. You can give the file any name you want, but a good convention is to use the suffix .PSG (for Page Segment). 128 IBM AS/400 Printing V Figure 107. Print to File on a shared folder Note: In this example, we assume that the i:\ drive is assigned to QDLS. If you do not have shared folders support, you need to file transfer the AFP file using another method such as Client Access/400 file transfer or FTP if your PC and AS/400 system are using TCP/IP. The latter method is described in 5.6.2, “File transfer of AFP resources using FTP” on page 130. 7. As a one-time step, you must create a physical file on the AS/400 system to receive the resource. Use the following command: CRTPF FILE(SIMON/UPLOAD) RCDLEN(32766) TEXT('File for transfer for AFP resources') LVLCHK(*NO) 8. Copy your AFP file from the folder into this physical file: CPYFRMPCD FROMFLR(ITSODIR) TOFILE(SIMON/UPLOAD) FROMDOC(CUP.PSG) TRNTBL(*NONE) In the preceding example, we copied a page segment (CUP.PSG) created using the AFP Printer Driver from a shared folder (ITSODIR) to a physical file (UPLOAD). If you used Client Access file transfer or FTP, the object is already in the physical file. 9. Create the OS/400 AFP resource: CRTPAGSEG PAGSEG(SIMON/CUP) FILE(SIMON/UPLOAD) MBR(UPLOAD) TEXT('Logo - coffee cup') This is now an AFP resource that may be used with your applications as described in Chapter 2, “Advanced Function Presentation” on page 35. It is an OS/400 object with an object type of *PAGSEG. Note: Steps 7, 8, and 9 have been automated into a “SEGMENT” command provided with the AS/400 Programming Sampler available from the AS/400 printing Web site at: http://www.printers.ibm.com/products.html Then, select AS/400 application coding sample. Chapter 5. The IBM AFP Printer Driver 129 5.5 Text versus image Most versions of the AFP Printer Driver allow you to print text as text rather than as image. This is the default setting controlled by the Fonts option in Advanced Options from the main Setup window. This means that text in your PC document is produced as text wherever possible with graphics and shading being produced as image. 
It is more efficient to produce text-based overlays and documents in terms of both the file size and the speed of printing. For example, a standard business overlay created as image at 300 pel resolution might be 100K in size. That same overlay utilizing text may be less that 5K. You need to install the IBM AFP Font Collection (described in 4.3, “Which fonts are available” on page 93) on the AS/400 system where you intend to print the AFP resources. This is necessary to provide the PC code page used and to provide the AFP character sets. The latter are resolution-dependent (that is, 240 or 300-pel). Code pages are not resolution-dependent. The PC code page used with the AFP Printer Driver is located in library QFNTCDEPAG after installation of the IBM AFP Font Collection. The driver produces text by mapping common PC fonts such as Arial and Times New Roman onto host AFP equivalents, or near-equivalents such as Helvetica and Times New Roman. The font table (IBMAFP.INI) is installed with the other driver files in the WINDOWS\SYSTEM directory. You can observe these mappings by clicking on a PC font together with the point size and style (Figure 108). Figure 108. Advanced Font Options: Font substitution If required, you can add your own mappings. For example, you can map Arial Narrow to the same Helvetica host character set or even a different host font altogether using the Add button. Changes may be made using the Modify and Delete buttons. These changes are recorded in another table, PENNUSER.INI, located in the \WINDOWS directory. 130 IBM AS/400 Printing V You must ensure that all these fonts are available at print time and in the correct resolution (240 or 300-pel). Newer versions of the driver also allow the use of outline fonts. With outline fonts, you only need to specify the typeface (for example, CZH400 for Helvetica Bold) and all point sizes are mapped to it. Outline fonts are described in 4.5, “Outline fonts” on page 99. The choice of whether to use text or image is made at the Advanced Options - Fonts dialog box for overlays and documents only. The Use Substitution Table checkbox is selected by default. This means your output will use AFP fonts where possible and image (raster) elsewhere. If you want the entire document printed as Image, de-select the checkbox. You can also experiment with Use text rules. This draws lines as text instead of as an image. Note: Some versions of the driver (for example, Windows NT and earlier versions of the Windows 3.x drivers) do not support text output. Therefore, these drivers do not have the Fonts option available. Using the AFP Viewer, you can check how your document is being produced (text, image, or both). See 5.6.3.1, “Using the AFP Viewer” on page 132. 5.6 Other AFP Printer Driver tasks This section looks at customizing the AFP Printer Driver further, describes other file transfer methods for transferring the AFPDS output to the AS/400 system, and discusses some common problems. 5.6.1 Using the Images dialog box Do not confuse this with printing the document in text or image. If any part of your document uses image, you can control its appearance by selecting the Images option and then one of four gray scale methods. You can also adjust the intensity and contrast controls. How much effect you see depends on the quality and capabilities of your printer. These options are documented in the online help. 
5.6.2 File transfer of AFP resources using FTP
If you do not have support for shared folders to directly print the AFP file to the AS/400 system, you may want to use TCP/IP file transfer using File Transfer Protocol (FTP) as described here. Both your PC and the AS/400 system must be using TCP/IP, and the FTP daemon must be running on the AS/400 system. Open a DOS window, and refer to the example in Figure 109.
Figure 109. FTP session to transfer overlay resource
C:\>ftp lucyh01 1
Connected to lucyh01.systland.ibm.com.
220-QTCP at lucyh01.systland.ibm.com.
220 Connection will close if idle more than 60 minutes.
User (lucyh01.systland.ibm.com:(none)): simonh 2
331 Enter password.
Password: 3
230 USERID24 logged on.
ftp> bin 4
200 Representation type is binary IMAGE.
ftp> lcd temp 5
Local directory now C:\temp
ftp> cd simon 6
250 Current library changed to SIMON.
ftp> put test.oly 7
200 PORT subcommand request successful.
150 Sending file to member OLY in file TEST in library SIMON.
250 File transfer completed successfully.
1118 bytes sent in 0.00 seconds (1118000.00 Kbytes/sec)
ftp> quit 8
221 QUIT subcommand received.
C:\>
The steps shown in Figure 109 are explained here:
1 The FTP command to the TCP/IP name of your host system (you can use the IP address of the system instead).
2 Normal OS/400 user ID.
3 Normal password of your user ID.
4 This specifies a binary file transfer (not ASCII).
5 Change to the local (PC) directory where the AFP file is stored. You can type a different drive letter and subdirectory if appropriate (for example, D:\TEST\OVLS).
6 Change directory on the AS/400 system (actually changing the current library).
7 This copies the AFP file from the PC to the AS/400 system.
8 Type this to exit FTP.
Note: There is no need to create a physical file on the AS/400 system first. However, this method will overwrite the member in the file if it already exists.
5.6.3 Problem solving
A good source of commonly experienced problems is the README file included with the driver (true for any product, but especially so for this one). Some of the more common problems and answers are:
• When installing the AFP Printer Driver on Windows 3.x, why is a dialog box displayed prompting me to insert a diskette with Serif fonts?
Answer: Ignore this dialog box. Select Cancel in the dialog box, and the installation will complete successfully.
• How do I know which version of the AFP Printer Driver I am using?
Answer: From the AFP Printer Driver's Setup window, click About. The version shown is similar to IBM AFP Printer Driver for Windows, Version 4.22 for a driver from the World Wide Web, or IBM AFP Driver for Windows, Version 4.12 for the Client Access/400 version.
• When I print an AFP document or spooled file using an AFP resource created by the AFP driver, I get the message "Code page T1001004 was not found".
Answer: If you are using text instead of image, you need this PC ANSI code page on the AS/400 system. See 5.5, "Text versus image" on page 129.
5.6.3.1 Using the AFP Viewer
Details on the use of the AFP Viewer can be found in several sources, including:
• Client Access for Windows
• AS/400 Guide to Advanced Function Presentation and Print Services Facility, S544-5319
The AFP Viewer can be a useful tool for diagnosing problems. For example, you can invoke the AFP Viewer to examine the overlay presented in Chapter 3, "Enhancing your output" on page 67. Follow these steps: 1.
Open the Client Access folder, then the Accessories folder. 2. Double-click the AFP Workbench Viewer icon. 3. Select File->Open, and locate the file name of the overlay (CAFE.OLY, in this example). The resulting window is shown in Figure 110. Chapter 5. The IBM AFP Printer Driver 133 Figure 110. AFP overlay viewed using the AFP Viewer If you click Options->Image View-> Color->and your favorite color, the AFP output is displayed in this color where the output is represented by image as opposed to a screen font. In this case, all the text in the overlay appears in black (text) and the logo and boxes in red (image). This is useful for tracking performance problems with documents or resources created using the AFP printer driver (for example, when your all-text document has actually been created as image instead of more efficient text). 134 IBM AS/400 Printing V 5.6.4 Performance of the AFP Printer Driver The most important factor in the performance of the driver is whether it is produced in text or image and has already been discussed. Other factors that help maintain or improve performance are: • Crop page segments (so you do not “print” the rest of the page as white space). • Avoid excessive use of shading. • Draw square boxes, rectangles, and so on rather than rounded boxes. It may be possible for the driver to print the former as text rules. 5.6.5 Creating AFP documents The following steps take you through a one-time process to set up the driver for producing AFP versions of PC documents (for example, letters or reports produced using Lotus Word Pro or Microsoft Word, presentations using Lotus Freelance Graphics, and spread sheets from Microsoft Excel). These are just a few typical applications. As long as you are using a Windows or OS/2 application with a graphical user interface, you can “print” your output using the AFP printer driver. You can perform this process globally for Windows 95 or when you select the driver from your PC application (through the Properties button). 1. Follow the process described in 5.3, “Creating an overlay” on page 122, up to step 3 (“Click Options”). 2. Change the output type to Document. Leave the Output Type option at the default. 3. Click Form Definitions and then click Modify... (Figure 111). Figure 111. AFP Printer Driver setup: Options—Document a. If you want to specify duplex printing and use a different drawer, select the Create inline form definition checkbox and the other options as required. You can also specify an AFP overlay to be printed with your document, but you must ensure it is available as a separate resource on the system from which you want to print. In the example shown in Figure 112, we specified simplex printing from Drawer 2. Chapter 5. The IBM AFP Printer Driver 135 Figure 112. Selecting an inline form definition b. Click OK to save these settings, and return to the Options window. 4. Select OK to save these settings, and return to the main page of the AFP Printer Driver setup. 5. Click OK to close the Setup window. 6. Click OK to close the Printer Properties window. The driver is now set up to produce AFP versions of your PC documents. To configure Client Access/400 Network Printing, see 9.2, “Client Access/400 Network Printing” on page 186. 136 IBM AS/400 Printing V © Copyright IBM Corp. 2000 137 Chapter 6. Host print transform This chapter describes how the host print transform function can be used to convert SCS and AFPDS spooled files into an ASCII printer data stream. 
Host print transform has been available on the AS/400 system since Version 2.0 Release 3.0. New capabilities have been added in the versions and releases that have followed. 6.1 Host print transform overview The host print transform function allows SCS-to-ASCII and AFPDS-to-ASCII conversion to take place on the AS/400 system instead of by 5250 emulators. Having the conversion take place on the AS/400 system provides the following advantages: • Consistent output for most ASCII printers: The host print transform function is capable of supporting many different types of ASCII printer data streams (for example, the Hewlett-Packard Printer Control Language (PCL), the IBM Personal Printer Data Stream (PPDS), and the Epson FX and LQ data streams). Having the conversion done on the AS/400 system ensures that the resultant ASCII printer data stream provides the same printed output regardless of the emulator or device to which the printer is physically attached. • Support for many different ASCII printers: Currently, each emulator supports a limited number of ASCII printers. With the host print transform function, most IBM printers and a large number of OEM printers are supported. • Customized printer support: Workstation customizing objects that come with the host print transform function can be updated by the user to change or add characteristics to a particular printer. Also, if the host print transform function does not have a workstation customizing object for a printer you want to use, you can create your own. Figure 113 on page 138 shows an overview of some of the ways in which ASCII printers can be attached. Host print transform can be used to print to all of these printers. 138 IBM AS/400 Printing V Figure 113. Host print transform overview ASCII printers can be attached to displays, PCs, or directly to a LAN. For detailed information on printer attachment methods, see 1.7.7, “Printer attachment methods” on page 32. Host print transform is also used with the remote system printing function (LPR/LPD). For more information, see Chapter 8, “Remote system printing” on page 171. Finally, Facsimile Support/400 uses the host print transform when the fax controller used is an IBM 7852-400 ECS/Data Fax modem. For host print transform considerations on performance, recoverability, fidelity, and currency, see 1.7.8.1, “PSF/400 IPDS printers versus HPT ASCII printers” on page 32. 6.2 Host print transform enhancements The host print transform function continues to be enhanced either by PTFs or in new versions or releases of OS/400. Host print transform includes the following enhancements in V3R1 and later: • AFPDS to ASCII transform and AFPDS to TIFF format transform; support for text, image, and barcode commands. • New and enhanced tags for WSCST; new data streams supported. • New API QWPZHPTR brings the capabilities of the host print transform to the AS/400 application developers. • New manufacturer type and model special values are added continuously by PTFs as part of the base code. Modem InfoWindow Server PC LPR PJL Lexlink Fax IBM Network Station LAN ASCII Printers Chapter 6. Host print transform 139 • Support DBCS printing; both the SCS to ASCII and the AFPDS to ASCII transform are supported (V3R2 and V3R7 and later). • Image scaling enhancement; with this enhancement, Facsimile Support/400 received faxes are printed at the correct size. • New barcodes, Royal Mail, and Japan Postal are now supported (V4R2). 
Note: All the enhancements provided by PTFs are already available and are part of PTF cumulative tapes. 6.3 Host print transform process SCS or AFPDS spooled files can be converted to an ASCII printer data stream and printed on ASCII printers. The host print transform converts the SCS data stream or the AFPDS data stream just before it is sent to the ASCII printer. The AS/400 spooled file contains SCS data or AFPDS data, not the converted ASCII data. Note: IPDS spooled files cannot be converted by the host print transform. AFP resources (such as fonts, overlays, page segments) referenced in AFPDS spooled files are converted into an ASCII printer data stream and passed to the ASCII printer. Figure 114 shows the host print transform process. Figure 114. Host print transform process The host print transform function generates an ASCII printer data stream for a number of IBM and non-IBM printers. To generate the different ASCII data streams, the host print transform function uses AS/400 system objects that describe characteristics of a particular printer. These objects are named Work Station Customizing Objects (WSCST) and you can customize them. The host print transform API QWPZHPTR invokes the SCS transform or AFPDS transform according to the data stream type (printer attributes). This API brings the capabilities of host print transform to the AS/400 application developer. Host Print Transform Spool Application Work Station Customizing Object (WSCST) Image Print Transform (V4R2) QWPZHPTR API AFP Resources Printer File ASCII Printer DEVTYPE *SCS or *AFPDS ASCII 140 IBM AS/400 Printing V In Version 4.0 Release 2.0, if the image print transform function is enabled, host print transform calls it for USERASCII spooled files. If the USERASCII spooled file contains Tag Image Format (TIFF), Graphics Interchange Format (GIF), OS/2 and Windows bitmap (BMP), or PostScript Level 1 data streams, it is processed by the image print transform. For detailed information on the image print transform function, see Chapter 7, “Image print transform” on page 161. 6.4 Enabling host print transform To enable the host print transform function, you must change the printer device description, or if you are using remote system printing, change the output queue description. The following parameters are used by the host print transform function: TRANSFORM Host print transform function MFRTYPMDL Manufacturer, Type and Model PPRSRC1 Paper source 1 PPRSRC2 Paper source 2 ENVELOPE Envelope source ASCII899 ASCII code page 899 support (symbols code page) WSCST Workstation customizing object and library Host print transform is enabled when you specify *YES for the TRANSFORM parameter in the printer device description, or if you are using remote system printing, it is enabled in the output queue description. Note: Client Access for Windows 95/NT creates or changes the printer device description based on the printer's session configuration. The host print transform function should be enabled by changing the session configuration on the personal computer and not the device description in the AS/400 system. For detailed information, see Chapter 9, “Client Access/400 printing” on page 185. The host print transform function is also available when using remote system printing with CNNTYPE(*IP) or (*IPX) and the Send TCP/IP Spooled File (SNDTCPSPLF) command. • For remote system printing, the TRANSFORM, MFRTYPMDL, and WSCST parameters are part of the Create Output Queue (CRTOUTQ) command and Change Output Queue (CHGOUTQ) command. 
• The SNDTCPSPLF command includes the TRANSFORM, MFRTYPMDL, and WSCST parameters. The same WSCST object works for both the AFPDS to ASCII transform and the SCS to ASCII transform. 6.5 SCS to ASCII transform The SNA Character String (SCS) data stream is a text-only data stream used for such items as job logs and general listings. The SCS to ASCII portion of the host print transform function provides 3812 SCS printer emulation. That means it supports page printer functions such as orientation and Computer Output Reduction (COR). Chapter 6. Host print transform 141 SCS to ASCII transform works by mapping commands in the SCS data stream to similar commands in the ASCII printer data stream. It does not support converting the data stream to an image the same way the AFP to ASCII transform does in raster mode. Host print transform has the ability to process an IOCA image embedded in the SCS data stream. This is done by OfficeVision/400 with the graphic instruction. The target printer must be a laser printer supporting the PPDS or PCL data streams. Note: The OV/400 graphic instruction allows you to embed an IOCA (image) or GOCA (graphic) object into the SCS data stream. Only IOCA objects are supported by the host print transform function. Overlays referenced in the printer file, either for an application or for OfficeVision/400, are not supported by the SCS to ASCII transform. Note: If the printer file device type is changed to *AFPDS, the spooled file created by the application is AFPDS. The overlays referenced in the printer file (front overlay and back overlay) will be handled by the AFP to PCL transform. Almost all ASCII page printers have an unprintable border around the page where data cannot be printed. The SCS to ASCII transform function can compensate for the no-print borders. This is demonstrated in Figure 115. Figure 115. NOPRTBDR tag example LINE 7 LINE 7 Unprintable border Unprintable border 1 2 3 142 IBM AS/400 Printing V This works the same for the other NOPRTBDR tags (bottom, left, and right). The value specified is always a correction. Note: The no-print border values in the WSCST object cannot be used to position or format your output. Depending on the unprintable border size, no correction is possible if your print output starts at line 1, 2, or 3. 6.6 AFPDS to ASCII transform AFPDS to ASCII transform supports AFPDS font, text, image, and barcode commands. It can convert the AFP data stream to a number of ASCII printer data streams, but the best or premier support is to the following ASCII printer data streams: • PPDS levels 3 and 4 (IBM 4019 and 4029 laser printers) • PCL 4, 5, and 6 (IBM Network Printers, IBM 4039 laser printer, HP LaserJet, HP InkJet (in raster mode only) For other ASCII printer data streams, only the text of the AFP document is printed. Images and barcodes are not supported. If the printer does not support absolute movement, and the tags are not defined in the WSCST, the text is not positioned correctly. It is shown as one long string. AFPDS resources (overlays, page segments, fonts) referenced in AFPDS spooled file are automatically converted and passed to the ASCII printer. See 6.6.3, “Processing AFP resources” on page 148, for more information. The AFPDS to ASCII transform function was developed so that the transform always converts the AFP data stream to ASCII as well as possible. AFPDS functions that are not supported by the AFPDS to ASCII transform or cannot be converted to the ASCII printer data stream are ignored. 
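Before looking at the two conversion methods, it may help to see how an AFPDS spooled file is typically routed to an ASCII LAN printer through host print transform. The following sketch uses a remote output queue and the TRANSFORM, MFRTYPMDL, and CNNTYPE parameters described in 6.4, "Enabling host print transform". The output queue name NPLASER, the IP address, the remote printer queue name PASS, and the *HP4 model value are assumptions for this example only; substitute the values that match your printer and release:
CRTOUTQ OUTQ(QGPL/NPLASER) RMTSYS(*INTNETADR) RMTPRTQ('PASS') AUTOSTRWTR(1) CNNTYPE(*IP) DESTTYPE(*OTHER) TRANSFORM(*YES) MFRTYPMDL(*HP4) INTNETADR('10.1.1.25')
When a writer is started to this queue, SCS and AFPDS spooled files placed on it are converted by host print transform just before they are sent to the printer, as described in 6.3.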
AFPDS to ASCII transform has two methods of performing the data stream conversion: • Mapping mode: Map AFP commands to similar commands in the ASCII printer data stream. This method is available for all supported ASCII printer data streams. 1 In this example, the top margin is ½ inch. This is the equivalent of three lines. 2 In the application, the first line prints at line 7, which means a skip of six lines, or one inch. 3 The top no-print border (NOPRTBDR) tag in the host print transform WSCST object is set to 720/1440 inch (½ inch). This value (equivalent to three lines) is a correction. In this case, the NOPRTBDR value is equal to the top margin and will compensate for it. The first print line prints at line 7 as defined in the application. Notes Chapter 6. Host print transform 143 • Raster mode: Builds a raster image of the page in AS/400 memory and prints the page as an image. This method is available for PPDS, pages, and PCL data streams. Host print transform uses the mapping mode or the raster mode according to the printer data stream specified (PRTDTASTRM tag) in the Workstation Customizing object (WSCST object). To use raster mode, the PRTDTASTRM tag must be changed in the referenced WSCST object (for example, for a PCL5 printer from HPPCL5 (mapping mode) to HPPCL5I (raster mode)). See 6.8, “New and enhanced tags for WSCST objects” on page 152, for more information. AFPDS to ASCII transform does not require PSF/400 to transform and print AFPDS spooled files on ASCII printers. 6.6.1 Mapping mode Mapping mode maps AFPDS commands to similar commands in the ASCII printer data stream. This method is available for all supported ASCII printer data streams. Mapping mode provides good performance, but is limited in function on the ASCII printer. For example, you cannot print 270 degree orientation to a printer that only supports 0 and 90 degree orientations. Using mapping mode, the AFPDS to ASCII transform can convert and download AFP host resident fonts to PPDS and PCL printers. This provides font fidelity to these printers. For other ASCII data streams, only printer resident fonts can be used with mapping mode. Mapping mode: Processing AFP fonts In AFP documents, fonts and code pages can be specified as printer-resident or host-resident. Printer-resident fonts are specified by a Font Global ID (FGID), and printer-resident code pages are specified by a Code Page ID (CPID). Host-resident fonts are specified by a font character set name, and host-resident code pages are specified by a code page name. When mapping AFP fonts to ASCII fonts, the AFPDS to ASCII transform allows the user to use fonts resident on their ASCII printer or download host-resident fonts to PCL and PPDS printers. The AFPDS transform can use either the 240-pel or 300-pel version of a host-resident font. For the best results, the 300-pel version should be used. With 240-pel fonts, the character images are scaled to 300-pel. This may cause the edges of the characters to be jagged or fuzzy. Font character sets exists in the 240 pel version in the Font Compatibility Set shipped with OS/400 (library QFNTCPL in QSYS). We recommend using 300-pel fonts from the IBM Font Collection for IBM Operating Systems (5648-113). When downloading host-resident fonts to an ASCII printer, the fonts are cleared from the printer's memory at the end of the document. The host print transform function assumes the ASCII printer is a shared device, and there is no way to know what other applications will do to the printer. 
When an AFP document calls for a printer-resident font and code page (FGID/CPID), the AFPDS to ASCII transform performs the following steps to select a font when the transform is in mapping mode: 144 IBM AS/400 Printing V 1. Check the WSCST object to see if these values (FGID/CPID) are defined. If they are, the printer commands from the WSCST are sent to the printer to set the font and code page. 2. If the FGID is not defined in the WSCST object, an internal table in the code lists the commonly used FGIDs and their attributes. This helps in generating the ASCII printer commands to select the font. The following legend applies to the information shown in Table 13. • U = Uniformly spaced • M = Mixed pitch • T = Typographic • i = Italic • b = Bold • w = Double Wide Table 13. Commonly used FGIDs table FGID Name Type of font Attribute Point Pitch 5 Orator U 10 11 Courier U 10 12 Prestige U 10 18 Courier U i 10 38 Orator U b 10 39 Gothic U b 10 40 Gothic U 10 46 Courier U b 10 66 Gothic U 12 68 Gothic U i 12 69 Gothic U b 12 85 Courier U 12 86 Prestige u 12 87 Letter Gothic U 12 92 Courier U i 12 110 Letter Gothic U b 12 111 Prestige U b 12 112 Prestige U i 12 160 Essay M 12 162 Essay M i 12 164 Prestige M 12 173 Essay M 12 204 Matrix Gothic U 13 Chapter 6. Host print transform 145 If the FGID is not in the table and the ASCII data stream is PPDS or PCL, the transform sends the font request to the printer and lets it perform a best fit match. This is similar to what the SCS to ASCII transform does today. 3. In all other cases, the font request is ignored, and printing continues in the current font. When an AFP document calls for a host-resident code page and font character set, the AFPDS transform performs the following steps to select a font: 1. If the ASCII data stream is PPDS or PCL, the transform obtains the font resource and converts it to the proper format for printing. 221 Prestige U 15 223 Courier U 15 230 Gothic U 15 244 Courier U w 5 245 Courier U b,w 5 252 Courier U 17 253 Courier U b 17 254 Courier U 17 256 Prestige U 17 281 Gothic Text U 20 290 Gothic Text U 27 751 Sonoran Serif T 8 27* 760 Times T 6 36* 761 Times T b 12 18* 762 Times T b 10 15* 763 Times T i 12 18* 764 Times T b,i 10 21* 765 Times T b,i 12 18* 1051 Sonoran Serif T 10 21* 1056 Sonoran Serif T i 10 21* 1351 Sonoran Serif T 12 18* 1653 Sonoran Serif T b 16 13* 1803 Sonoran Serif T b 18 12* 2103 Sonoran Serif T b 24 9* * The pitch column for typographic fonts indicates the width of the space character between the printed characters. FGID Name Type of font Attribute Point Pitch 146 IBM AS/400 Printing V 2. If the ASCII data stream is not PPDS or PCL, the transform ignores the font request. Printing continues in the current font. 6.6.2 Raster mode Raster mode builds a raster image of the page in AS/400 memory and then sends the image to the printer. This method is available for PPDS, Pages, and PCL data streams. This method is slower than mapping mode, but allows: • Support of ink jet printers that require the page to be printed in order (only one pass of the page). Normally, AFP documents make multiple passes of the page (for example, an overlay is printed before the text is printed). • Font fidelity for printers to which the transform cannot download fonts. • Support of AFPDS functions not available on ASCII printers, such as multiple page orientations to a 4019 printer. Raster mode: Processing AFP fonts In AFP documents, fonts and code pages can be specified as printer-resident or host-resident. 
Printer-resident fonts are specified by a Font Global ID (FGID), and printer-resident code pages are specified by a Code Page ID (CPID). Host-resident fonts are specified by a font character set name, and host-resident code pages are specified by a code page name. In raster mode, only host-resident fonts can be used. The AFPDS transform can use either the 240-pel or 300-pel version of a host-resident font. For the best results, the 300-pel version should be used. With 240-pel fonts, the character images are scaled to 300 pel. This may cause the edges of the characters to be jagged or fuzzy. Font character sets exists in the 240-pel version in the Font Compatibility Set shipped with OS/400 (library QFNTCPL in QSYS). We recommend using 300-pel fonts from the IBM Font Collection for IBM Operating Systems (5648-113). When an AFP document calls for a printer-resident code page and font (CPID/FGID), the AFPDS to ASCII transform performs the following steps to select a font if the transform is in raster mode: 1. The transform looks in the spooled file library list and font libraries QFNTCPL and QFNTxx for a host-resident character set and code page. The code page name to look for is determined by converting the CPID to a four-character string and appending it to the prefix “T1V1”. The font character set name to look for is determined by looking at Table 14. Table 14. Font substitution table 2 FGID range Substituted font character set name Fonts 1 through 17 C0S0CR10 Font 18 C0S0CI10 Fonts 19 through 38 C0S0CR10 Font 39 C0D0GB10 Font 40 C0D0GT10 Fonts 41 through 45 C0S0CR10 Chapter 6. Host print transform 147 If code page 259 (Symbols) is specified, Table 14 is not used. In this case, character set C0S0SYM2 is used for fonts 0 to 65. For all other fonts, character set C0S0SYM0 is used. Font 46 C0S0CB10 Fonts 47 through 65 C0S0CR10 Fonts 66 through 68 C0D0GT12 Font 69 C0D0GB12 Fonts 70 through 91 C0S0CR12 Font 92 C0S0CI10 Fonts 93 through 109 C0S0CR12 Fonts 110 through 111 C0S0CB12 Fonts 112 through 153 C0S0CR12 Fonts 154 through 161 C0S0ESTR Font 162 C0S0EITR Fonts 163 through 200 C0S0ESTR Fonts 201 through 210 C0D0GT13 Fonts 211 through 229 C0S0CR15 Font 230 C0D0GT15 Fonts 231 through 239 C0S0CR15 Fonts 240 through 246 C0S0CR10 Fonts 247 through 259 C0D0GT18 Fonts 260 through 273 C0S0CB10 Fonts 274 through 279 C0D0GT18 Fonts 280 through 289 C0D0GT20 Fonts 290 through 299 C0D0GT24 Fonts 300 through 2303 C0D0GT18 Fonts 2304 through 3839 or 4098 through 65279 Fonts with point size 0 through 7.5 C0D0GT18 Fonts with point size 7.6 through 9.5 C0S0CR15 Fonts with point size 9.6 through 11.5 C0S0CR12 Fonts with point size 11.6 through 13.5 C0S0CR10 Fonts with point size 13.6 and greater C0S0CB10 Fonts 3840 through 4095 (User-defined) No substitution Fonts 65280 through 65534 (User-defined) No substitution FGID range Substituted font character set name 148 IBM AS/400 Printing V All of the preceding character sets exist in 240-pel versions in the font compatibility set that is shipped with OS/400 (Library QFNTCPL in QSYS) or, for best results, in 300-pel versions in the IBM Font Collection for IBM Operating Systems (5648-113). 2. If the correct host-resident font cannot be found, the transform ignores the font request and printing continues in the current font. If this is the first font request of the document, the transform ends with an error. 
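To make the substitution concrete, consider a small worked example; the FGID and CPID values are hypothetical and chosen only for illustration. Suppose a spooled file requests printer-resident font FGID 11 with code page CPID 500 while the transform is running in raster mode. Converting the CPID to a four-character string and appending it to the prefix "T1V1" gives the code page name T1V10500, so the transform searches the spooled file library list and libraries QFNTCPL and QFNTxx for that code page. FGID 11 falls in the range "Fonts 1 through 17" in Table 14, so the substituted font character set is C0S0CR10. If either object cannot be found, the request is handled as described in step 2 above.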
When an AFP document calls for a host-resident font character set and code page, the AFPDS transform gets the font character set and converts it to the proper format for printing. Font bitmaps are moved into the raster image of the page. 6.6.3 Processing AFP resources AFPDS to ASCII transform uses the new List Spooled File AFPDS Resources (QGSLRSC) and Copy AFPDS Resource (QGSCPYRS) APIs to process external resources such as character sets, overlays, and page segments. For font character sets, the AFPDS to ASCII transform always calls the List Spooled File AFPDS Resources API for the 300-pel version. If the resource cannot be found on the system, the AFPDS to ASCII transform calls the API a second time for the 240-pel version. Overlays and page segments are converted. Support for IO1 images (IOCA) and IM1 images (raster) referenced in page segments is included. Fonts referenced in overlays are processed according to the mode selected. AFP resources are cleared from the printer's memory at the end of the document. The host print transform function assumes the ASCII printer is a shared device, and there is no way to know what other applications will do to the printer. 6.6.4 Processing AFPDS barcodes A barcode is a predetermined pattern of bars and spaces that represent numeric or alphanumeric information in a machine readable form. Barcodes are commonly used in many applications including item tracking, inventory control, point-of-sale operations, and patient care. The IBM Advanced Function Print (AFP) data stream defines an architecture for presenting barcodes. The following industry barcode standards are supported by the AFPDS to ASCII transform function: • Code 39, AIM USS-39 • MSI • UPC/CGPC Version A • UPC/CGPC Version E • UPC Two-digit Supplemental • UPC Five-digit Supplemental • EAN-8 • EAN-13 • Industrial 2-of-5 • Matrix 2-of-5 • Interleaved 2-of-5, AIM USS-1 2/5 • Codabar 2-of-7, AIM USS-Codabar Chapter 6. Host print transform 149 • Code 128, AIM USS-128 • EAN Two-digit Supplemental • EAN Five-digit Supplemental • POSTNET • Japan Postal (New V4R2) • Royal Mail (New V4R2) Note: UCC/EAN-128 is supported by host print transform. UCC/EAN-128 is a standard that consists of both a barcode standard and a defined data structure. The barcode used is a subset of Code 128. More information about the Uniform Code Council and UCC/EAN-128 can be found at: http://www.uc-council.org/ Barcode support is available for PCL and PPDS data streams in mapping mode or in raster mode. In mapping mode, barcodes are implemented in the AFPDS to ASCII transform as downloaded fonts. In addition to the barcode symbol, the barcode data stream can also request that human readable interpretation (HRI) be printed. The following fonts are required to print the barcode HRI: • OCR-A • OCR-B for UPC barcodes • Device default, Gothic Roman 10 point OCR-A, OCR-B, and Gothic Roman 10 point are available in the 240-pel compatibility fonts (library QFNTCPL in QSYS). Note: For best results, we recommend that you use outline fonts or 300-pel fornts from the IBM AFP Font Collection (5648-B45). 6.6.5 How AFPDS to ASCII transform handles a no-print border Absolute movement is done with reference to the origin of the page. The AFP data stream expects the origin to be the upper left corner of the physical page. Most ASCII laser printers have a no-print border, and their origin is in the upper left corner of the printable area. 
AFPDS to ASCII transform uses the current no-print border values from the workstation customizing object to determine the position of the origin on the ASCII printer. In mapping mode, the AFPDS to ASCII transform adjusts cursor movement within the printable area of the page so it appears that the origin is in the upper left corner of the physical page (what an AFP data stream expects). Cursor movements within the no-print border are moved to the edge of the no-print border. AFP positions past the top and left no-print border values are reduced by no-print border values to print at the correct paper location. Note: No-print border problems in mapping mode can be corrected by changing to raster mode or removing the no-print border values from WSCST. For raster mode, the page is turned into an image and the first row and column that contains a black pel is known. If that row or column is in the no-print border, the entire image is shifted to preserve the top and left edges. This may result in data being clipped from the right and bottom edges. 150 IBM AS/400 Printing V 6.6.6 AFPDS to TIFF Host print transform can also transform an AFPDS data stream to TIFF. The data stream tag (PRTDTASTRM) in the WSCST object is used to determine the type of transform: • TIFF Packbit format if PRTDTASTRM tag set to TIFF_PB • TIFF G4 format if PRTDTASTRM tag set to TIFF_G4 AFPDS to TIFF transform works the same as the AFPDS to ASCII transform in raster mode. The following source is the full WSCST source needed to transform AFPDS to the TIFF Packbit format: :WSCST DEVCLASS=TRANSFORM. :TRNSFRMTBL. :PRTDTASTRM DATASTREAM=TIFF_PB. :INITPRT DATA ='4D4D002A'X. :RESETPRT DATA ='00000000'X. :EWSCST. To create the WSCST object for the AFPDS to TIFF transform, copy the preceding source into a source file member and use the CRTWSCST command to create and compile the object. Note: WSCST objects QWPTIFFPB and QWPTIFFG4 are available in library QSYS on V3R2 and V3R7 and later. Since this is not used for printing, there is no manufacturer type and model added for it. For example, an application program can now use the host print transform API to convert an AFPDS spooled file to a TIFF image and then present the image on an IBM 3489 InfoWindow II display. 6.6.7 Transform spooled file and write to folder A program sample for retrieving data from a spooled file, transforming it through host print transform, and writing the output to a folder is available from the IBM Redbooks Web site. This type of program can be used to transform output data (for example, AFP pages to TIFF images) from an AS/4000 output queue to a folder to be accessible to a browser. The sample code can be found at: http://www.redbooks.ibm.com On the redbooks home page, click Additional Materials. Click here for the directory listing. On the list that is displayed, search for the directory SG242160. Using FTP, you can download the command (HPTTOFLR.CMD) and the source code of the program (HPTTOFLR.C) from this directory. For the transformation, this program allows you to use any of the available Work Station Customizing (WSCST) objects. For creating output in TIFF, use the WSCST example in 6.6.6, “AFPDS to TIFF” on page 150. 6.6.8 AFPDS to ASCII transform limitations The following list describes the limitations of AFP to ASCII transform. This list is not prioritized. Chapter 6. Host print transform 151 • Dot matrix ASCII printers are not supported. Since these printers do not support absolute movement, even text does not print correctly. 
Text prints as one long string. • The transform does not support AFP graphics (GOCA) commands. For example, pie charts generated by BGU or GDF files imbedded in the spooled file will not print. • The transform ignores the fidelity attribute of the spooled file and always performs content printing. • The transform does not support COR and multi-up printing. • The transform does not support color barcodes. • At this time, the transform can only produce 240 or 300 dpi images. 6.7 Host print transform customization If you do not find your printer in the list of the manufacturer type and model (MFRTYPMDL) special values, or if you need additional print functions, you can specify a workstation customized (WSCST) object instead of a MFRTYPMDL special value. Before you can begin customizing an ASCII printer, you must have information on the functions that the ASCII printer supports. You can only add or change printing functions that a printer supports. You also need the hexadecimal values for these functions. Often, the technical reference manual for the printer provides this information. The source of a WSCST object is a tag language. Tags can contain information for host print transform, hard-coded printer commands, or printer commands with replacement parameters (variables). Figure 116 shows an example of the WSCST source and the three tag types. Figure 116. WSCST source and tag types 152 IBM AS/400 Printing V Use the following steps to customize the functional characteristics of an ASCII printer: 1. Use the RTVWSCST command to retrieve an existing WSCST object into a source physical file. 2. Use SEU or the STRPDM command to update or change the WSCST source file. 3. Use the CRTWSCST command to compile or create a customized WSCST object. 4. Specify *WSCST as the MFRTYPMDL value in the printer device description, in the CRTOUTQ/CHGOUTQ if you are using remote system printing, or in the SNDTCPSPLF command. 5. Specify the name of your WSCST object in the WSCST parameter in the device description, in the CRTOUTQ/CHGOUTQ if you are using remote system printing, or in the SNDTCPSPLF command. Customizing an ASCII printer may involve a trial-and-error process. The amount of time required to customize a printer depends on the type of printer, regardless of whether the printer is already supported by the AS/400 system, and the completeness of the manual for the printer. Plan anywhere from one to five days to complete a successful ASCII printer customization. For detailed information on customizing a WSCST object, see AS/400 Printing IV, GG24-4389. The “Advanced host print transform customization” chapter contains an example and a description of the different tags. The manual AS/400 Workstation Customization Programming, SC41-3605, also contains a description of all the tags. 6.8 New and enhanced tags for WSCST objects The following list describes the new and changed tags for the host print transform WSCST objects: • PRTDTASTRM (Printer Data Stream): The PRTDTASTRM tag defines the data stream of the ASCII printer. This tag is currently defined, but the following data stream values are added: – IBMPPDS3: The IBM personal printer data stream level 3 is supported. This is used for the IBM 4019 printer. Supported functions over level 2 are page rotation and non-compressed image. – IBMPPDS4: The IBM personal printer data stream level 4 is supported. This is used for the IBM 4029 printer. Supported functions over level 3 are multiple rotations on a page and compressed image. 
– IBMPPDS3I: The IBM personal printer data stream level 3 is supported in raster mode. This value means the same to SCS to ASCII transform as IBMPPDS3 since it only supports the mapping mode. For AFP to ASCII transform, this value causes it to go into raster mode for a PPDS level 3 (4019) printer. Chapter 6. Host print transform 153 – IBMPPDS4I: The IBM personal printer data stream level 4 is supported in raster mode. This value means the same to SCS to ASCII transform as IBMPPDS4 since it only supports the mapping mode. For AFP to ASCII transform, this value causes it to go into raster mode for a PPDS level 4 (4029) printer. – HPPCL4I: The Hewlett Packard PCL4 printer data stream is supported in raster mode. This value means the same to SCS to ASCII transform as HPPCL4 since it only supports the mapping mode. For AFP to ASCII transform, this value causes it to go into raster mode for a PCL4 printer. – HPPCL5I: The Hewlett Packard PCL5 printer data stream is supported in raster mode. This value means the same to SCS to ASCII transform as HPPCL5 since it only supports the mapping mode. For AFP to ASCII transform, this value causes it to go into raster mode for a PCL5 printer. – TIFF_PB: This value is used for AFPDS to TIFF format transform. With this value, the image is generated in TIFF Packbit format. – TIFF_G4: This value is used for AFPDS to TIFF format transform. With this value, the image is generated in TIFF G4 format. – IOCA_G3MH (V3R7 and later): To support fax when the IBM 7852-400 modem is used as a fax controller. – IOCA_G3MRK2 (V3R7 and later): To support fax when the IBM 7852-400 modem is used as a fax controller. – IOCA_G3MRK4 (V3R7 and later): To support fax when the IBM 7852-400 modem is used as a fax controller. • HORAMOV (Horizontal Absolute Move): The HORAMOV tag adjusts the print position in the current line according to the value given in the command. The format of the tag is: :HORAMOV VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN CNVNUM = conversion ratio numerator CNVDEN = conversion ratio denominator DATA = ASCII control sequence. • VERAMOV (Vertical Absolute Move): The VERAMOV tag adjusts the print position in the current column according to the value given in the command. The format of the tag is: :VERAMOV VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN CNVNUM = conversion ratio numerator 154 IBM AS/400 Printing V CNVDEN = conversion ratio denominator DATA = ASCII control sequence. • RASEND (Raster Graphics End): Marks the end of a raster graphics image. The format of the tag is: :RASEND ASCII control sequence. • TOPMARGINI (Set Top Margin in Inches): Sets the top of the page in inches. The format of the tag is: :TOPMARGINI VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN CNVNUM = conversion ratio numerator CNVDEN = conversion ratio denominator DATA = ASCII control sequence. • TEXTLENL (Set Text Length): Sets the length or bottom margin of the page. The format of the tag is: :TEXTLENL VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN DATA = ASCII control sequence. • PRTNXTCHR (Print Next Character): Causes the printer to treat the next code point as a graphic character. The format of the tag is: :PRTNXTCHR DATA = ASCII control sequence. 
• PRTANGLE (Print Angle): Changes the direction of future printing on the page. This allows printing in all four directions on the same page. The format of the tag is: :PRTANGLE ANGLE = 0 | 90 | 180 | 270 DATA = ASCII control sequence. 6.9 New MFRTYPMDL special values These new manufacturer type and model (MFRTYPMDL) special values provide default paper sizes. You can use them when no device description exists for the target printer (for example, when the printer is attached using TCP/IP LPR-LPD and a remote output queue is used). Note: When a device description exists for the target printer, the default paper sizes are specified in the device description. These new MFRTYPMDL special values are available with Version 3.0 Release 2.0 and Version 3.0 Release 7.0 and later: *WSCSTLETTER Set Letter format *WSCSTLEGAL Set Legal format Chapter 6. Host print transform 155 *WSCSTEXECUTIVE Set Executive format *WSCSTA3 Set A3 format *WSCSTA4 Set A4 format *WSCSTA5 Set A5 format *WSCSTB4 Set B4 format *WSCSTB5 Set B5 format *WSCSTCONT80 Set continuous form 80 characters *WSCSTCONT132 Set continuous form 132 characters *WSCSTNONE Paper size not specified (no Set paper size command in the data stream) If you have a printer device description, you must also specify *NONE for the default paper size parameters. If you don’t, the value from the paper size parameters will override the value of the WSCST object. Note: If no paper size is specified, no COR will occur. It can be used to disable the COR function. To use these new WSCST objects, complete the following steps: 1. Retrieve the workstation customized object, for example: RTVWSCST DEVTYPE(*TRANSFORM) MFRTYPMDL(*IBM4317) SRCMBR(NP17SRC) SRCFILE(QGPL/QTXTSRC) 2. Create a customized workstation configuration object: CRTWSCST WSCST(QGPL/NP4317) SRCMBR(NP17SRC) You will receive the message “Customization object NP4317 created successfully”. 3. Stop the remote writer: ENDWTR WTR(outputq_name) OPTION(*IMMED) 4. To change the output queue, enter the CHGOUTQ command, and press the F4 (Prompt) function key. Then page down until you see the parameters shown in Figure 117. Figure 117. Change Output Queue: HPT and WSCST parameter On this display, enter the following parameter values: • Manufacturer type and model: *WSCSTA4 (or any from the other formats) • Workstation customizing object: NP4317 (the object that you created with the command CRTWSCST) Change Output Queue (CHGOUTQ) Type choices, press Enter. ......................... ....... ........ Host print transform . . . . . . *YES *YES, *NO Manufacturer type and model . . *wscsta4 Workstation customizing object NP4317 Name, *NONE Library . . . . . . . . . . . qgpl Name, *LIBL, *CURLIB ......................... ....... ........ F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 156 IBM AS/400 Printing V • Library: QGPL (the library specified in the CRTWSCST command) 5. Press Enter to modify your output queue. 6.10 DBCS support in host print transform In August 1996, host print transform was enhanced through a number of V3R2 PTFs to support double-byte character set (DBCS) printing. These enhancements can also be found in the base of the V3R7 and later versions and releases. These enhancements allow DBCS printing to ASCII printers through the Send TCP Spooled file (LPR) command or the remote system printing and ASCII LAN attached printer option where host print transform is the only transform option. 
They can also be used in place of the transform found on PC and terminal emulators, but only if they emulate a 3812 printer.
6.10.1 DBCS SCS to ASCII transform
The host print transform uses AS/400 ICONV support to convert EBCDIC data to ASCII data.
6.10.1.1 DBCS EBCDIC to ASCII transform
FROM-TO CCSID mapping tables provide the mapping of CCSIDs to convert a double-byte EBCDIC character in an application data stream into an ASCII character code value (for that same character). The workstation customizing object provides new tags to identify the FROM (EBCDIC) CCSID and the TO (ASCII) CCSID. If no tag is specified in the workstation customizing object, the FROM-TO assignment is made according to the information in Table 15.
Table 15. Default FROM-TO CCSID table
From CCSID   Default CCSID   Language
5026         932             Japanese
5035         932             Japanese
930          932             Japanese
931          932             Japanese
939          932             Japanese
933          949             Korean
937          959             Traditional Chinese
935          1381            Simplified Chinese
6.10.1.2 New WSCST objects for DBCS
The DBCS WSCST objects and their corresponding manufacturer type and model (MFRTYPMDL) special values shown in Table 16 were added as part of this enhancement for the SCS to ASCII transform.
Table 16. DBCS WSCST objects and corresponding MFRTYPMDL
WSCST        MFRTYPMDL      Description
QWPESCP      *ESCPDBCS      Epson ESC/P DBCS printers
QWPIBM2414   *IBM5575       IBM Non-Pages PS/55 printers
QWPPAGES     *IBMPAGES      IBM Pages PS/55 printers
QWPNEC201    *NECPCPR201    NEC PC-PR101/201 printers
QWPLIPS3     *CANLIPS3      Canon LIPS3 printers
6.10.2 DBCS AFPDS to ASCII transform
Host print transform processes DBCS AFP print jobs in raster mode only. For more information on raster mode, see 6.6.2, "Raster mode" on page 146. That is, a raster image of each page is created in AS/400 memory and sent to the printer. The ASCII printer must accept raster images to work with the AFPDS to ASCII transform. The main change to the AFPDS to ASCII transform is the support of DBCS fonts. The host print transform requires the DBCS fonts selected in a DBCS AFP print job to be loaded on the AS/400 system. DBCS fonts that reside on the ASCII printer are not used to process DBCS print jobs. Host print transform has also been enhanced to support a character rotation of 270 degrees. DBCS languages use a character rotation of 270 degrees to implement right-to-left, top-to-bottom printing.
6.10.3 New tags and supported data streams for DBCS
The following new tags are added for the Host Print Transform workstation customizing objects:
• EBCASCCSID (EBCDIC-to-ASCII CCSID mapping): Use the EBCASCCSID tag to begin a group of one or more EBCASCCSIDE tags. This tag must be followed by one or more CCSID mapping entries. There are no parameters on this tag. The syntax for this tag is:
:EBCASCCSID.
• EBCASCCSIDE (EBCDIC-to-ASCII CCSID mapping entry): This new tag defines the mapping of double-byte EBCDIC CCSIDs to their ASCII CCSIDs. The EBCASCCSIDE tag must follow an EBCASCCSID tag. The syntax of this tag is:
:EBCASCCSIDE
EBCDICCSID = EBCDIC CCSID (integer)
ASCIICCSID = ASCII CCSID (integer).
EBCDICCSID is a required parameter. It defines the EBCDIC CCSID identifier. The CCSID is a registered EBCDIC identifier used to specify the CCSID of the source characters. ASCIICCSID is a required parameter. It defines the ASCII CCSID identifier. The CCSID is a registered ASCII identifier used to specify the CCSID of the target characters.
• EEBCASCCSID (End EBCDIC-to-ASCII CCSID mapping table entry): Use the EEBCACCSID tag to end the EBCDIC-to-ASCII CCSID mapping customization. The syntax for this tag is: :EEBCASCCSID. • PRTALLCHR (Print All Characters): This command causes the printer to interpret the bytes that follow as printable characters rather than control codes. Note that the PRTNXTCHR provides the same function, but only for one byte. The syntax of this tag is: :PRTALLCHR VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN DATA = ASCII control sequence. • SI (Shift IN): This command causes the printer to interpret the bytes that follow as SBCS characters. The syntax of this tag is: :SI DATA = ASCII control sequence. • SO (Shift OUT): This command causes the printer to interpret the bytes that follow as DBCS characters. The syntax of this tag is: :SO DATA = ASCII control sequence. • DBSPACE (DBCS Space): The DBSPACE tag defines the ASCII control sequence for the double-byte space control function for an ASCII printer. The syntax of this tag is: :DBSPACE DATA = ASCII control sequence. • CHRORIENT (Character Orientation): The CHRORIENT tag defines the control sequence for setting different character orientations. The syntax of this tag is: :CHRORIENT ORIENT = PORTRAIT|LANDSCAPE|RTT180|RTT270 DATA = ASCII control sequence. • SCPITCH (Set Character Pitch): The SCPITCH tag defines the control sequence for setting the number of characters per inch. The syntax of this tag is: :SCPITCH VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN CNVNUM = conversion ratio numerator Chapter 6. Host print transform 159 CNVDEN = conversion ratio denominator DATA = ASCII control sequence. • SLPITCH (Set Line Pitch): The SLPITCH tag defines the control sequence for setting the number of lines per inch. The syntax of this tag is: :SLPITCH VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN CNVNUM = conversion ratio numerator CNVDEN = conversion ratio denominator DATA = ASCII control sequence. • FONTSCALING (Set Font Size Scaling): The FONTSCALING tag defines the control sequence for setting the font size scaling. The syntax of this tag is: :FONTSCALING VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN CNVNUM = conversion ratio numerator CNVDEN = conversion ratio denominator DATA = ASCII control sequence. • FONTSCALE (Set Font Size Scale): The FONTSCALE tag defines the control sequence for setting the font size scaling. The syntax of this tag is: :FONTSCALE SCALE = 1VX1H | 2VX1H | 1VX2H | 2VX2H DATA = ASCII control sequence. • CPI (Set Characters per Inch): The CPI tag defines the control sequence for setting the number of characters per inch. New values for 6, 6.7, 7.5, and 18 cpi are added to this tag. The syntax of this tag is: :CPI CPI = 5|6|67|75|10|12|133|15|166|171|18|20|25|27 DATA = ASCII control sequence. • GLTYPE (Set Grid Line Width): The GLTYPE tag defines the control sequence for setting the grid line type. The syntax of this tag is: :GLTYPE VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRN DATA = ASCII control sequence. • GLWIDTH (Set Grid Line Type): The GLWIDTH tag defines the control sequence for setting the grid line width. 
The syntax of this tag is: :GLWIDTH VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN DATA = ASCII control sequence.
• DRAWLINE (Draw Grid Line): The DRAWLINE tag defines the control sequence for the draw grid line function. The syntax of this tag is: :DRAWLINE VAROFFSET = variable offset in control sequence VARLEN = variable length VARTYPE = HIGHLOW|LOWHIGH|CHRDEC|CHRHEX|CHRAN CNVNUM = conversion ratio numerator CNVDEN = conversion ratio denominator DATA = ASCII control sequence.

To support the new DBCS printers, new data stream values are available to the PRTDTASTRM tag. These new values are:
• IBMNONPAGES: The IBM DBCS Non-Pages (dot matrix printers) data stream is supported.
• IBMPAGES: The IBM DBCS Pages data stream is supported.
• ESC/P: The Epson DBCS ESC/P data stream is supported.
• LIPS2+: The Canon DBCS LIPS2+ data stream is supported.
• LIPS3: The Canon DBCS LIPS3 data stream is supported.

Chapter 7. Image print transform
This chapter provides information about the image print transform function available with Version 4.0 Release 2.0, and describes how to enable it to provide additional support for printers that are attached to the AS/400 system. The image print transform function is an OS/400 function that is capable of converting image or PostScript data streams into AFPDS or ASCII printer data streams. The conversion takes place on the AS/400 system, which means the data stream generated is independent of any printer emulators or hardware connections.

7.1 Image print transform function
The image print transform function (Figure 118) converts image or print data from one format into another. The resultant data stream is a printer data stream. Therefore, it is capable of being interpreted by a supporting printer.

Figure 118. Image print transform function (the figure shows TIFF, GIF, BMP, and PostScript input from the integrated file system or the spool being converted by the image print transform function and the QIMGCVTI API into AFPDS for IPDS AFP(*YES) printers through PSF/400, or into PCL or PostScript for ASCII printers through host print transform)

The image print transform function can convert the following data streams:
• Tag Image File Format (TIFF)
• Graphics Interchange Format (GIF)
• OS/2 and Windows Bitmap (BMP)
• PostScript Level 1
The image print transform function can generate the following data streams:
• Advanced Function Printing Data Stream (AFPDS)
• Hewlett-Packard Printer Control Language (PCL)
• PostScript Level 1

Similar to the host print transform function, the image print transform function converts the data on the AS/400 system instead of using an emulator. When a data stream is converted by the image print transform function, the printer data stream that is created contains a bit-mapped image. A bit-mapped image is an array of numeric values. Each value represents part or all of a pixel. A pixel is a single point or dot of an image. An image is usually measured in terms of pixels for both width and height. The resolution of an image is then defined as the number of pixels (dots) per unit of measure. For example, a resolution supported by many printers is 300 dots per inch (dpi). Therefore, an image having dimensions of 1200 pixels by 1500 pixels has a width of 4 inches and a height of 5 inches when it is printed at 300 dpi.
7.2 Why use image print transform There are many advantages for using the image print transform function. • Support for Intelligent Printer Data Stream (IPDS) printers: TIFF, GIF, and BMP image files, as well as PostScript Level 1 files, can be converted to AFPDS format and printed on IPDS printers configured AFP(*YES). • Support for ASCII printers: TIFF, GIF, and BMP image files, as well as PostScript Level 1 files, can be converted to PCL-5 and PostScript Level 1 format and printed on ASCII printers supporting these standards. Note: You cannot convert from one type of PostScript to another using the image print transform function. When the input and output data streams are PostScript, the data is sent directly to the output destination without conversion. • Customized printer support: Image configuration objects are used with the image print transform function to specify certain characteristics of the converted data streams. When associated with the device description information for a printer connected to the AS/400 system, image configuration objects act as a template for the converted data stream. Attributes, such as data stream format, color, and resolution, are all specified in the image configuration object. • Additional capabilities: In addition to converting data from one format to another, other functions can be performed by the image print transform function. Among these are the ability to reduce color, compress data, and change photometricity. For more information about the features of the image print transform function, consult AS/400 System API Reference, SC41-5801. Note: You cannot perform functions that your printer does not support. For example, you cannot print in landscape orientation when your printer only supports portrait orientation. Chapter 7. Image print transform 163 7.3 Image print transform process The image print transform function converts data from one image or print data stream format to another. In the process, image processing functions can be performed, including conversion from color to gray to bi-level, re-sizing, compression, and decompression. The convert image API (QIMGCVTI) accepts an input data stream from an integrated file system (IFS) file, a spooled file, or memory, and sends the converted data stream to a file, spooled file, or memory. The user may select an image configuration to describe the output data similar to selecting a device description. Image print transform determines the required transformations from the input data stream and the image configuration object without further assistance from the user. The interfaces also allow the user to directly specify attributes of the input and output data streams or to override individual attributes in the image configuration. Figure 119. Converting data streams using image print transform In pre-spool mode (Figure 119), image print transform runs in the job calling the API. Input parameters, along with the image configuration object, are used to control the transform. A new spooled file is created, and image print transform writes the converted data stream to it. It also sets the appropriate spooled file attributes to describe the data stream (print data). Image print transform is integrated with spooled file processing so that any of the supported data stream formats can be spooled to any dot-addressable printer connected to the AS/400 system. The AS/400 system detects and performs the required transforms without assistance from the user. 
To achieve this goal, image print transform is called by the same application program interface (API) that calls the host print transform. Although most users choose to delay any transforms until print time, the API also allows transforms before spooling the file. Therefore, the user can control whether the processing occurs in pre-spool or post-spool mode. In post-spool mode, see Figure 120 on page 164 if the target printer is an ASCII printer. See Figure 121 on page 164 if the target printer is an IPDS AFP(*YES) printer. The image print transform function is called automatically by the system as part of spool processing.

Figure 120. Printer writer or remote writer with HPT (*YES)

The driver for ASCII printers calls the host print transform API interface program, which reads the spooled file attributes to determine whether to call host print transform or the image print transform post-spool interface. If the image print transform post-spool interface is called, it reads the device description, image configuration object, and the spooled file attributes directly to determine the required output format and resolution of the printer. If a data-stream transform is not possible, the post-spool interface returns an indicator to that effect to the host print transform API interface program.

Figure 121. Image print transform and PSF/400

If the target printer is IPDS AFP(*YES), Print Services Facility/400 (PSF/400) selects a spooled file to be processed. If the spooled file is *USERASCII, PSF/400 calls host print transform to find out if the spooled file can be transformed. If the spooled file can be transformed, image print transform makes the transformation, one buffer at a time, into AFPDS according to the image configuration object in the printer device description and passes the transformed buffer back to PSF/400. Note: Because the spooled file is transformed buffer by buffer, this process results in poor performance. Consider the usage carefully. If the spooled file cannot be transformed, the spooled file is held, and an error message is returned to the QSYSOPR message queue.
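When this happens, the held spooled file and the operator message can be reviewed from any AS/400 command line. The commands below are only a sketch; the output queue name MYLIB/IMGOUTQ is an assumed example name and is not taken from this chapter.
Display the error message sent to the system operator:
DSPMSG MSGQ(QSYSOPR)
Work with the output queue to find the held spooled file, then release it after the problem (for example, an image configuration object the printer does not support) has been corrected:
WRKOUTQ OUTQ(MYLIB/IMGOUTQ)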
7.4 Printing with the image print transform function The image print transform function works with both ASCII and IPDS printers that have AFP(*YES) specified in the configuration. When the image print transform function is used, the transform does not take place until after the data stream is spooled. Then, when the spooled file is printed or sent to a remote output queue, it is first sent to the image print transform function to be transformed. Once a printer device is created with the image print transform function enabled, printing with the image print transform function is done automatically. 7.4.1 Printing to an ASCII printer To enable the image print transform function when printing to an ASCII printer, complete the following steps: • Ensure that the spooled file is a *USERASCII spooled file. • Verify that the printer device description has the TRANSFORM field set to *YES. • Verify that the printer device description has the IMGCFG field set to a valid value other than *NONE. The TRANSFORM field and the IMGCFG field can be set when the device description is created with the CRTDEVPRT command, or changed after the device description is created with the CHGDEVPRT command. 166 IBM AS/400 Printing V 7.4.2 Printing to an IPDS printer To enable the image print transform function when printing to an IPDS printer that has AFP(*YES) specified in the configuration, complete the following steps: • Ensure that the spooled file is a *USERASCII spooled file. • Verify that the printer device description has the IMGCFG field set to a valid value other than *NONE. The IMGCFG field can be set either when the device description is created with the CRTDEVPRT command, or changed after the device description is created with the CHGDEVPRT command. 7.4.3 Sending the spooled files To enable the image print transform function when using remote system printing for sending the spooled files through a remote output queue, complete the following steps: 1. Ensure that the spooled file is a *USERASCII spooled file. 2. Verify that the output queue has the TRANSFORM field set to *YES. 3. Verify that the output queue has the IMGCFG field set to a valid value other than *NONE. The TRANSFORM field and the IMGCFG field can be set when the output queue is created with the Create Output Queue (CRTOUTQ) command, or changed after the output queue has been created with the Change Output Queue (CHGOUTQ) command. 7.5 Image configuration objects An image configuration object contains various printer characteristics that the image print transform function and the convert image API use when creating output. An image configuration object is basically a list of characteristics that is supported by the printer it represents, acting as a template that guides the transform process. Each image configuration object has values for the following fields: • Image format • Photometric interpretation • Bits per sample • Resolution units • Horizontal resolution • Vertical resolution • Compression type • No-print borders (left, right, top, bottom) All of these fields can be overridden by using the convert image API and specifying a value for the field of the same name. 7.5.1 Values of image configuration objects The following special values are allowed for the image configuration (IMGCFG) field of the CRTDEVPRT, CHGDEVPRT, CRTOUTQ, and CHGOUTQ commands. Chapter 7. Image print transform 167 You can also use these values when calling the convert image API. For more information, see AS/400 System API Reference, SC41-5801. 
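As a brief sketch of how one of these special values might be assigned (the device name PRT01 and the output queue MYLIB/RMTQ are assumed example names, *IMGA01 is simply one of the values listed below, and the printer device typically must be varied off before its description can be changed):
Enable the image print transform function for an ASCII printer device:
CHGDEVPRT DEVD(PRT01) TRANSFORM(*YES) IMGCFG(*IMGA01)
Enable it for a remote output queue used with remote system printing:
CHGOUTQ OUTQ(MYLIB/RMTQ) TRANSFORM(*YES) IMGCFG(*IMGA01)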
Each special value is described in terms of the data streams that are supported, the maximum resolution in dots per inch (dpi), and whether the printer has color or does not support compression. The following list contains examples of image configuration objects, grouped by type of printer. Note: For a complete list of all the image configuration objects, see AS/400 Printer Device Programming, SC41-5713, or AS/400 System API Reference, SC41-5801. • Printers supporting PCL data streams (*IMGA01-*IMGA09) *IMGA01 PCL 300-dpi printer *IMGA04 PCL 300-dpi color printer • Printers supporting PostScript data streams (*IMGB01-IMGB15) *IMGB01 PostScript 300-dpi printer *IMGB04 PostScript 300-dpi color printer • Printers supporting IPDS data streams (*IMGC01-*IMGC11) *IMGC01 IPDS 240-dpi printer *IMGC02 IPDS 300-dpi printer • Printers supporting PCL and PostScript data streams (*IMGD01-*IMGD11) *IMGD01 PCL/PostScript 300-dpi printer *IMGD02 PCL/PostScript 600-dpi printer The recommended image configuration objects for some common printers are in the following list. Note: For a complete list, see AS/400 Printer Device Programming, SC41-5713, or AS/400 System API Reference, SC41-5801. *IMGB11 Epson Stylus Color 600, 800 with PostScript *IMGD01 HP Laserjet III, IIID, IIISi, 4L with PostScript *IMGA02 HP Laserjet 4, 4P, 4V, 4Si, 4 Plus *IMGA02 HP Laserjet 5, 5P, 5Si *IMGA02 HP Laserjet 6, 6P, 6L *IMGC01 IBM 3130, 3160-1 AF Printer (240-pel mode) *IMGC02 IBM 3130 AF Printer (300-pel mode) *IMGC06 IBM 4028 Laser Printers *IMGB05 IBM 4303 Network Color Printer *IMGC06 IBM 4312, 4317, 4324 NP with IPDS feature (LAN) *IMGA02 IBM 4312, 4317, 4324 NP (ASCII/LAN) *IMGD02 IBM 4312, 4317, 4324 NP with PostScript (ASCII/LAN) *IMGC03 IBM Infoprint 60 *IMGC05 IBM Infoprint 62 Model 2 *IMGC06 IBM Infoprint 62 Model 3 *IMGC05 IBM Infoprint 4000 *IMGA02 Lexmark Optra S Printers *IMGD05 Lexmark Optra SC Color Printer *IMGA02 Okidata OL800, OL810 LED Page Printers 168 IBM AS/400 Printing V *IMGD04 QMS Magicolor CX *IMGB06 Tektronix Phaser 560 *IMGA02 Xerox 4230 DocuPrinter 7.6 Printing with the convert image API The convert image QIMGCVTI API provides the same transform capabilities as the image print transform function. In addition, printing with the convert image API gives the user more control over how the output looks than the image print transform function offers. It gives the user the ability to immediately transform a data stream when delaying the transform is not desired. It also has more options regarding the type of input object and output object. The convert image API supports input and output from an integrated file system (IFS) file, a spooled file, or main storage. The convert image API can generate a spooled file that is transformed with the image print transform function. When this is done, the convert image API stores all the values needed to do the transform in the user-defined data attribute of the spooled file for later use by the image print transform function when the transform is performed. For more information on how to use the convert image API, see the AS/400 System API Reference, SC41-5801. 7.7 Converting PostScript data streams Converting PostScript data streams is performed differently from converting image data streams. PostScript conversion requires the font files to rasterize the data. You can also find more debugging and message information if the PostScript file does not convert correctly. 
7.7.1 Fonts To convert PostScript files effectively, PostScript fonts are required to convert text and symbols into bit-mapped images. The following lists of fonts are supplied by IBM for use with the image print transform function. Each set of fonts is located in the IFS in the specified directory. For each font name, there is a corresponding font file containing rasterization information. This information is stored in the psfonts.map file. Note: Do not alter the font files or the psfonts.map file shipped with OS/400. Changing a font file or font mapping can cause the image print transform function to produce unpredictable as well as undesirable results. The Latin fonts are stored in the /QIBM/ProdData/OS400/Fonts/PSFonts/Latin directory. The Symbol fonts are stored in the /QIBM/ProdData/OS400/Fonts/PSFonts/Symbols directory. Note: For a list of the IBM supplied PostScript fonts, see AS/400 Printer Device Programming, SC41-5713. Chapter 7. Image print transform 169 7.7.2 User-supplied fonts To enhance the capabilities of the image print transform function, you can add your own font files to be used in conjunction with the IBM-supplied fonts shipped with OS/400. These fonts are called user-supplied fonts. They need to be stored in the /QIBM/UserData/OS400/Fonts/PSFonts directory: The user-supplied font mapping file (psfonts.map) is stored in the same directory as the user supplied fonts. It behaves the same way as the psfonts.map file that is shipped with OS/400. An important difference is that you can find user-supplied fonts by looking first at the user-supplied font mapping file and then at the OS/400 font mapping file. To add a user-supplied font, complete the following steps: 1. Use an ASCII text editor to open the psfonts.map file located in /QIBM/UserData/OS400/Fonts. If this file does not exist, you need to create it. 2. Add a new line to the file to include the new font name and associated path and file name, for example: font MyNewFont /QIBM/UserData/OS400/Fonts/PSFonts/MNF.PFB Here, MyNewFont is the name of the font, and MNF.PFB is the associated font file. 3. Save the new psfonts.map file. 4. Copy the font file into the directory specified in the psfonts.map file. To delete a user-supplied font, simply remove the line mapping the font name to its associated file in psfonts.map, and remove the font file from the AS/400 system. 7.7.3 Font substitution When a font requested within a PostScript data stream is not available on the AS/400 system, a font substitution can be defined if there is a similar font available. Font substitution is the mapping of a font name to a font that is available and similar (in terms of its rasterization properties) to the font file being replaced. You can also specify font substitution if existing font mapping is producing undesirable output. To define a font substitution, complete the following steps: 1. Use an ASCII text editor to open the psfonts.map file that is located in /QIBM/UserData/OS400/Fonts. If this file does not exist, you need to create it. 2. Add a new line to the file to include the font name and the path and file name of the font file you want to use as a substitute, for example: font Courier /QIBM/UserData/OS400/Fonts/PSFonts/HEL.PFB 3. Save the new psfonts.map file. 170 IBM AS/400 Printing V 7.8 Troubleshooting The following answers are to questions that may arise when you use the image print transform function or convert image API: • Why does it take so long to process PostScript data streams? 
One reason why PostScript data streams take a long time to process is the amount of information that needs to be transformed. Color documents especially require large amounts of memory and many data conversions, which means longer processing times. Note: If the photometricity of the converted data stream is not requested, it is assumed, by default, to be RGB, or color. However, if you know you do not want RGB, or the data stream is not color, specify an image configuration object that only supports black and white output. This greatly increases the throughput of the image print transform function and speeds up PostScript processing. • Why is the converted data stream positioned incorrectly on or off the page? Why is it not centered? The resolution specified in the image configuration object is probably not supported by the printer with which the object is configured. When this happens, an incorrect no-print border is retrieved from the image configuration object and the data is consequently positioned incorrectly on the output page. The printer may also be set up to automatically add a no-print border, which will cause the output generated by the image print transform function to be shifted on the page. Verify that the correct image configuration object is being used with the printer, and that the printer has been set up properly and has been physically calibrated. • Why did my PostScript data stream not generate a new data stream? Chances are that the PostScript data stream did not contain any printable data. To verify this, check the job log of the writer invoking the image print transform function. Look for a message indicating no printable data was found. If no message exists, an error may have occurred processing the file, in which case, refer to the PostScript processing job for more information. • Why is the printed image three times the original size when converted from color or gray scale to black and white? When a color image or gray scale image is converted to black and white, a dithering process takes place. In this process, a single color or gray scale pixel is transformed into a 3x3 matrix of pixels. Each pixel within this matrix is either black or white, depending on the color being rendered. © Copyright IBM Corp. 2000 171 Chapter 8. Remote system printing Remote system printing allows spooled files created on an AS/400 system to be automatically sent to and printed on other systems. 8.1 Remote system printing overview The source system must be at Version 3.0 Release 1.0 or later to support remote system printing. The spooled files are sent from an output queue using the Start Remote Writer (STRRMTWTR) command. The STRRMTWTR command allows spooled output files to be automatically sent to other systems using SNA distribution services (SNADS), Transmission Control Protocol/Internet Protocol (TCP/IP), or Internetwork Packet Exchange (IPX). A user-defined connection is also supported with all the destination types (DESTTYPE). Figure 122 shows the physical connections and the communications protocols used to connect the remote systems. Figure 122. Remote system printing overview The following parts of remote system printing are already documented with configuration examples, supported data stream tables, and AFP resources considerations in either AS/400 Printer Device Programming, SC41-5713, or AS/400 Printing IV, GG24-4389, and are not discussed in this chapter. 
• AS/400 to AS/400 Version 3 and later
• AS/400 to AS/400 Version 2
• AS/400 to S/390 system
• AS/400 to Print Services Facility/2 (PSF/2)
• AS/400 to RS/6000 (with destination type *OTHER)
Note: See Appendix H, “AS/400 to AIX printing” on page 367, for more information.

8.2 AS/400 system and TCP/IP LPR-LPD printing
You can request to have your spooled file sent and printed on any system in your TCP/IP network. The line printer requester (LPR) is the sending, or client, portion of a spooled file transfer. On the AS/400 system, the Send TCP/IP Spooled File (SNDTCPSPLF) command, the TCP/IP LPR command, or remote system printing provides this function by allowing you to specify what system you want the spooled file printed on and how you want it printed. When sending a spooled file, the host print transform function can also be used to transform SCS or AFPDS spooled files into ASCII. Printing the file is done by the printing facilities of the destination system. The destination system must be running TCP/IP. The line printer daemon (LPD) is the process on the destination system that receives the file sent by the LPR function (Figure 123).

Figure 123. TCP/IP line printer requester: Line printer daemon

The objective of this section is to explain the case when the target printer is connected using an interface such as an IBM Network Printer with a LAN card, an IBM Network Print Server, an HP JetDirect card, or a Lexmark MarkNet XLe.
Note: If the target printer supports PJL/PCL commands and you are at Version 3.0 Release 7.0 or later, we recommend that you connect your printer directly on the LAN with the PJL driver. For detailed information, see 11.2.2, “Configuring LAN-attached ASCII printers using PJL drivers” on page 241.

8.2.1 Creating the output queue
To create the remote output queue for your printer, follow these steps:
1. Type the CRTOUTQ (Create Output Queue) command on any command line and press the F4 (Prompt) function key. The display shown in Figure 124 appears.
Note: The following Create Output Queue displays are at V3R7 and later. Some parameters are not present at V3R1, V3R2, or V3R6.
Create Output Queue (CRTOUTQ) Type choices, press Enter. Output queue . . . . . . . . . . RMT Name Library . . . . . . . . . . . MYLIB Name, *CURLIB Maximum spooled file size: Number of pages . . . . . . . *NONE Number, *NONE Starting time . . . . . . . . Time Ending time . . . . . . . . . Time + for more values Order of files on queue . . . . *FIFO *FIFO, *JOBNBR Remote system . . . . . . . . . *INTNETADR Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Output Queue (CRTOUTQ) Type choices, press Enter. Output queue . . . . . . . . . . > RMT Name Library . . . . . . . . . . . MYLIB Name, *CURLIB Maximum spooled file size: Number of pages . . . . . . . *NONE Number, *NONE Starting time . . . . . . . . Time Ending time . . . . . . . . . Time + for more values Order of files on queue . . . . *FIFO *FIFO, *JOBNBR Remote system . . . . . . . . . > *INTNETADR Remote printer queue . . . . . . 'PASS' More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 174 IBM AS/400 Printing V Figure 126. Create Output Queue (Part 3 of 6) 4. On this display, enter the following parameter values: • Writer to autostart: 1 • Connection type: *IP • Destination type: *OTHER Leave the default values for the other parameters, and press the Enter key to continue. The display shown in Figure 127 appears. Figure 127. Create Output Queue (Part 4 of 6) 5. On this display, leave the default value (*YES) for the host print transform parameter. To continue, press the Enter key. The display shown in Figure 128 appears. Create Output Queue (CRTOUTQ) Type choices, press Enter. Writers to autostart . . . . . . 1 1-10, *NONE Queue for writer messages . . . QSYSOPR Name Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Connection type . . . . . . . . *IP *SNA, *IP, *IPX, *USRDFN Destination type . . . . . . . . *OTHER *OS400, *OS400V2, *PSF2... Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Output Queue (CRTOUTQ) Type choices, press Enter. Writers to autostart . . . . . . *NONE 1-10, *NONE Queue for writer messages . . . QSYSOPR Name Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Connection type . . . . . . . . > *IP *SNA, *IP, *IPX, *USRDFN Destination type . . . . . . . . > *OTHER *OS400, *OS400V2, *PSF2... Host print transform . . . . . . *YES *YES, *NO Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 8. Remote system printing 175 Figure 128. Create Output Queue (Part 5 of 6) 6. On this display, enter the following parameter values: • Manufacturer type and model: Enter a value according your target printer type (in this example, *IBM4317). • Internet address: The IP address of your printer (in this example, 123.1.2.3) Note: The Internet address is only prompted for if *INTNETADR is specified for the remote system. • Destination options: *NONE, see the following section for a discussion of this parameter. • Print separator page: For V3R7 and later, enter *YES or *NO. For V3R1, V3R2, and V3R6, see 8.2.3, “Separator pages” on page 178, for an alternate solution. To continue, press the Page Down key to view the display shown in Figure 129. Figure 129. Create Output Queue (Part 6 of 6) Create Output Queue (CRTOUTQ) Type choices, press Enter. Writers to autostart . . . . . . *NONE 1-10, *NONE Queue for writer messages . . . 
QSYSOPR Name Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Connection type . . . . . . . . > *IP *SNA, *IP, *IPX, *USRDFN Destination type . . . . . . . . > *OTHER *OS400, *OS400V2, *PSF2... Host print transform . . . . . . *YES *YES, *NO Manufacturer type and model . . *IBM4317 Workstation customizing object *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Internet address . . . . . . . . '123.1.2.3' Destination options . . . . . . *NONE Print separator page . . . . . . *YES *YES, *NO More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Output Queue (CRTOUTQ) User defined option . . . . . . *NONE Option, *NONE + for more values Type choices, press Enter. User defined object: Object . . . . . . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . Name, *LIBL, *CURLIB Object type . . . . . . . . . *DTAARA, *DTAQ, *FILE... User driver program . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Text 'description' . . . . . . . Remote output queue for 4317 Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 176 IBM AS/400 Printing V 7. Enter a text description for your output queue (in this example, “Remote output queue for 4317”) and leave the default values for the other parameters (these parameters are V3R7 only). 8. Press the Enter key to create the output queue. 9. When the configuration is completed, complete the following steps: a. Test the TCP/IP connection, using the PING command, with the IP address of your printer. b. If the PING is successful: • Start the remote writer: STRRMTWTR OUTQ(outputq_name) • Print something (for example, a print screen). c. If either the PING fails or you are unable to print, then you are in troubleshooting mode (see 12.1, “Communication, connection, and configuration problems” on page 253). 8.2.2 Destination options When CNNTYPE(*IP) is specified, destination-dependent options are added to the control file that is sent to the LPD server. When CNNTYPE(*IPX) is specified, this field is used to determine how spooled files are handled once they are sent to the remote system. The destination options are up to 128 characters of filters and predefined options enclosed in apostrophes. The options are separated by one or more blanks. Note: Anything that is not recognized as a filter, a predefined option, or a reserved character is passed to the remote system. The following predefined options apply to processing by LPR under OS/400 and are specified in the DESTOPT parameter: • *USRDFNTXT This predefined option sends the current user-defined text of the user profile as options to the remote system. The user-defined text of the user profile can be set using the system application program interface (API) CHGUSRPRTI. The text can be displayed using the system API DSPUSRPRTI or by displaying the spooled file attributes. • *NOWAIT This option is only valid if the connection type *IPX is used. • J This option overrides the default job name for the banner page printed on the remote system, if a banner page is printed at all. The characters immediately following the “J” are used as the job name. For example, to specify a job name of “Id12”, specify: DESTOPT('JId12') • XAIX This option is used in the TCP/IP environment only. This option tells the local AS/400 system how to produce multiple copies on the remote system. Chapter 8. 
Remote system printing 177 If “XAIX” is not specified (the default), one print command per copy requested is placed in the control file. This control file and a single copy of the data is then sent to the remote system. However, some remote systems (similar to most direct LAN attached printers) ignore multiple print commands within the control file. Therefore, the other method might be preferred. If “XAIX” is specified, OS/400 places one print command in the control file and sends it together with the data multiple times to the remote system, depending on the Number of copies parameter of the spooled filed attributes. Note: If the XAIX option has been specified, but the LPD does not support this method, message TCP3701 (Send request failed for spooled file) is returned. However, one copy may still print, depending on the LPD implementation. When the send request fails, the remote writer will try sending again, continuing until the spooled file is held. • XAUTOQ If the connection to the remote system times out during transformation of the spooled file into ASCII by the host print transform, with this option, the transformed spooled file is sent back to the same output queue using the AS/400 LPD server rather than failing with a timeout error. The transformed spooled file name is modified to be unique. Then, since the spooled file is already in ASCII, it is sent directly to the target printer without any transformation and avoids a timeout. Note: When using TCP/IP LPR-LPD and the host print transform function, we recommend that the subsystem QSPL has a minimum size of 6 MB. To implement this function, you need to specify the new DESTOPT parameter, XAUTOQ. On the LPR, or the SNDTCPSPLF, CRTOUTQ, or CHGOUTQ command, the parameter is capitalized. The AS/400 LPD server must also be running. Check for server jobs with the command: WRKACTJOB SBS(QSYSWRK) JOB(QTLPD*) This displays all LPD servers started with names QTLPDnnnnn, where nnnnn are unique identifying numbers. If no servers are displayed or you want to start an additional server, use the command: STRTCPSVR SERVER(*LPD) The number of servers started is determined by the TCP/IP configuration. Starting multiple servers increases their availability since each processes one job at a time. Messages are logged to indicate whether auto-queueing is needed. When the following messages are received, the remote system times out and the transformed spooled file is sent to the same output queue: TCP342F Remote host system closed connection unexpectedly. TCP3600 Spooled file sent. When the following messages are received, the remote system times out and the AS/400 LPD server is not available to receive the transformed spooled file: TCP342F Remote host system closed connection unexpectedly. TCP3701 Send request failed for spooled file. When the transformed spooled file is sent to the original output queue, the spooled file name is changed to LPDzzzz to indicate that it was received by LPD. zzzz indicates identifying alphanumeric characters. The job name is 178 IBM AS/400 Printing V changed to QPRTJOB, and the user data is set to the original file name. The job number and spooled file number are determined from the LPD server. The original spooled file can be kept or deleted. This occurs after the file has been sent, even if it is sent back to the original queue. If the transformed file is sent to the original queue, it is deleted after it has been sent to the remote system. 
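As a sketch of how these predefined options can be combined, the following command adds both the XAUTOQ option and a J job-name override to the output queue created in 8.2.1 (the combination shown is only an illustration; specify whichever options your environment actually needs):
CHGOUTQ OUTQ(MYLIB/RMT) DESTOPT('XAUTOQ JId12')
The options are separated by one or more blanks inside the apostrophes, and the same string can be supplied on the DESTOPT parameter of the CRTOUTQ, LPR, or SNDTCPSPLF commands.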
Most of the commands and filters of the line printer daemon protocol can be specified in the DESTOPT parameter, but some are reserved for use by LPR. These exceptions are: • Supported print filters: Any option starting with one of the following characters is interpreted as a print filter. This character is built into the print command sent to LPD in the control file. The filter is for use by the LPD daemon to modify the printed output, but how this is used depends on the LPD implementation. c, d, f, g, l, n, p, r, t, and v The meaning of some flags are: – f: Print file as plain text. Many ASCII control characters may be discarded (except BS, CR, HT, FF, and LF). – l: This flag is the default. It keeps all ASCII control characters. – p: This filter causes the file to be printed in “pr” format. It prints headings (date, time, title, etc.) and page numbers. • Reserved characters: There are also some reserved characters. These character are used by the SNDTCPSPLF command for the control file. An option must not start with one of the following characters: K, C, H, I, L, M, N, P, S, T, U, W, 1, 2, 3, and 4 For example, CLASS=ASCII is not allowed because the character “C” is reserved for use by the SNDTCPSPLF command. However, “-CLASS=ASCII” is permitted. The meaning of some characters are: – H: Name of the sending host, set by LPR to the AS/400 configured name. – L: Print banner page command, added by LPR if print separator page *YES is specified (default). – M: Send mail to a given user ID when printed (not supported). The meaning of the previous commands and filters can be found in the line printer daemon protocol reference documentation. This documentation has no IBM form number, but can be found in several places on the Internet. 8.2.3 Separator pages The Print separator page parameter is only available on V3R7 and later. If you are running V3R1, V3R2, or V3R6, you can suppress the separator page by creating data area QTMPLPR of type *CHAR in library QTCP. Specify an authority of *USE to prevent normal users from changing the data area: CRTDTAARA DTAARA(QTCP/QTMPLPR) TYPE(*CHAR) AUT(*USE) Chapter 8. Remote system printing 179 Note: This task must be performed by someone who has at least *CHANGE authority to the QTCP library. The option to omit the separator page request is turned on or off based on the value of the first character in the data area. If this character is a capital N, the option is enabled. If it is any other character, the option is disabled. If the data area does not exist, the option is disabled. • To enable (suppress the separator page) enter: CHGDTAARA DTAARA(QTCP/QTMPLPR (1 1)) VALUE('N') • To disable (print the separator page) enter: CHGDTAARA DTAARA(QTCP/QTMPLPR (1 1)) VALUE(' ') 8.2.4 ‘Load Letter’ message on the printer If the host print transform function is used and if the page size parameter in your printer file does not match a page size entry in the MFRTYPMDL (Manufacturer type, and mode) or WSCST (Workstation Customizing) object, Letter format is used as the default format. In this case, if the printer is loaded with a paper format other than Letter, the message “Load Letter” may be displayed on your printer. This problem occurs especially when using an A4 paper format. To circumvent the problem, complete the following steps according to your OS/400 version and release, substituting your own values for the various parameters: 8.2.4.1 For V3R1 and V3R6 Follow these steps: 1. 
To retrieve the workstation customized object, type the following command: RTVWSCST DEVTYPE(*TRANSFORM) MFRTYPMDL(*IBM4317) SRCMBR(NP17SRC) SRCFILE(QGPL/QTXTSRC) Note: For the MFRTYPMDL parameter, enter a value depending on your target printer (in this example, *IBM4317), and use your own values for SRCMBR and SRCFILE. 2. Use SEU to edit the source file: STRSEU SRCFILE(QGPL/QTXTSRC) SRCMBR(NP17SRC) 3. Page through the source file until you find the following tag (around statement 0001.67): :PAGESIZE PAGWTH=12240 PAGLEN=15840 DATA='1B266C303241'X. 4. Change the escape sequence in the DATA parameter to: DATA='1B266C323641'X. Note: This changes the value Letter ('3032') to A4 ('3236'). 5. Exit the SEU source edit (press F3 and Enter). 6. To create a customized workstation configuration object, type the following command: CRTWSCST WSCST(QGPL/NP17A4) SRCMBR(NP17SRC) 180 IBM AS/400 Printing V You will receive the message “Customization object NP17A4 created successfully”. 7. Stop the remote writer: ENDWTR WTR(outputq_name) OPTION(*IMMED) 8. To change the output queue, enter the CHGOUTQ command, and press the F4 (Prompt) function key. Then page down until you find the parameters shown in Figure 130. Figure 130. Change Output Queue: HPT and WSCST parameter On this display, enter the following parameter values: • Manufacturer type and model: *WSCST • Workstation customizing object: NP17A4 (the object that you created with the command CRTWSCST) • Library: QGPL (the library specified in the CRTWSCST command) 9. Press the Enter key to modify your output queue. 8.2.4.2 For V3R2, V3R7, and later Follow these steps: 1. To retrieve the workstation customized object, type the following command: RTVWSCST DEVTYPE(*TRANSFORM) MFRTYPMDL(*IBM4317) SRCMBR(NP17SRC) SRCFILE(QGPL/QTXTSRC) Note: For the MFRTYPMDL parameter, enter a value depending on your target printer (in this example, *IBM4317), and use your own values for SRCMBR and SRCFILE. 2. Create a customized workstation configuration object: CRTWSCST WSCST(QGPL/NP4317) SRCMBR(NP17SRC) You will receive the message “Customization object NP4317 created successfully”. 3. Stop the remote writer: ENDWTR WTR(outputq_name OPTION(*IMMED) 4. To change the output queue, enter the CHGOUTQ command, and press the F4 (Prompt) function key. Then page down until you find the parameters shown in Figure 131. Change Output Queue (CHGOUTQ) Type choices, press Enter. ......................... ....... ........ ......................... ....... ........ Host print transform . . . . . . *YES *YES, *NO Manufacturer type and model . . *WSCST Workstation customizing object NP17A4 Name, *NONE Library . . . . . . . . . . . QGPL Name, *LIBL, *CURLIB ......................... ....... ........ ......................... ....... ........ F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 8. Remote system printing 181 Figure 131. Change Output Queue: HPT and WSCST parameter On this display, enter the following parameter values: • Manufacturer type and model: *WSCSTA4 • Workstation customizing object: NP4317 (the object that you created with the command CRTWSCST) • Library: QGPL (the library specified in the CRTWSCST command) 5. Press the Enter key to modify your output queue. 8.3 AS/400 and NetWare printing Beginning with Version 3.0 Release 7.0 of OS/400, remote system printing can now send spooled files to a NetWare server using the Internetwork Packet Exchange (IPX) protocol. 
The NetWare server can be either on the Integrated PC Server or a PC. When you have the Enhanced Integration for NetWare feature (an optional part of OS/400 (5716-SS1 for V3R7 or 5769-SS1 for V4R1 and V4R2)), you can print from the AS/400 system to NetWare printers that use the standard NetWare print support. NetWare uses a print queue, a print server, and a printer to allow a workstation to print to a network printer. The print queue is the object that temporarily holds the print job file until the job is printed. See Figure 132 for an illustration of the AS/400 system to NetWare printing process. Figure 132. AS/400 system to NetWare printing As each user's spooled job is processed on the output queue, the AS/400 system authenticates a connection for the user to the appropriate server. Each user must Change Output Queue (CHGOUTQ) Type choices, press Enter. ......................... ....... ........ ......................... ....... ........ Host print transform . . . . . . *YES *YES, *NO Manufacturer type and model . . *WSCSTA4 Workstation customizing object NP4317 Name, *NONE Library . . . . . . . . . . . QGPL Name, *LIBL, *CURLIB ......................... ....... ........ ......................... ....... ........ F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys IPX STRRMTWTR to Output Queue RMTNW Printer Remote Output Queue: RMTNW NetWare Server Print Queue Spool File Spool File Spool File Spool File 182 IBM AS/400 Printing V have a NetWare authentication entry or use the Start NetWare Connection (STRNTWCNN) command to start a NetWare connection manually. The Add NetWare Authentication Entry (ADDNTWAUTE) command adds authentication information for a server to a user profile. The information specifies how the user signs on to the server. This information is used to start authenticated connections to servers. An authenticated connection to a server is required to issue requests to the server. If an authenticated connection does not exist, the system attempts to start a connection using data stored in the authentication entries. Note: Ideally, each user has an authentication entry authorizing them to the specified NetWare print queue. If users do not have an authentication entry, they must specify AUTJOB(*ANY) on the STRNTWCNN command. 8.3.1 Preparing for remote system printing Preparation work must be done on both the source system (AS/400 system) and target system (NetWare server) for remote system printing to work. The following list shows what must be present or created before remote system printing can be used: • On the AS/400 system Version 3.0 Release 7.0 or later, ensure that the Enhanced Integration for NetWare is installed. • On the AS/400 system, configure and start Internet Packet Exchange (IPX) configuration support. For IPX configuration, see Internet Packet Exchange (IPX) Support, SC41-3400. • On the NetWare Server, load the NetWare Enhanced Integration NLM. The file to be loaded is AS4NW410 for NetWare 4.10, or AS4NW312 for NetWare 3.12 servers. • On the AS/400 system, use the STRNTWCNN AUTJOB(*ANY) command to connect to the NetWare server, or use the ADDNTWAUTE command if you want to start the STRNTWCNN automatically. • On the NetWare server, ensure the NetWare User specified on the STRNTWCNN or ADDNTWAUTE command is a valid NetWare user. • On the AS/400 system, use the CRTOUTQ command to create the remote output queue for NetWare printing. 
• On the NetWare server, ensure the NetWare queue exists on a volume of a server that runs the NetWare Enhanced Integration NLM. 8.3.2 Creating an output queue To create the remote output queue, type the Create Output Queue (CRTOUTQ) command on any command line, and press the F4 (Prompt) function key. The display shown in Figure 133 appears. Chapter 8. Remote system printing 183 Figure 133. Create Output Queue (Part 1 of 2) On this display, enter the following parameter values: • Output Queue: The name of your output queue (in this example, RMTNTW). • Library: A library name (in this example, MYLIB). • Remote system: For DESTTYPE(*NETWARE3), specify the name of the server for the Remote System parameter value. For DESTTYPE(*NDS), you can specify either the name of the tree or the special value *NWSA for the remote system parameter. If you use *NWSA, the tree name is from DSPNWSA OPTION(*NETWARE). In this example, we use DESTTYPE(*NDS) and the remote system name is the tree name IBM_TREE1. • Remote printer queue: For DESTTYPE(*NETWARE3), specify the name of the server for the Remote Printer Queue parameter value. For DESTTYPE(*NDS), the Remote Printer Queue parameter can be a distinguished name that begins with a period. If the name does not begin with a period, the name is a partial name and is used in conjunction with the NDS context specified in the system network server attributes (DSPNWSA) to form the distinguished name of the NetWare print queue. In this example, we use DESTTYPE(*NDS), and the Remote Printer Queue parameter is a distinguished name that begins with a period (.NTW_QUEUE.ASPRT.NTWHP). To continue, press the Page Down key until the display, like the example shown in Figure 134 on page 184, appears. Create Output Queue (CRTOUTQ) Type choices, press Enter. Output queue . . . . . . . . . . > RMTNTW Name Library . . . . . . . . . . . MYLIB Name, *CURLIB Maximum spooled file size: Number of pages . . . . . . . *NONE Number, *NONE Starting time . . . . . . . . Time Ending time . . . . . . . . . Time + for more values Order of files on queue . . . . *FIFO *FIFO, *JOBNBR Remote system . . . . . . . . . > IBM_TREE1 Remote printer queue . . . . . . .NTW_QUEUE.ASPRT.NTWHP More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 184 IBM AS/400 Printing V Figure 134. Create Output Queue (Part 2 of 2) Complete the parameters as shown in this list: • Writer to autostart: 1 • Connection type: *IPX • Destination type: *NETWARE3 or *NDS - (in this example, *NDS) • Host print transform: *YES • Manufacturer type and model: Enter a value according to your target printer type (in this example, *IBM4039HP) • User-defined option: *NONE, *NOWAIT, *BANNER • *NOWAIT: The spooled file is removed from the AS/400 queue as soon as the entire file is sent to NetWare queue. If you do not select this option, the spooled file remains in the AS/400 output queue until the file is removed from the NetWare queue, which occurs either when the file is printed or when a NetWare utility is used to delete it. • *BANNER='text': Specify up to 12 characters that you want to print on a NetWare banner page. The banner page, which precedes the NetWare print job, also prints the user name. Note: You must type *BANNER in uppercase letters. Enclose the text in single quotes, and make sure there are no spaces before and after the equal sign. Press the Enter key to create the RMTNTW remote output queue. Create Output Queue (CRTOUTQ) Type choices, press Enter. 
Writers to autostart . . . . . . 1 1-10, *NONE Queue for writer messages . . . QSYSOPR Name Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Connection type . . . . . . . . > *IPX *SNA, *IP, *IPX, *USRDFN Destination type . . . . . . . . > *NDS *OS400, *OS400V2, *PSF2... Host print transform . . . . . . *YES *YES, *NO Manufacturer type and model . . *IBM4039HP Workstation customizing object *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Destination options . . . . . . *NONE User defined option . . . . . . *NONE Option, *SAME, *NONE + for more More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys © Copyright IBM Corp. 2000 185 Chapter 9. Client Access/400 printing This chapter covers printing in the Client Access for Windows 95/NT environment. In this environment, it is possible to print PC application output on an AS/400 printer, AS/400 application output on a PC printer, or, by using a combination of these functions, print PC application output on another PC printer. 9.1 Client Access/400 printing overview The ability to use 5250 printer emulation over native TCP/IP connections was introduced with Client Access for Windows 95/NT Version 3 Release 1 Modification 3 when OS/400 Version 4 Release 2 was available. When using AS/400 Client Access for your printing needs, two different types of printing capabilities are provided: • Printing PC application output to SCS, IPDS, or ASCII printers attached to the AS/400 system: This function is called Network Printing (previously called Virtual Print). It allows PC users to identify AS/400-attached printers as their network attached printer. Client Access/400 provides SCS and AFP Printer Drivers, which convert PC application output from ASCII to EBCDIC if the target printer is an SCS or IPDS printer. This conversion occurs on the PC before the spooled file is placed in an AS/400 output queue. Note: The application output type also determines which driver, SCS or AFP, can be used. Windows drivers have to be used if the target printer is an ASCII printer. In this case, the spooled file in the AS/400 output queue is shown with a *USERASCII Device Type (DEVTYPE) attribute. • Printing AS/400 application output on a PC-attached printer: In this case, AS/400 spooled files in an SCS or an AFP data stream must be converted into an ASCII printer data stream depending on the target PC printer. This conversion can be done by one the following methods: – PC5250 emulation based on a Windows printer driver: The transformation takes place on the PC, and only SCS spooled files can be converted. No customization is possible. – PC5250 emulation using Printer Definition Tables (PDT): The transformation takes place on the PC, and only SCS spooled files can be converted. Printer functions can be adapted by modifying the Printer Definition Table (PDT). The modified PDT must be available on all PCs using the printer. – OS/400 host print transform: The transformation takes place on the AS/400 system. SCS and AFPDS spooled files can be converted. Customization is possible by modifying the Work Station Customizing (WSCST) object. The same WSCST object is used for all printers of a similar type. 186 IBM AS/400 Printing V Note: For detailed information on host print transform, see Chapter 6, “Host print transform” on page 137. Redirecting PC application output via the AS/400 system to another PC printer in the network is a combination of the previous two capabilities. 
PC-generated output is sent to an AS/400 output queue in an ASCII printer data stream and then printed on a Client Access/400 attached ASCII printer. This brings the AS/400 spooling capabilities to PC application output. 9.2 Client Access/400 Network Printing The Client Access/400 Network Printing (previously named virtual print) function allows you to print from a PC application to a printer attached somewhere in the network that is defined to an AS/400 system. The following examples of AS/400-attached printers can be used as target printers: • SCS printers, twinax attached • IPDS printers, configured AFP(*NO) or AFP(*YES), twinax or LAN attached • ASCII printers, attached to PCs, displays, or LAN attached For more information on printer attachment methods, see 1.4, “AS/400 printer attachment methods” on page 15. 9.2.1 Configuring an AS/400 printer to Windows 95 This example shows all the necessary steps to configure an AS/400-attached printer to Client Access for Windows 95/NT. Windows 95 was used for this example. 1. Start the Add Printer wizard. The wizard can be started in several ways, for example: • Open the folder My Computer->Printers, and double-click the Add Printer icon. • Click Start->Settings->Printers, and double-click the Add Printer icon. 2. The Add Printer Wizard window is shown. On this window, click Next. The window shown in Figure 135 appears. Chapter 9. Client Access/400 printing 187 Figure 135. Defining the attachment method of the printer 3. Click the Network printer radio button, and then click Next. The window shown in Figure 136 appears. Figure 136. Network path or queue name 4. Click Browse to find the AS/400 system to which the printer is attached. The window shown in Figure 137 on page 188 appears. 188 IBM AS/400 Printing V Figure 137. Browse for Printer (Part 1 of 2) 5. On the Browse for Printer window, select the AS/400 system by clicking the + (plus) sign. The list of the printers attached to the selected AS/400 system as shown in Figure 138 appears. Figure 138. Browse for Printer (Part 2 of 2) 6. Select the printer you want to use, and click OK. Chapter 9. Client Access/400 printing 189 Note: Instead of browsing the network, you can directly enter the network path or queue name. In this case, enter on of the following options: \\Systemname\Printername \\Systemname\Printername;Profilename \\Systemname\/OutqueueLibraryname/Outqueue \\Systemname\/OutqueueLibraryname/Outqueue;Profilename 7. The Add Printer Wizard window (Figure 139) shows the path to the printer. Click Next. Note: If you do not need to print from DOS-based programs, click Next. Otherwise, click Yes. For the Capture Printer Port, select the LPT port. Then, click OK and Next. This is only necessary when a PC application cannot print directly to a Windows 95/NT printer driver. Figure 139. Path to the printer 8. The window now lets you choose the manufacturer, type, and model of the printer (Figure 140 on page 190). When selected, click Next. Note: These drivers need to be installed in Client Access. If they are not there, use Selective Install via Client Access to install them. 190 IBM AS/400 Printing V Figure 140. Manufacturer and model of the printer 9. On the window shown in Figure 141, confirm the supplied printer name or change it. Also, specify if you want to use it as default printer for the Windows applications. The default value is “No”. Then click Next. Figure 141. 
Installing as a default printer 10.The Add Printer Wizard window displayed allows you to print a test page on the selected printer. To print it, click Finish. 11.Then you see a window asking you if the test page printed correctly. Click Yes or No depending on the output received. This ends the configuration. Chapter 9. Client Access/400 printing 191 9.2.2 Network printer setup Once you have installed a network printer with the default options, you may need to configure it further. The following example was performed using Client Access/400 for Windows 95/NT Version 3 Release 1 Modification 3 with Windows 95: 1. Right-click the printer icon, and select Properties from the pop-up menu. 2. On the General page, you can enter a comment that is visible when you share the printer with other users and when they set up your printer. You can also print a test page from there. 3. The Details page of the printer properties notebook (Figure 142) is mainly used to select a driver. Choose or configure the port, and set the spooling options. Figure 142. Printer properties: Details page for Windows 95 4. The last page of the properties notebook is labeled Options (Figure 142). On this page, you can define the AFP driver options. See Chapter 5, “The IBM AFP Printer Driver” on page 117, for detailed information on the AFP driver. 9.2.3 AS/400 print profile The following example, based on Client Access/400 for Windows 95/NT Version 3 Release 1 Modification 3 with Windows 95, shows the steps required to add or change an AS/400 print profile: 1. Select the Client Access icon, and then the Client Access Properties icon. On the Client Access Properties window, select Printer profiles. 2. On the Printer Profiles windows, you can add a new profile or modify an existing one (for example, the Default AS/400 Print Profile). 192 IBM AS/400 Printing V Click Add, or select an existing profile and click Change. In this case, the window shown in Figure 143 appears. Figure 143. Adding an AS/400 print profile 3. On the Change or Add AS/400 Print Profile, you can specify the following values: a. Type a descriptive name for your profile if you are in the Add AS/400 Print Profile window. b. The Type of data parameter allows you to specify in which data stream the data is sent to the AS/400 system. You can select one of the following options: • Auto-select: The data type is automatically selected. • Use printer file: The data type specified in the DEVTYPE (Device Type) parameter of the default or user-specified AS/400 printer file is used. In this case, the DEVTYPE parameter must be *SCS, *AFPDS, or *USERASCII. • SCS: A spooled file of type *SCS (SNA Character String) is generated. • AFPDS: A spooled file of type *AFPDS (Advanced Function Printing Data Stream) is generated. • User ASCII: A spooled file of type *USERASCII (User ASCII) is generated. Chapter 9. Client Access/400 printing 193 Considerations on data type selection • If your target printer is an ASCII printer, specify User ASCII. This will avoid any further transformations. • If the application output is graphical (such as output from Microsoft Word, AmiPro, Freelance) and must be printed on an IPDS printer configured AFP(*YES), specify data type AFPDS. Note: Even if host print transform can transform AFPDS to ASCII, specify User ASCII if the target printer is an ASCII printer. • If the application output is text only, specify data type SCS if the target printer is SCS or IPDS configured AFP (*YES) or (*NO). 
Note: Even if host print transform can transform SCS to ASCII, specify User ASCII if the target printer is an ASCII printer.
Table 17 may help you choose the correct type of data.
Table 17. Recommended data types and drivers

Output type        Target printer                   Type of data (AS/400 print profile)   Printer driver (properties)
Text or graphics   ASCII printer                    User ASCII                            Windows printer driver
Text               SCS or IPDS AFP(*YES) or (*NO)   SCS                                   IBM SCS xxxx Driver
Graphics           IPDS AFP(*YES)                   AFPDS                                 IBM AFP xxxx Driver

c. Select the Transform ASCII to SCS box if you have a file containing ASCII data that you want to print on an SCS printer. The Transform ASCII to SCS option is a simple ASCII to EBCDIC conversion with some basic SCS commands such as carriage return and line feed. It was designed to print text and cannot handle graphics.
d. The printer file used on the AS/400 system can be specified or changed. You can use the Browse button to search the AS/400 system for the printer file.
9.2.4 Considerations on Client Access/400 Network Printing
Redirecting printed output from PC applications to the AS/400 system has a number of benefits for PC users:
• The ability to use powerful OS/400 spool management functions such as printing a page range or saving the spooled file after printing.
• Use of powerful high-speed printers, including IPDS printers with full error recovery functions to avoid data loss.
• Producing output in the device-independent AFP data stream for printing and archiving.
• Using standard company-wide AFP resources such as overlays and page segments.
9.3 Printing AS/400 output on a PC printer
An AS/400 application generates an SCS, IPDS, or AFPDS data stream for printing. Because PC-connected printers are ASCII printers that support data streams such as PPDS, PCL/3, or PCL/5, the spooled files produced by AS/400 applications have to be transformed into the appropriate data stream for the PC printer.
Note: AS/400 IPDS spooled files cannot be transformed into ASCII.
There are three ways to achieve this conversion:
• OS/400 host print transform: The transformation takes place on the AS/400 system; SCS and AFPDS spooled files can be converted. Customization is possible by modifying the Workstation Customizing (WSCST) object. The same WSCST object is used for all printers of a similar type.
• PC5250 emulation based on a Windows printer driver: The transformation takes place on the PC, and only SCS spooled files can be converted. No customization is possible.
• PC5250 emulation using Printer Definition Tables (PDT): The transformation takes place on the PC, and only SCS spooled files can be converted. Printer functions can be adapted by modifying the Printer Definition Table (PDT). The modified PDT must be available on all PCs using the printer.
The following sections include configuration examples. The environment we used was:
• Windows 95
• Client Access for Windows 95/NT Version 3 Release 1 Modification 3
• OS/400 Version 4 Release 2
• Automatic configuration of devices on the AS/400 system was turned on.
Using automatic device configuration, the printer device description is based on the configuration on the PC. Any changes made manually to the device description on the AS/400 system are overwritten when the session is started on the PC. Parameters that are not sent by the PC, such as the image object name, are kept by the host. This is a reason to use a unique workstation ID, which is also used as the device description name.
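If you want to check or adjust the host print transform attributes that result from this automatic configuration, the device description can be displayed and changed with CL. The following is a minimal sketch only; the device name (PRT22) and the manufacturer type and model are example values taken from the emulation example later in this chapter, the device must be varied off before it can be changed, and, as noted above, values sent by the PC override manual changes the next time the session is started:

   DSPDEVD DEVD(PRT22)
   CHGDEVPRT DEVD(PRT22) TRANSFORM(*YES) MFRTYPMDL(*IBM4039HP) WSCST(*NONE)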
Note: Before a printer emulation session can be configured, at least one printer must be defined to Windows. 9.3.1 Configuring a printer emulation session This example describes how to configure a printer emulation session for use with or without the host print transform function. It assumes that you have already configured a Client Access connection to the AS/400 system via SNA or TCP/IP. Client Access Version 3 Release 1 Modification 3 allows native TCP/IP printer emulation sessions in addition to SNA. 1. Start the configuration program by selecting the Start or Configure Session icon in the Client Access Accessories folder. Chapter 9. Client Access/400 printing 195 A welcome window is shown. Click OK. A window appears like the one shown in Figure 144. Figure 144. Configuring IBM Personal Communications 5250 2. On the Configure PC5250 window, complete these steps: a. Select the AS/400 system. b. Select the Printer option for Type of emulation. c. A name for the printer (workstation ID) can be given. This name appears on the AS/400 system as the printer device name and output queue name. Note: If this field is left blank, an ID based on the currently active session number is given when establishing the session. d. Select the Host code-page used. This information is used to transform the EBCDIC characters sent from the AS/400 system to the corresponding ASCII code points. e. Click Setup to continue. A window appears like the example shown in Figure 145 on page 196. 196 IBM AS/400 Printing V Figure 145. Printer emulation setup with host print transform 3. On the PC5250 Printer Emulation Setup window, specify: a. The AS/400 message queue to be used with the library name. b. The Courier 10 font is used if FONT(*DEVD) is specified. c. If you want to have the data stream conversion done by PC5250 rather than by the OS/400 host print transform function, skip substeps d through h. d. Select the Transform print data to ASCII on AS/400 box. e. Specify a printer model. f. Specify a paper size for drawer 1, drawer 2, and envelope hopper. g. Do not select Printer supports ASCII code page 899 (this code page is not standard on ASCII printers, and usually requires a special font cartridge). h. Leave the default value for Customizing Object and Library. This results in a device description on the AS/400 system with the following parameters values: DEVD PRT22 DEVCLS *VRT TYPE 3812 AFP *NO CTL QVIRCD0001 FONT 011 TRANSFORM *YES MFRTYPMDL *IBM4039HP WSCST *NONE Chapter 9. Client Access/400 printing 197 If the use of a workstation customization table is required, select Other Printer as the Printer model and specify the name of your customized WSCST object in the Customizing Object field and the library name. This results in the following parameter values in the printer device description: DEVD PRT22 DEVCLS *VRT TYPE 3812 AFP *NO CTL QVIRCD0001 FONT 011 TRANSFORM *YES MFRTYPMDL *WSCST WSCST MYWSCST <--- The name of your customized object *LIBL Note: If both WSCST and a printer model are specified, the workstation customizing object is ignored. We recently changed this to allow any of the WSCSTLETTER and other WSCST* objects, which indicate the paper type to be used with the Customization Object specified. The *OTHER object is no longer allowed. 4. Click OK to return to the previous window. 5. Click OK to start the printer emulation session. Windows appear, such as the examples shown in Figure 146. The printer can be started or stopped from this window. Figure 146. Printer session window 6. 
To save the session and create an icon, click File and Save as.... The window shown in Figure 147 on page 198 appears. 198 IBM AS/400 Printing V Figure 147. Save Workstation Profile as 7. Enter a name for the configuration file (in this example, PRT22), and click OK. The window shown in Figure 148 appears. Figure 148. Create printer session icon 8. Click Yes to create an icon for the printer session. The Browse for Folder window shown in Figure 149 appears. Chapter 9. Client Access/400 printing 199 Figure 149. Selecting a destination for the icon 9. Click the destination (folder/desktop) where the icon should be placed, and click OK. The window shown in Figure 150 appears. Figure 150. Icon information 10.Click OK to create the icon. 11.Now a printer defined to Windows can be connected to this session. Click File and Printer Setup from the printer emulation session window. The window shown in Figure 151 on page 200 appears. 200 IBM AS/400 Printing V Figure 151. Selecting a printer 12.Click the printer you want to use, and then click OK to end the configuration. Note: If no printer is connected with a printer emulation session, the Windows default printer is used, and the window shown in Figure 152 appears. Figure 152. Using the default printer 9.3.2 Modifying and using a printer definition table (PDT) This example assumes that a printer emulation session is already configured and working. For a description of how to configure a printer emulation session, follow the instructions in 9.3.1, “Configuring a printer emulation session” on page 194, and do not specify host print transform. Printer definition tables (PDTs) can be used to override host formatting (done through the SCS commands), or to initialize the printer independent of the SCS formatting. The steps to modify one are: 1. Create or change a printer definition file (PDF). 2. Convert the printer definition file to a printer definition table. PDFs can be modified with any editor on your PC. They consist of macro definitions that specify how to convert the SCS code to ASCII strings. Many PDFs and PDTs come with Client Access. More details and a list of functions available can be found in the Client Access/400 Personal Communications 5250 Reference Guide, SC41-3553. Chapter 9. Client Access/400 printing 201 The following example shows how to change a PDF, create the PDT, and how to configure the PC5250 session to use the PDT: 1. Select a printer definition file for modifying. In most cases, an existing PDF is selected for modification. The PDF, which is closest to the functionality of the physical printer used, should be copied and then edited. In this example, the HPLJ4.PDF file has been copied to the I4039HP.PDF file. The path for the PDF files for a default Client Access installation is: \program files\ibm\client access\emulator\pdfpdt You can search for the PDF files in a separate subdirectory named PDFPDT. Figure 153. 
I4039HP.PDF table (partial) /**********************************************************************/ /* */ /* PRINTER SESSION DEFINITION FILE FOR: HP LaserJet 4 */ /* */ /**********************************************************************/ BEGIN_MACROS NUL EQU 00 /* Nul character */ BAK EQU 08 /* Back Space */ TAB EQU 09 /* Tab */ LFF EQU 0A /* Line Feed */ FFF EQU 0C /* Form Feed */ CRR EQU 0D /* Carriage Return */ P12 EQU 1B 26 6B 34 53 /* 12 Pitch-Characters/Inch */ P10 EQU 1B 26 6B 30 53 /* 10 Pitch-Characters/Inch */ ESC EQU 1B /* Escape */ SPA EQU 20 /* Space */ P17 EQU 1B 26 6B 32 53 /* 16.7 Pitch-Characters/inch */ CS1 EQU 1B 28 38 55 /* Roman 8 char set 1 */ CS2 EQU 1B 29 38 55 /* Roman 8 char set 2 */ EC1 EQU 1B 28 35 4D /* PS Math Symbol Set */ EC2 EQU 1B 29 30 4E /* ECMA-94 Latin 1 char set 2 */ PC1 EQU 1B 28 30 4E /* PC-8 (IBM US) char set 1 */ PC2 EQU 1B 29 30 4E /* PC-8 (IBM US) char set 2 */ ............................................ ............................................ NOR EQU 1B 45 /* Normal background-foreground*/ SFG EQU 1B 28 73 /* */ END_MACROS /**********************************************************************/ /* Session Parameters */ /**********************************************************************/ MAXIMUM_PAGE_LENGTH=060 /* Printed lines per page */ MAXIMUM_PRINT_POSITION=080 /* Printed characters per line */ INTERV_REQ_TIMER=001 HORIZONTAL_PEL=0720 /* */ ............................................ ............................................ MIDDLE_DOT_ACCENT = B7 ONE_SUPERSCRIPT = B9 NUMBER_SIGN = 70 THREE_SUPERSCRIPT = B3 TWO_SUPERSCRIPT = B2 REQUIRED_SPACE = 20 /**********************************************************************/ /* Internal Data Area. */ /* Do not change these statement. */ /**********************************************************************/ PRINTER_ID=99 99 /**********************************************************************/ /* End of Definition File */ /**********************************************************************/ 202 IBM AS/400 Printing V In the example shown in Figure 153, we made two changes to the PDF: • The following line: EC1 EQU 1B 28 30 4E /* ECMA-94 Latin 1 char set 1 */ has been changed to: EC1 EQU 1B 28 35 4D /* PS Math Symbol Set */ to use the PS Math Symbol Set instead of the Latin 1 Symbol Set. • The following entry: NUMBER_SIGN=23 has been changed to: NUMBER_SIGN=70 to print the mathematical symbol “Pi” instead of the number sign. 2. Convert the PDF to a PDT Select File from the pull-down menu of the printer emulation session. Then select Printer Setup, and the window shown in Figure 154 appears. Note: Converting a PDF to a PDT can be done from any emulation window. In this example, we are going to use the converted PDT with the printer emulation session, so we do the conversion from that emulation session. Figure 154. Printer Setup window 3. Select the Use PDT box, and click Select PDT.... The Select PDT file window shown in Figure 155 appears. Chapter 9. Client Access/400 printing 203 Figure 155. Select PDT file 4. Click Convert PDF..., and the Convert PDF to PDT window shown in Figure 156 appears. Figure 156. Convert PDF to PDT 5. Select the modified PDF, and click Convert. The PDF File Converter window shown in Figure 157 on page 204 appears. 204 IBM AS/400 Printing V Figure 157. PDF File Converter 6. If compilation is successful, click Close, and the Convert PDF to PDT window is shown again. 7. On the Convert PDF to PDT window, click Close, and the Select PDT File is shown. 
The converted PDT is highlighted. Click OK, and the Printer Setup window is shown. 8. On the Printer Setup window, click OK to end the configuration. Note: It is not necessary to restart the session with the AS/400 system. The newly converted PDT takes effect immediately. © Copyright IBM Corp. 2000 205 Chapter 10. IBM AS/400 network printers There is a wide range of IBM AS/400 network laser printers. The current printer line includes: • IBM Network Printer 12 • IBM Network Printer 17 • IBM Infoprint 20 • IBM Infoprint 21 • IBM Infoprint 32 • IBM Infoprint 40 • IBM Infoprint Color 8 This chapter explains how you can maximize printer effectiveness when it is attached to an AS/400 system. IBM Network Printer 17 was used for this illustration, but the highlighted features generally apply to all the monochrome network printers. Note: For the latest setup and configuration reference, click the Publications link at: http://www.ibm.com/printers 10.1 Overview There are a number of shared characteristics that make IBM AS/400 network printers a good choice for AS/400 and network environments, including: • 600 and 1200 dpi resolutions • Multiple active physical attachments • Data stream auto-sensing • Writer sharing to switch between network clients and servers, and AS/400 writers The newest member of the IBM AS/400 network printer family is IBM Infoprint 21. This printer adds several important additional features that are key to printing in a network environment, including: • It supports Internet Printing Protocol (IPP), which enables you to reference and print to a printer via a URL. • An embedded Web server within the printer enables access to the printer from any Web client. This provides the capability to view printer information and to manage the printer directly from any Web browser. • IBM Homerun printer controller provides the capabilities of the Advanced Function Common Controller (AFCC) used in much larger IBM AS/400 printers. IBM AS/400 network printers make ideal workgroup, distributed, or small system printers within AS/400 network environments. An overview of the principal supported attachments, protocols, and data streams is shown in Figure 158 on page 206. Although they may be attached using conventional means, such as twinaxial cable or parallel cable, their greatest flexibility is realized when they are TCP/IP LAN-attached. When installed on a Token-Ring or Ethernet LAN, they can receive data from a variety of host systems as well as PC clients on the LAN. Network 206 IBM AS/400 Printing V management software, in the form of IBM Network Printer Manager, may be used to monitor and maintain the printers, either across the LAN or through the World Wide Web. This is discussed in 10.4.1, “Network Printer Manager” on page 215. Figure 158. Network printer connectivity 10.2 Configuration scenarios This section outlines simple and advanced uses of network printers. 10.2.1 Example 1: LAN-attached IPDS printer Here an NP17 has been attached to an AS/400 system through Ethernet (Figure 159). The printer is used in the Accounting department of a business, printing variable data with electronic forms (overlays) sent with the data from the AS/400 system. The printer is configured as type *IPDS, AFP=*YES. Protocols Attachments Datastreams Token Ring Ethernet Twinax Coax Parallel Serial IPDS Postscript L2 PCL 5e TCP/IP NetBIOS IPX/SPX AppleTalk Chapter 10. IBM AS/400 network printers 207 Figure 159. 
LAN-attached Network Printer 17
10.2.2 Example 2: Dual-configuration printer
This example shows the same physical printer, but a second logical device has been configured on the AS/400 system (Figure 160). This second device is configured as a LAN-attached ASCII printer and receives only the PCL data stream. This has been done because the printer is now used for general-purpose office printing such as reports, screen prints, and program listings. Although these can be sent to the IPDS device, it is quicker to send such simple documents using a PCL device description. The printer has been set up to switch automatically between the two operating modes. This is indicated on the printer's operator panel (PCL ETHERNET and IPDS ETHERNET) so users know which particular type of output is being printed.
The second device is configured as an emulated 3812 Model 1 with LAN attachment *IP. This configuration is available at Version 3.0 Release 7.0 and later. Prior to this, a remote output queue can be used for a similar effect. These types of configuration are discussed in 1.4, "AS/400 printer attachment methods" on page 15.
Figure 160. Single LAN-attached Network Printer 17: Two logical devices
10.2.3 Example 3: Shared dual-configuration printer
In this example, in addition to the dual-configuration use made by a single AS/400 system, a second AS/400 system also uses the printer directly. Again, the printer manages the switching between the two different hosts as it does for switching between data streams (Figure 161).
Figure 161. Shared Network Printer 17
10.2.4 Example 4: Shared multi-purpose printer
We can continue to extend the versatility of the network printer by adding options such as a Token-Ring adapter, an envelope feeder, two 500-sheet input bins, and an offset stacker/jogger output bin. Users on the Token-Ring LAN can now send PCL or PostScript jobs to the printer, perhaps using the offset stacker for e-mail, spreadsheets, and other PC documents. The PCs are shown on a Windows NT server in Figure 162, but they can also be on an OS/2, Novell NetWare, or Apple network. They might also use the existing Ethernet adapter instead of Token-Ring. Alternative options might be a 10-bin mailbox feature in place of the offset stacker for printing confidential personnel records, a Twinax adapter instead of one of the LAN adapters, or even a higher-throughput NP24 to cope with even more traffic.
Figure 162. Shared Network Printer 17 with options
These examples show how network printers may be installed in an initially simple manner to satisfy one particular requirement, yet grow with the demands of the enterprise.
10.3 Printer setup
Each model of the Network Printer is shipped with the following manuals:
• User's Guide
• Quick Set-up
• Safety Information
The publication numbers of the manuals vary by language.
In addition, the following publications are shipped when the appropriate attachment options are purchased: • Twinax/Coax Configuration Guide, G544-5241 • Ethernet and Token-Ring Configuration Guide, G544-5240 • Ethernet and Token Ring Configuration Guide (Infoprint 21), S544-5711 The following publications are available for purchase in hardcopy format: • IPDS and SCS Technical Reference, S544-5312 • PCL5e and PostScript Technical Reference, S544-5344 They are also freely available on the World Wide Web at: http://www.printers.ibm.com/manuals.html This redbook is not a substitute for any of these publications. Use the shipped manuals for unpacking and basic setup (for example, installing the toner cartridge, loading paper, and using the operator panel). Use the optional attachment guides to attach the printer to the system. 10.3.1 Printer menu details To print out the Configuration Page for any of the monochrome models, ensure the printer display shows READY. Then press the following keys in sequence: Online->Menu->Item->Enter. If the printer does not show READY, but shows the status of the last job (for example, IPDS ETHERNET), not all the menu printout options are available. The following values are the settings that we recommend for the menus affecting host printing. IBM Network Printer 17 was used as an example. For other models, and for more detailed information, refer to the User's and Configuration Guides. Menu items are only listed here if they relate to host-based printing in some way, either directly or indirectly. 10.3.1.1 TEST MENU Use this menu to print out the Configuration Page (see the preceding paragraphs) as well as listings of IPDS resident fonts. 10.3.1.2 PAPER MENU This controls paper-handling when this is not specified by the host. SOURCE TRAY 1 This is the default tray used when one is not specified in the data stream (for example, when printing a test page). However, if you want to use the auxiliary tray with host jobs, set SOURCE to AUX. This is explained in “Auxiliary tray” on page 213. 210 IBM AS/400 Printing V MANUAL OFF This applies to paper feeding from the auxiliary tray. Set it to OFF (automatic feed) unless you are feeding special stationery, such as stiff card stock and want to feed these singly. AUXSIZE LETTER or A4 or as required You must specify the loaded paper size for the auxiliary tray since this tray does not have a paper size sensor. 10.3.1.3 CONFIG MENU In the case of host printing, this only applies to an SCS data stream. JAMRECOVERY ON 10.3.1.4 TOKEN RING and ETHERNET MENU This menu is only present if a LAN feature (LAN NIC (Network Interface Card)) is installed. PERSONALITY AUTO PORT TIMEOUT 15 Other parameters vary according to your particular requirements (IP address and others). 10.3.1.5 TWINAX SCS MENU This menu is only present if a Twinax feature is installed. CODE PAGE The country code page of your system (037 - U.S., 285 - U.K., for example) 10.3.1.6 TWINAX SETUP MENU This menu is only present if a Twinax feature is installed. SCS ADR An address from 0 to 6 Must be different than the IPDS address. Set this address to OFF if you do not want an SCS-only device description for this printer. IPDS ADR An address from 0 to 6 Must be different than the SCS address. EDGE-EDGE ON For Network Printers 12 and 17 only. This is contrary to the recommendation in the User's Guide, but you have more scope for defining applications that can extend to the edge of the page. Note that the setting in this menu applies to SCS printing only. 
BUFFERSIZE 1024 This applies to IPDS printing only. The SCS buffer size is always 256 bytes.
PORT TIMEOUT 90
10.3.1.7 IPDS MENU
This menu is only present if the IPDS feature (IPDS SIMM (Single Inline Memory Module)) is installed.
PAGEPROT AUTO
DEF CD PAG The country code page of your system (037 - U.S., 285 - U.K., for example)
EMULATION 43xx. Set this to native mode (43xx). Ensure you install on your system the program temporary fixes (PTFs) listed in Table 18 on page 212. Operating the printer in 4028 mode affects font substitution and twinax auto-configuration.
DEF FGID 416 (or any FGID of your choice)
CPI 10.0 (or any CPI to match your FGID choice)
VPA CHK ON
X OFFSET 0
Y OFFSET 0
PAGE WHOLE This setting is explained in 10.5.5, "Using the IPDS menu PAGE setting" on page 218.
EDGE-EDGE ON For Network Printers 12 and 17 only. This is contrary to the recommendation in the User's Guide, but you have more scope for defining applications that can extend to the edge of the page as well as greater compatibility with edge-to-edge printers such as the IBM 3130 and Infoprint 60. See 10.5.6, "Edge-to-edge printing" on page 221, for details.
FONT SUB ON Note that the default is OFF.
IPDS PORT TRING (if Token-Ring attach), ETHER (if Ethernet attach), or TX (if Twinax attach). If you have both LAN and Twinax adapters on the printer, only one may be active for IPDS support at any one time. This does not depend on the setting of the IPDS port, however.
Note: You cannot have a device using two IPDS ports simultaneously. For example, if you have a twinax adapter and a LAN adapter, only one may be active for IPDS at any one time. However, this does not prevent you from configuring both adapters for IPDS use. Despite the setting of this item, the port used for IPDS jobs simply depends on which port is activated first by the STRPRTWTR command (or equivalent command on a non-AS/400 system). You can even share IPDS traffic between the two ports using the PORT TIMEOUT option for each adapter (for example, if the printer is shared by multiple systems, one that uses twinax and the other uses LAN cabling).
EARLY COMPL OFF This item only appears if a twinax adapter is also present. If this item is enabled, the printer sends back a good acknowledgement (ACK) when it has received the data, not when it has printed the data. This improves performance, but runs the risk of losing data (for example, through a paper jam). This is how the printer operates in SCS mode in any case, relying on features such as JAMRECOVERY=ON (in the CONFIG MENU) to reprint a page. Using EARLY COMPL=OFF in the IPDS implementation causes the printer not to send a good ACK until the completed output is in the output bin, together with the host IPDS data stream re-transmitting the page if required. Therefore, error recovery is improved.
10.3.2 Recommended PTF levels
The PTFs listed in Table 18 provide the correct PSF/400 support for the network printers in native mode (EMULATION set to 4312, 4317, or 4324 in the IPDS MENU). The PTFs also add support for the IBM 4247-001 impact printer. The Ethernet and Token-Ring Configuration Guide, G544-5240, may mention PTF SF33025 for V3R7. This PTF is now in the base operating system.
Table 18. PTF support for network printers in native mode (43xx)
Version and Release   APAR      PTF       Cumulative pack
V3R1                  SA52845   SF43120   -
V3R2                  SA52845   SF43431   7014
V3R6                  SA55722   SF42712   -
V3R7 and later        Base operating system
10.3.3 Microcode
Microcode is the internal machine code that resides on the SIMMs and NICs. The Configuration Page may show code levels with an "R" or an "F" after the numbers. These indicate ROM or Flash SIMMs. It is only possible to download new levels of microcode to Flash SIMMs. If the SIMM type is not indicated, it is usually a Flash SIMM. Customers may upgrade the code levels of LAN NIC cards using Network Printer Manager. Token-Ring and Ethernet microcode are available on the Web at: http://www.printers.ibm.com/util.html
A service representative performs other code upgrades. In either case, this should only be done when advised by IBM Technical Support.
10.3.4 Tray and bin selection
This explains the settings required to select the auxiliary tray (an input tray) and the mailbox bins and offset stacker (output bins). Note that the terms tray and bin may be used synonymously in the documentation.
10.3.4.1 Input trays
Input tray selection is outlined in Table 19.
Table 19. Input tray selection
                      DRAWER parameter     FORMFEED parameter    Drawer name on printer
                      in printer file      in printer file
SCS printing          1                    *AUTOCUT              1
                      2                    *AUTOCUT              2
                      3                    *AUTOCUT              Auxiliary
                      any                  *CUT                  Manual Tray
                      E1                   *AUTOCUT              Envelope Feeder
IPDS and AFP          1                    *AUTOCUT              1
printing              2                    *AUTOCUT              2
                      3                    *AUTOCUT              3
                      any                  *CUT                  Auxiliary Tray (with MANUAL=OFF)
                      any                  *CUT                  Auxiliary Tray (with MANUAL=ON)
                      E1                   *AUTOCUT              Envelope Feeder
Auxiliary tray
If you want to use the auxiliary tray with SCS jobs, but do not want to change printer files to specify *CUT on the FORMFEED parameter, use the following workaround:
1. Set the source tray in the PAPER MENU to AUX. Note: This also results in the AUX tray being used for test pages and font listings.
2. Set AUXSIZE in the PAPER MENU to the paper size that is used (for example, LETTER or A4).
3. Set MANUAL in the PAPER MENU to OFF. Otherwise, you are prompted to load each piece of paper.
4. Set the IPDSPASTHR parameter in the PSFCFG object to *NO.
5. In the printer file, specify DRAWER=4 (or any number greater than 3). The printer cannot find drawer 4 so it picks from the default tray defined in step 1.
To use this tray with PCL jobs with either an ASCII LAN device description or a remote output queue, the WSCST (Workstation Customizing Object) must be edited. Refer to Chapter 6, "Advanced Host Print Transform Customization" in AS/400 Printing IV, GG24-4389, for details of modifying such an object. The source text you need to edit is:
:litdata.
:DWRNBR VAROFFSET= 3 VARLEN=0
Do not mix jobs that use continuous stacking and individual output bin selection. The printer will honor the latter. Therefore, jobs might be mixed in the three finisher trays. If you want to use the continuous stacking feature, set this at the printer and leave the printer file OUTBIN parameter at its default value of *DEVD (device default). OUTBIN parameter in printer file Tray name on printer 1 Main output bin 2 Offset stacker/jogger 3 Mailbox Bin 1 4 Mailbox Bin 2 5 Mailbox Bin 3 6 Mailbox Bin 4 7 Mailbox Bin 5 8 Mailbox Bin 6 9 Mailbox Bin 7 10 Mailbox Bin 8 11 Mailbox Bin 9 12 Mailbox Bin 10 Chapter 10. IBM AS/400 network printers 215 If an output bin is selected but not present, the output is sent to the bin indicated by the OUTPUT setting in PAPER MENU. Table 21. Output bin selection 10.4 Attachment information The diagram in Figure 158 on page 206 summarizes the connectivity options for the network printer range. The two main methods of attaching a network printer to the AS/400 system are: • As PCL printers using: – A remote output queue (V3R1/V3R6 and later) – A direct TCP/IP LAN device description (V3R7 and later) – Through a PC, 5250 terminal, or LAN attachment device using host print transform (HPT) • As IPDS or SCS printers using: – Twinax – LAN (using Token-Ring or an Ethernet interface card) For details of these methods, refer to one or more of the following sources: • Chapter 11, “Configuring LAN-attached printers” on page 223, in this publication • Chapter 3, “Attaching to the AS/400 (Twinax)” in Twinax/Coax Configuration Guide, G544-5241 • Chapter 10, “AS/400 to print PCL and PostScript files” and Chapter 11, “AS/400 to print IPDS files” in Ethernet and Token-Ring Guide, G544-5240 10.4.1 Network Printer Manager This software should be regarded as essential for managing and maintaining a network of network printers. Running on a PC client such as OS/2, Windows 95, or Windows NT, it permits remote configuration and management of the network printer range. For full details, refer to the Web site at: http://www.printers.ibm.com/npm.html OUTBIN parameter in printer file Tray name on printer and output orientation 1 Main output bin, face down 2 Side output tray, face up 3 Top tray of Finisher, face down 4 Middle tray of Finisher, face down 5 Bottom tray of Finisher, face down 6 Top tray of Finisher, face up 7 Middle tray of Finisher, face up 8 Bottom tray of Finisher, face up 9 Continuous stacking, face down 216 IBM AS/400 Printing V It may be downloaded free of charge. It is also supplied on the CD-ROM that accompanies the printers. For a system or network administrator, the utility may be used for: • Configuring the printer after basic set-up by the end-user. • Information regarding the printers' configuration such as paper-handling capabilities, installed features, and usage data. • Management of the printer in day-to-day operations, including: – Notifying you of problems as soon as they appear and before they are reported by the user. – Where the problem lies (for example, a cover open or a paper jam). – Advance notice of certain conditions (for example, low toner level). – Remote reset of the printer, if necessary. • Upgrading Token-Ring or Ethernet software remotely “on the fly” (that is, without ending the writer to the printer). The version of Network Printer Manager for the Web may also be used to manage printers that provide standard printer compatibility within network environments (RFC 1759) such as the Hewlett-Packard 5Si and Lexmark Optra N. 
10.5 Output presentation This section explains why the presentation of your printed output may vary depending on how the network printer is configured. 10.5.1 IPDS, AFP=*YES This refers to the DEVTYPE and AFP parameters in the printer device description. For this mode, it is important to remember that the physical page size is determined by the printer reporting its loaded paper size back to PSF/400. The logical page size is dictated by the PAGESIZE parameter in the printer file. 10.5.2 IPDS, AFP=*NO This refers to the DEVTYPE and AFP parameters in the printer device description. For this mode, it is important to remember that both the physical and logical page sizes are determined by the page size defined in the spooled file attributes. Therefore, the physical page and the logical page sizes are the same as far as OS/400 is concerned. 10.5.3 SCS mode SCS mode is the operating mode when the device description is an emulated 3812 Model 1. The page size depends on the settings on the printer, together with any changes made by data stream commands such as lines per inch or characters per inch. Such commands override settings made at the printer. Chapter 10. IBM AS/400 network printers 217 10.5.4 Using the QPRTVALS data area A system-wide data area may be set up for your printer writers, if so desired. This supports a number of functions for all *IPDS, AFP=YES printers, not just the network printers. To create the data area, issue the following commands: CRTDTAARA DTAARA(QUSRSYS/QPRTVALS) TYPE(*CHAR) LEN(256) CHGOBJOWN OBJ(QUSRSYS/QPRTVALS) OBJTYPE(*DTAARA) NEWOWN(QSYS) CUROWNAUT(*SAME) GRTOBJAUT OBJ(QUSRSYS/QPRTVALS) OBJTYPE(*DTAARA) USER(*PUBLIC) AUT(*ALL) The first command creates the data area (note that you must create it in library QUSRSYS). The second command assigns ownership of the object to QSYS, and the third command makes it available to all users. The functions provided by QPRTVALS are not available if the latter steps are not performed. You can check the setting of QPRTVALS at any time by typing: DSPDTAARA DTAARA(QUSRSYS/QPRTVALS) The functions are enabled by the character “Y” being present in one of the first six positions of the data area. The available functions are: QPRTVALS Data area Function Position 1 Logical page origin is the same as physical page origin. Position 2 Change rotation of the logical page (on older printers). Position 3 Emulate a 3835-1 unprintable border on a 3835-2 printer. Position 4 Do not move overlays with front and back margins. Position 5 Increase the *COR top margin. Position 6 Use scalable fonts for MULTIUP and COR. Most of the settings for QPRTVALS are covered in Chapter 5, “AS/400 Printing Enhancements” in AS/400 Printing IV, GG24-4389. 10.5.4.1 Logical and physical page origin With a printer configured as *IPDS, AFP=*YES, the physical page size is returned to PSF/400 by the printer, including dimensions of any unprintable borders. PSF/400 offsets the logical page onto the physical page, taking into account the unprintable border. This function of QPRTVALS puts the logical page back on top of the physical page origin again. If you are designing new applications and you can place all of your data in the printable area, we advise that you map the logical page origin to the physical page origin using this function of QPRTVALS so that output from your new applications is aligned correctly, whether you print to printers with or without an unprintable area. This is also the output presentation seen on a printer configured as *IPDS, AFP=*NO. 
To activate this function, ensure the printer writer is ended and type: CHGDTAARA DTAARA(QUSRSYS/QPRTVALS (1 1)) VALUE('Y') 218 IBM AS/400 Printing V This places the character “Y” in the first byte of the data area. Then restart the print writer. 10.5.4.2 Increased COR top margin This is one of the few changes you can make to the COR facility. COR is used when the Page Rotation parameter in the printer file is set to *COR, and is frequently invoked when the rotation is *AUTO. System-supplied printer files default to *AUTO. *COR presents your output on a logical page size of 11 inches wide by 8.5 inches deep (that is, in landscape orientation). It also increases the character-per-inch value (for example, from 10 or 12 cpi to 13.3 or 15 cpi). When printing on punched paper, the top margin of 0.5 inches may not be enough for the text to clear the holes. You can increase the margin to 0.75 inches by using this part of QPRTVALS. Note that the lines-per-inch (LPI) value is also slightly increased, compressing the lines of output slightly. Position 5 for QPRTVALS also works if the logical page has been rotated 180 degrees using the PSFCFG parameter EDGEORIENT. To activate this function, ensure the printer writer is ended and type: CHGDTAARA DTAARA(QUSRSYS/QPRTVALS (5 1)) VALUE('Y') This places the character “Y” in the fifth byte of the data area. 10.5.5 Using the IPDS menu PAGE setting This menu item determines how data is positioned on the page at the printer level. 10.5.5.1 PAGE=WHOLE This is the default (that is, use the whole page for printing). Any changes to the positioning of data are made at the host. Changes to the X or Y-OFFSET values, as described later, are an exception, these are changes made at the printer microcode level. The host is unaware of these differences. For Network Printer 24, data may fall into the unprintable area if position 1 in QPRTVALS is set to “Y”. If the printer file FIDELITY keyword is set to *ABSOLUTE and the IPDS MENU VPA CHK item is ON, an IPDS negative acknowledgement (NACK) with sense data X'08C1..00' is generated and the job is held. Figure 163 illustrates the effect of this parameter on the Network Printer 24. Chapter 10. IBM AS/400 network printers 219 Figure 163. Output presentation with PAGE=WHOLE on Network Printer 24 We strongly recommend that you use the PAGE=WHOLE setting for IPDS printing. However, for applications with particular requirements, you can use other page settings, as discussed here. 10.5.5.2 PAGE=PRINT This setting re-positions the logical page origin, attempting to print at least some of the data even if some is lost. The logical page origin is moved to avoid the unprintable border (regardless of whether any data falls in the unprintable area). This is usually done to preserve the data in the top left-hand corner of the page so data to the right of the page or in the lower-right area may be lost (fall in the unprintable border or even off the physical page). Whether an exception is reported depends on the setting of the VPA CHK item (Valid Printable Area Check). If you use the PAGE=PRINT setting, set VPA CHK=OFF. Figure 164 on page 220 illustrates the effect of this parameter on Network Printer 24. 220 IBM AS/400 Printing V Figure 164. Output presentation with PAGE=PRINT We recommend that you use this setting only if you are printing non-critical data. 
10.5.5.3 PAGE=COMP1 This setting is similar to PAGE=PRINT except that the lines per inch for any lines of IPDS text are compressed in an attempt to fit the data on the page and keep it out of the unprintable border. This setting is not recommended for new applications. 10.5.5.4 PAGE=COMP2 This setting works the same way as PAGE=COMP1, but with more IPDS positioning commands. Neither of these settings move images, graphics, or barcodes. This setting is not recommended for new applications. Chapter 10. IBM AS/400 network printers 221 10.5.6 Edge-to-edge printing We recommend that you set the IPDS Menu item EDGE-EDGE to ON for AFP printing on Network Printer 12 and Network Printer 17 models. Only these models have edge-to-edge printing ability. 10.5.6.1 Network Printer 12/17 and 24 printable area compatibility The network printers have unprintable borders of 4mm at the edges of the paper (for A4, the unprintable borders on the long edges are 3.86mm). The Network Printer 12 and Network Printer 17 can print to the edge of the paper if the EDGE-to-EDGE item is switched on (in the TWINAX SETUP menu for SCS printing and in the IPDS MENU for IPDS printing). Network Printer 24 cannot print to the edge of the paper. It maintains its unprintable borders as previously explained. Therefore, in a network of mixed Network Printer 12/17 and Network Printer 24 printers, print output might be positioned differently. Normally this is not an issue. If the application uses very precise formatting, or exact alignment with preprinted or electronic forms, steps must be taken to ensure compatible output. This may be achieved either by adjusting host settings, or by adjusting the individual printer settings as follows: • Host adjustment: Rather than manage the setup of multiple individual printers, we prefer that you control adjustments to the logical page at the host. To do this, follow these steps: 1. Set Network Printer 12/17 printers to use edge-to-edge printing. 2. Leave X and Y-offsets (in IPDS MENU) at 0 on all models. 3. Use position 1 of QPRTVALS to align the logical page origin with the physical page origin. 4. Design applications to avoid the unprintable areas of the Network Printer 24. Using this as a basis ensures consistency of output across present and future AFP printers. • Printer adjustments: If the EDGE-EDGE item in the IPDS MENU is set to OFF, the Network Printer 12 and Network Printer 17 unprintable borders are the same as those of the NP24 with the exception that an A4 page has slightly different unprintable borders on the long edge. This is shown in Figure 165 on page 222. The difference between these borders is small (4 - 3.86 mm = 0.14mm). However, if the data in your print application is precise (for example, you need to place a field of text inside a pre-printed box), you might see alignment differences between the Network Printer 12/17 and the Network Printer 24 when you are not using edge-to-edge printing. 222 IBM AS/400 Printing V Figure 165. Relative printable areas of Network Printer 12/17 and Network Printer 24 printers If it is necessary to make an adjustment on the printer, the IPDS MENU has options to adjust the offset of the printed page (that is, adjust the origin at which the logical page is placed on the physical page). For the Network Printer 24 printer, you can move the left-hand unprintable border by 0.14mm to match that of the Network Printer 12/17. However, the permissible values for the X-OFFSET and Y-OFFSET are measured in pels. 
Because this is a 600-pel printer, (that is, 600 pels per inch), you can calculate the following measurements: • 600 pels = 1 inch • 25.4mm = 1 inch, therefore: • 1 mm = 600 / 25.4 pels • 1 mm = 23.6 pels • 0.14 mm is approximately 2 to 3 pels Therefore, you can set the Network Printer 24s X-OFFSET to 2 or 3 in the IPDS MENU. Note: This affects only IPDS printing, and not PCL or PostScript printing. This should be sufficiently similar to the settings of the Network Printer 12 and Network Printer 17 with edge-to-edge off. We recommend that you do not adjust individual printer settings unless absolutely necessary. Host adjustments, together with edge-to-edge printing, ensures that your output presentation is consistent across your network printer inventory. © Copyright IBM Corp. 2000 223 Chapter 11. Configuring LAN-attached printers Several printer attachment methods are available on the AS/400 system. This appendix provides information on how to configure AS/400 LAN-attached IPDS or ASCII printers. This chapter is divided in two parts: • Configuring LAN-attached IPDS printers • Configuring LAN-attached ASCII printers For considerations on LAN-attached IPDS printers, see 1.4.2, “IPDS printers LAN-attached” on page 16. For considerations on LAN-attached ASCII printers, see 1.4.5, “ASCII printers LAN-attached” on page 19. For a discussion on IPDS printers versus ASCII printers, see 1.6.4, “USERASCII spooled files” on page 25. Note: For additional configuration information, see Ethernet and Token-Ring Configuration Guide, G544-5240. 11.1 Configuring LAN-attached IPDS printers The following IBM AS/400 IPDS printers can be LAN-attached to the AS/400 system: • Any IPDS printer with an IBM Advanced Function Common Control Unit (AFCCU), including: – IBM 3130 – IBM 3160 – Infoprint 60 – Infoprint 62 – Infoprint 2000 – Infoprint 3000 – Infoprint 4000 • IBM AS/400 network printers with the appropriate LAN card, including: – IBM Network Printer 12 – IBM Network Printer 17 – Infoprint 20 – Infoprint 21 – Infoprint 32 – Infoprint 40 – Infoprint 70 Note: For more information on network printers, see Chapter 10, “IBM AS/400 network printers” on page 205. • The IPDS printers IBM 3812, 3816, 3912, 3916, 3112, 3116, 4028, 4230, and 6400 using the I-DATA 7913 Printer LAN Attachment box (TCP/IP Token-Ring or Ethernet) Note: See 11.1.3, “TCP/IP BOOT service for V4R1 and later” on page 237, for information on how to change the I-DATA 7913 setting. The configuration of LAN-attached IPDS printers differ depending on the version and release of the OS/400. This section includes an example for Version 3.0 Release 2.0 and Version 3.0 Release 7.0 and later. 224 IBM AS/400 Printing V Note: For previous releases (V3R1 or V3R6), see 12.1.7, “Configuring LAN-attached IPDS printers” on page 257, for instructions. If your TCP/IP network is not already set up on your AS/400 system, see 12.1.1, “Setting up a TCP/IP network on the AS/400 system” on page 253. The configuration steps are: 1. Check that Print Services Facility/400 (PSF/400) is installed on your system (see 1.3.2.3, “Is PSF/400 installed” on page 11). 2. To avoid any problem, check to have the latest cumulative PTFs installed on your system (see 12.10, “Additional information” on page 278). 3. Complete your printer setup. If your printer is an IBM Network Printer, see 10.3, “Printer setup” on page 209, for detailed information. 4. Create a printer device description. 5. Create a PSF configuration object. 6. 
Ping the TCP/IP address, vary on the printer, and start the printer writer. For detailed information, see 12.1.3, “Pinging the TCP/IP address” on page 254. 11.1.1 Configuring LAN-attached IPDS printers on V3R2 If you migrate from V3R1 to V3R2, the WRKAFP2 data area is replaced by a PSF configuration object created using the Create PSF Configuration (CRTPSFCFG) command. During the first Start Print Writer (STRPRTWTR) after the migration to V3R2, the system automatically creates a PSF configuration object using the values specified in the data area (WRKAFP2). The name of the PSF configuration object is the same as the printer device description name, and the PSF configuration object is placed into the library QGPL. 11.1.1.1 Creating the device description To create the device description for your printer, follow these steps: 1. Type the Create Device description Printer (CRTDEVPRT) command on any command line, and press the F4 (Prompt) function key. A display appears as shown in Figure 166. Figure 166. Create Device Description (Printer) V3R2 (Part 1 of 6) Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . PRT01 Name Device class . . . . . . . . . . *RMT *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . 0 0, 1, 2, 3, 4, 10, 13, 301... Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 225 2. On this display, enter the following parameter values: • Device description: The name of your printer (in this example, PRT01) • Device class: *RMT • Device type: *IPDS • Device model: 0 Press the Enter key to continue. The display shown in Figure 167 appears. Figure 167. Create Device Description (Printer) V3R2 (Part 2 of 6) 3. On this display, set the Advanced function printing parameter value to *YES. Note: Any IPDS LAN-attached printer must be configured AFP=*YES. Then, press the Enter key to continue. A display appears as shown in Figure 168. Figure 168. Create Device Description (Printer) V3R2 (Part 3 of 6) 4. On this display, set the AFP attachment parameter value to *APPC. Press the Enter key to continue. A display appears like the example shown in Figure 169 on page 226. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *RMT *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... Advanced function printing . . . *YES *NO, *YES Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *RMT *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... Advanced function printing . . . *YES *NO, *YES AFP attachment . . . . . . . . . *APPC *WSC, *APPC Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 226 IBM AS/400 Printing V Figure 169. Create Device Description (Printer) V3R2 (Part 4 of 6) 5. 
On this display, enter the following parameter values: • Online at IPL: *YES • Font identifier: 11 (or another font ID used as the default font) • Form feed: Specifies the form feed attachment used for this printer. Enter *AUTOCUT for a page printer, or *CONT for a continuous forms printer (in this example, *AUTOCUT). Leave the default values for the other parameters, and press the Enter key to continue. The display shown in Figure 170 appears. Figure 170. Create Device Description (Printer) V3R2 (Part 5 of 6) Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *RMT *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... Advanced function printing . . . *YES *NO, *YES AFP attachment . . . . . . . . . *APPC *WSC, *APPC Online at IPL . . . . . . . . . *YES 0-65535 Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Bottom. F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *RMT *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... Advanced function printing . . . *YES *NO, *YES AFP attachment . . . . . . . . . *APPC *WSC, *APPC Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Printer error message . . . . . *INQ *INQ, *INFO More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 227 You can leave the default value *INQ for the Printer error message parameter. To continue, press the Page Down key. The display shown in Figure 171 appears. Figure 171. Create Device Description (Printer) V3R2 (Part 6 of 6) 6. Enter any name for the Remote location parameter (in this example, TCPIP) and a text description for device configuration object. You can leave the default parameter values for the other parameters. Then, press the Enter key to create the device description. You receive the message Description for device PRT01 created. 11.1.1.2 Creating the PSF configuration object for V3R2 To create the PSF configuration support, follow these steps: 1. Type the Create PSF Configuration (CRTPSFCFG) command on any command line, and press F4 (Prompt). The display shown in Figure 172 on page 228 appears. Create Device Desc (Printer) (CRTDEVPRT) Message queue . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Maximum pending request . . . . 6 1-31 Print while converting . . . . . *YES *NO, *YES Print request timer . . . . 
. . *NOMAX 1-3600, *NOMAX Form definition . . . . . . . . F1C10110 Name Library . . . *LIBL Name, *LIBL, *CURLIB Character identifier: Name, *LIBL, *CURLIB Graphic character set . . . . *SYSVAL 1-32767, *SYSVAL Code page . . . . . . . . . . 1-32767 Remote location . . . . . . . . TCPIP Name Local location . . . . . . . . *NETADR Name, *NETADR Remote network identifier . . . *NETADR Name. *NETADR, *NONE Mode . . . . . . . . . . . . . . QSPWTR Name, SPWTR, *NETADR Text description . . . . . . . . Device description for PRT01 Dependent location name . . . . *NONE Name, *NONE Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 228 IBM AS/400 Printing V Figure 172. Create PSF Configuration object V3R2 (Part 1 of 3) 2. On this display, enter the following parameter values: • PSF configuration: Enter the name of the PSF configuration object. Must be the same name as the name of the printer device description (in this example, "PRT01"). • Library: QGPL (the default or any library name). • User resource library list: Specifies the user resource library list to be used for searching AFP resources. • Device resource library list: Specifies the device resource library list to be used for searching AFP resources (in this example, *JOBLIBL). • IPDS pass through: IPDS pass through reduces the PSF/400 conversion time for some *SCS and *IPDS spooled files. Enter *YES or *NO (in this example, *YES). • Activate release timer: Specifies the point at which the release timer (RLSTMR) is activated. Leave the default value "*NORDYF". • Release timer: This is the timer whose value is referenced by the Activate release timer (ACTRLSTMR) parameter. If the ACTRLSTMR parameter is set to *NORDYF, the release timer parameter specifies the amount of time to wait after the last page of the last ready spooled file has printed before releasing the printer (in this example, *SEC15). Note: If only one system is using the printer, specify *NOMAX. There is no need to release the printer for another system. • Restart timer: Specifies the amount of time to wait before the printer writer attempts to re-establish either a session or dialog. • SNA retry count: Specifies the number of retry attempts to establish a session. This is the number of retries that PSF/400 makes to establish a connection with a printer. Create PSF Configuration (CRTPSFCFG) Type choices, press Enter. PSF configuration . . . . . . . > PRT01 Name Library . . . . . . . . . . . > QGPL Name, *CURLIB User resource library list . . . *JOBLIBL *JOBLIBL, *CURLIB, *PRTF... Device resource library list . . *DFT Name, *DFT + for more values IPDS pass through . . . . . . . *Yes *NO, *YES Activate release timer . . . . . *NORDYF *NORDYF, *IMMED... Release timer . . . . . . . . . *SEC15 1-1440, *NOMAX, *SEC15... Restart timer . . . . . . . . . *IMMED 1-1440, *IMMED SNA retry count . . . . . . . . 2 1-99, *NOMAX Delay time between SNA retries 10 0-999 Text 'description' . . . . . . . PSF configuration object for PRT01 More... F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 229 Note: Even if the parameter name is “SNA retry count”, this is also valid for TCP/IP when the PTF SF42745 (V3R2) is installed on the system. • Delay time between retries: 10 • Text 'description': A description for your PSF configuration object To continue, press F10 (additional parameters) and then the Page Down key. The display shown in Figure 173 appears. 
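Note: The same object can also be created directly from a command line instead of through the prompted displays. The command below is a minimal sketch only; keyword names that are not spelled out in this chapter (for example, PSFCFG and IPDSPASTHR) are assumptions that should be confirmed by prompting CRTPSFCFG with F4, and the IP address is a placeholder. The remote location, TCP/IP port, and activation timer values correspond to the additional parameters shown in Figure 173:
CRTPSFCFG PSFCFG(QGPL/PRT01) IPDSPASTHR(*YES) ACTRLSTMR(*NORDYF)
          RLSTMR(*SEC15) RMTLOCNAME('9.99.99.999') PORT(5001)
          ACTTMR(*NOMAX) TEXT('PSF configuration object for PRT01')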
Figure 173. Create PSF Configuration object V3R2 (Part 2 of 3) 3. Enter the following parameter values: • Blank page: Specifies whether PSF/400 issues a blank page after every separator page and spooled file copy that contains an odd number of pages. This parameter is for a continuous forms printer. • Page size control: Specifies whether the page size (forms) in the printer is set by PSF/400. This parameter only applies to: IBM 4230, 4247, 4028, 6404, 6408, 6412, and IBM network printers. Note: If you change the drawers for using different paper sizes, enter *YES for this parameter. • Resident fonts: Specifies if the printer resident fonts are used by PSF/400. • Resource retention: Specifies whether resource retention across spooled files is enabled. • Edge orient: When the page rotation value of a spooled file is *COR or *AUTO and the system rotates the output, a 90-degree rotation is normally used. When this parameter is *YES, PSF/400 rotates the output 270 degrees instead of 90 degrees. • Remote location name: The IP address of your printer (in this example, 9.99.99.999). • TCP/IP port: 5001 • TCP/IP activation timer: *NOMAX Create PSF Configuration (CRTPSFCFG) Type choices, press Enter. Additional Parameters Blank page . . . . . . . . . . . *NO *YES, *NO Page size control . . . . . . . *NO *NO, *YES Resident fonts . . . . . . . . . *YES *YES, *NO Resource retention . . . . . . . *YES *YES, *NO Edge orient . . . . . . . . . . *NO *YES, *NO Remote location: Name or address . . . . . . . '9.99.99.999' TCP/IP port . . . . . . . . . . 5001 1-65535, *NONE TCP/IP activation timer . . . . *NOMAX 1-2550, *NOMAX More... F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys 230 IBM AS/400 Printing V Note: If only one AS/400 system uses the printer, use the default value (170 seconds). If more than one system shares the printer, set the value to *NOMAX, which causes PSF/400 to wait to establish a connection. To continue, press the Page Down key. The display shown in Figure 174 appears. Figure 174. Create PSF Configuration object V3R2 (Part 3 of 3) 4. Leave the default parameters values, and press the Enter key to create the PSF configuration object. 11.1.2 Configuring LAN-attached IPDS printers on V3R7 and later If you migrate from V3R1, V3R2, or V3R6 to V3R7 or later, always delete all the printer device descriptions and the associated WRKAFP2-created data areas (V3R1 and V3R6). You can check that all objects are deleted by using the Work with Objects (WRKOBJ) command and specifying the name of the printer as the object name. 11.1.2.1 Creating a device description To create the device description for your printer, complete these steps: 1. Type the Create Device description Printer (CRTDEVPRT) command on any command line and press F4 (Prompt). The display shown in Figure 175 appears. Create PSF Configuration (CRTPSFCFG) Type choices, press Enter. PSF defined option . . . . . . . *NONE *NONE + for more values Replace . . . . . . . . . . . . *YES *YES, *NO Authority . . . . . . . . . . . *LIBCRTAUT Name, *LIBCRTAUT, *CHANGE... Bottom F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 231 Figure 175. Create Device Description-V3R7 and later (Part 1 of 6) 2. On this display, enter the following parameter values: • Device description: The name of your printer (in this example, "PRT01") • Device class: *LAN • Device type: *IPDS • Device model: 0 Then, press the Enter key to continue. 
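Note: Relating to the migration advice at the start of 11.1.2, a quick way to confirm that the old device description and the WRKAFP2-created data area are really gone is to list every object that carries the printer name (PRT01 is the example name used here):
WRKOBJ OBJ(*ALL/PRT01) OBJTYPE(*ALL)
An empty list means there is nothing left to delete before the new configuration is created.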
The display shown in Figure 176 appears. Figure 176. Create Device Description-V3R7 and later (Part 2 of 7) On this display, set the LAN attachment parameter value to *IP. To continue, press the Enter key. The display shown in Figure 177 on page 232 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . PRT01 Name Device class . . . . . . . . . . *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . 0 0, 1, 2, 3, 4, 10, 13, 301... Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 232 IBM AS/400 Printing V Figure 177. Create Device Description-V3R7 and later (Part 3 of 7) 3. On this display, leave the default value *YES for the advanced function printing parameter. To continue, press the Enter key. The display shown in Figure 178 appears. Figure 178. Create Device Description-V3R7 and later (Part 4 of 7) 4. On this display, enter the following parameter values: • Port number: 5001 • Online at IPL: *YES • Font identifier: 11 (or another font ID used as the default font) • Form feed: Specifies the form feed attachment used for this printer. Enter *AUTOCUT for page printer, or *CONT for a continuous forms printer (in this example, *AUTOCUT). Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Advanced function printing . . . *YES *NO, *YES Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Advanced function printing . . . *YES *NO, *YES Port number . . . . . . . . . . 5001 0-65535 Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 233 Leave the default values for the other parameters, and press the Enter key to continue. The display shown in Figure 179 appears. 
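Note: Entered directly rather than through the prompts, the values supplied so far correspond roughly to the following command. This is a sketch only; the keyword names (in particular LANATTACH) are assumptions that should be confirmed with F4 prompting, and the remote location and the user-defined PSF configuration object are added on the displays that follow:
CRTDEVPRT DEVD(PRT01) DEVCLS(*LAN) TYPE(*IPDS) MODEL(0)
          LANATTACH(*IP) AFP(*YES) PORT(5001) ONLINE(*YES)
          FONT(11) FORMFEED(*AUTOCUT)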
Figure 179. Create Device Description-V3R7 and later (Part 5 of 7) You can leave the default value *INQ for the printer error message parameter. To continue, press the Page Down key. The display shown in Figure 180 appears. Figure 180. Create Device Description-V3R7 and later (Part 6 of 7) Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > PRT01 Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > *IPDS 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 0 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Advanced function printing . . . *YES *NO, *YES Port number . . . . . . . . . . 5001 0-65535 Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Printer error message . . . . . *INQ *INQ, *INFO More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Message queue . . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Activation timer . . . . . . . . *NOMAX 1-2550, *NOMAX Maximum pending request . . . . 6 1-31 Print while converting . . . . . *YES *NO, *YES Print request timer . . . . . . *NOMAX 1-3600, *NOMAX Form definition . . . . . . . . F1C10110 Name Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Remote location: Name or address '9.99.99.99' User-defined options . . . . . . *NONE Name, *NONE + for more values More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 234 IBM AS/400 Printing V 5. On this display, enter the following parameter values: • Activation timer: *NOMAX Note: If only one AS/400 system uses the printer, use the default value (170 seconds). If more than one system shares the printer, set the value to *NOMAX, which causes PSF/400 to wait to establish a connection. • Remote location: The IP address of your printer (in this example, 9.99.99.99). You can leave the default values for the other parameters. To continue, press the Page Down key. The display shown in Figure 181 appears. Figure 181. Create Device Description-V3R7 and later (Part 7 of 7) On this display, enter the following parameter values: • User-defined object: The name of the PSF configuration object (the one created in the next step with the CRTPSFCFG command, in this example, NP17) • Library: Any library name (in this example, QGPL) • Object type: *PSFCFG • Text 'description': A text description for your printer configuration object You can leave the default parameter values for the other parameters. Then, press the Enter key to create the device description. You will receive the message Description for device PRT01 created. 11.1.2.2 Creating the PSF configuration object To create the PSF configuration support, follow these steps: 1. Enter the Create PSF configuration (CRTPSFCFG) command on any command line, and press F4 (Prompt). The display shown in Figure 182 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. User-defined object: Object . . . . . . 
. . . . . . NP17 Name, *NONE Library . . . . . . . . . . QGPL Name, *LIBL, *CURLIB Object type . . . . . . . . . *PSFCFG *DTAARA, *DTAQ, *FILE... Data transform program . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB User-defined driver program . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Text 'description' . . . . . . . Device description for PRT01 Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 235 Figure 182. Create PSF Configuration object-V3R7 and later (Part 1 of 3) 2. On this display, enter the following parameter values: • PSF configuration: Any name, but must correspond to the name specified in the DEVD user-defined object parameter. Note: The same PSF configuration object can be used for more than one printer. • Library: Any library name, but must correspond to the name specified in the DEVD user-defined object library parameter. • User resource library list: Specifies the user resource library list to be used for searching AFP resources. • Device resource library list: Specifies the device resource library list to be used for searching AFP resources. • IPDS pass through: IPDS pass through reduces the PSF/400 conversion time for some *SCS or *IPDS spooled files. Enter *YES or *NO (in this example, *YES). • Activate release timer: Specifies the point at which the release timer (RLSTMR) is activated. Leave the default value NORDYF. • Release timer: This is the timer whose value is referenced by the Activate Release Timer (ACTRLSTMR) parameter. If the ACTRLSTMR parameter is set to *NORDYF, the release timer parameter specifies the amount of time to wait after the last page of the last ready spooled file has printed before releasing the printer (in this example, *SEC15). Note: If only one system is using the printer, specify *NOMAX. There is no need to release the printer for another system. • Restart timer: Specifies the amount of time to wait before the printer writer attempts to re-establish either a session or dialog. Create PSF Configuration (CRTPSFCFG) Type choices, press Enter. PSF configuration . . . . . . . > NP17 Name Library . . . . . . . . . . . QGPL Name, *CURLIB User resource library list . . . *JOBLIBL *JOBLIBL, *CURLIB, *PRTF... Device resource library list . . *DFT Name, *DFT + for more values IPDS pass through . . . . . . . *YES *NO, *YES Activate release timer . . . . . *NORDYF *NORDYF, *IMMED... Release timer . . . . . . . . . > *SEC15 1-1440, *NOMAX, *SEC15... Restart timer . . . . . . . . . *IMMED 1-1440, *IMMED APPC and TCP/IP retry count . . 15 1-99, *NOMAX Delay between APPC retries . . . 90 0-999 Automatic session recovery . . . *NO *NO, *YES Acknowledgment frequency . . . . 100 1-32767 Text 'description' . . . . . . . PSF configuration object F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys 236 IBM AS/400 Printing V • APPC and TCP/IP retry count: Named SNA retry count in V3R7 and V4R1. Specifies the number of retry attempts to establish a session. This is the number of retries that PSF/400 makes to establish a connection with a printer. Note: Even if the name is “SNA retry count” in V3R7 and V4R1, this is also valid for TCP/IP when PTF SF42655 (V3R7) or SF43250 (V4R1) is installed on the system. • Delay time between retries: 90 (the default value). 
• Automatic session recovery (V4R2): Specifies whether PSF/400 automatically attempts to resume printing when a session has been unexpectedly ended by a device. • Acknowledgement frequency (V4R2): Specifies the frequency, in pages, with which PSF/400 sends IPDS acknowledgment requests to a printer. The acknowledgment request responses from the printer contain information as to the status of pages sent to the printer. • Text 'description': A description for your PSF configuration object. To continue, press F10 (additional parameters) and then the Page Down key. The display shown in Figure 183 appears. Figure 183. Create PSF Configuration object V3R7 and later (Part 2 of 3) 3. Enter the following parameter values: • Blank page: Specifies whether PSF/400 issues a blank page after every separator page and spooled file copy that contains an odd number of pages. This parameter is for continuous forms printers. • Page size control: Specifies whether the page size (forms) in the printer is set by PSF/400. This parameter only applies to the following printers: IBM 4230, 4247, 4028, 6404, 6408, 6412, and IBM network printers. Note: If you change the drawers for using different paper sizes, enter *YES for this parameter. Create PSF Configuration (CRTPSFCFG) Type choices, press Enter. Additional Parameters Blank page . . . . . . . . . . . *NO *YES, *NO Page size control . . . . . . . *NO *NO, *YES Resident fonts . . . . . . . . . *YES *YES, *NO Resource retention . . . . . . . *YES *YES, *NO Edge orient . . . . . . . . . . *NO *YES, *NO Use outline fonts . . . . . . . *YES *YES, *NO PSF defined option . . . . . . . *NONE + for more values Font substitution messages . . . *YES *YES, *NO Capture host fonts at printer . *NO *NO, *YES Cut sheet emulation mode . . . . *NONE *NONE, *CHKFIRST, *CHKALL Replace . . . . . . . . . . . . *YES *YES, *NO Authority . . . . . . . . . . . *LIBCRTAUT Name, *LIBCRTAUT, *CHANGE... Bottom F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 237 • Resident fonts: Specifies if the printer resident fonts are used by PSF/400. • Resource retention: Specifies whether the resource retention across spooled files is enabled. • Edge orient: When the page rotation value of a spooled file is *COR or *AUTO and the system rotates the output, 90 degree rotation is normally used. When this parameter is *YES, PSF/400 rotates the output 270 degrees instead of 90 degrees. • Use Outline fonts: Specifies whether the user wants the requested downloadable AFP raster fonts replaced with the equivalent downloadable outline fonts. Note: In V3R7 and V4R1, the Remote location name, TCP/IP port, and Activation timer parameters are displayed in the CRTPSFCFG command. They are ignored, since they are part of the printer device description. • Font substitution messages (V4R2): Specifies whether PSF/400 logs the font substitution message. • Capture host fonts (V4R2): Specifies whether the printer should capture host downloaded fonts. See 4.10, “Font capturing” on page 108, for detailed information on font capturing. • Cut sheet emullation (V4R2): This parameter is for continuous forms printers. It specifies to what degree PSF/400 will do size checking of the document when using Cut Sheet Emulation. To continue, press the Page Down key. The display shown in Figure 184 appears. Figure 184. 
Create PSF Configuration object V3R7 and later (Part 3 of 3) Leave the default parameter values, and press the Enter key to create the PSF configuration object. 11.1.3 TCP/IP BOOTP service for V4R1 and later Bootstrap Protocol (BOOTP) provides a dynamic method for associating workstations with servers and assigning IP addresses. The BOOTP server is used to configure and provide support for the I-Data 7913 LAN attachment. This attachment can be used to connect twinax or coax IPDS printers to the AS/400 system. Figure 185 on page 238 shows the Add BOOTP Table Entry display. Create PSF Configuration (CRTPSFCFG) Type choices, press Enter. PSF defined option . . . . . . . *NONE *NONE + for more values Replace . . . . . . . . . . . . *YES *YES, *NO Authority . . . . . . . . . . . *LIBCRTAUT Name, *LIBCRTAUT, *CHANGE... Bottom F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys 238 IBM AS/400 Printing V Figure 185. Add BOOTP Table Entry display The parameters are explained here: • Client host name: The name of the client host system. • MAC address: The physical network address of the hardware that the client uses to access the network. • IP address: The Internet Protocol (IP) address defined for the client. • Hardware type: The type of network connection hardware the client is using to access the network. Valid values for hardware type are: – 1 for Ethernet – 6 for Token-Ring or IEEE Ethernet (802.3) • Gateway IP address: The gateway IP address of the network on which the client is loaded. • Subnet mask: The subnet mask of the network on which the client is loaded. 11.2 Configuring LAN-attached ASCII printers ASCII printers can be attached directly to the LAN (Token-Ring or Ethernet) using the following connection methods: • PJL drivers *IBMPJLDRV or *HPPJLDRV (this support is available on OS/400 V3R7 and later releases) • SNMP drivers 11.2.1 Configuring LAN-attached ASCII printers using LexLink The following configuration example is for V3R7 and later. For prior releases (V3R1, V3R2, and V3R6), refer to Chapter 1 in AS/400 Printing IV, GG24-4389. If you migrate from V3R1, V3R2, or V3R6 to V3R7 and later, we recommend that you delete the device descriptions for any ASCII printer LAN-attached using the LexLink protocol, and then re-create them (see Figure 186). Add BOOTP Table Entry System: SYSTEM05 Network device: Client host name . . . prt7913 MAC address . . . . . . 098390907747A IP address . . . . . . 99.99.99.99 Hardware type . . . . . 6 Network routing: Gateway IP address . . 99.99.99.99 Subnet mask . . . . . . 99.999.99.99 Boot: Type . . . . . . . . . File name . . . . . . . File path . . . . . . . F3=Exit F12=Cancel Chapter 11. Configuring LAN-attached printers 239 To create the device description for your printer, follow these steps: 1. Type the Create Device description Printer (CRTDEVPRT) command on the command line, and press F4 (Prompt). Figure 186. CRTDEVPRT for LAN-attached ASCII printer using LexLink (Part 1 of 3) 2. Enter the following parameter values: • Device description: The name of your printer (in this example, MYPRT) • Device class: *LAN • Device type: 3812 • Device model: 1 • LAN attachment: *LEXLINK • LAN remote adapter address: Specifies the 12-character hexadecimal LAN address of the ASCII printer. Note: If an internal INA card is used, display the address using the printer's operator panel. For a MarkNet XLe, the address is printed on the back side of the device.
• Adapter type: Specify *INTERNAL if an internal INA card is used or *EXTERNAL if a MarkNet XLe is used. • Port number: For the MarkNet XLe, use the following values: – 0 for the serial port – 1 for parallel port 1 – 2 for parallel port 2 Note: This parameter does not appear if the adapter type is *INTERNAL. • Online at IPL: *YES • Font identifier: 11 (or another font ID used as the default font) Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > MYPRT Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 1 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *LEXLINK *LEXLINK, *IP, *USRDFN Switched line list . . . . . . . Name + for more values LAN remote adapter address . . . 10005A1095A2 000000000001-FFFFFFFFFFFE Adapter type . . . . . . . . . . > *EXTERNAL *INTERNAL, *EXTERNAL Adapter connection type . . . . *PARALLEL *PARALLEL, *SERIAL Port number . . . . . . . . . . 1 0-65535 Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 240 IBM AS/400 Printing V • Form feed: Specifies the form feed attachment used for this printer. Enter *AUTOCUT for a page printer or *CONT for a continuous forms printer (in this example, *AUTOCUT). Figure 187. CRTDEVPRT for LAN-attached ASCII printer using LexLink (Part 2 of 3) 3. On the display shown in Figure 187, enter the following parameter values: • Activation timer: *NOMAX Note: If only one AS/400 system uses the printer, leave the default value (170 seconds). If more than one system shares the printer, set the value to *NOMAX, which causes the writer to wait to establish a connection. • Inactivity timer: Specifies the amount of time the printer writer keeps a lock on the device before releasing it (in this example, *SEC15). Note: If only one system is using the printer, specify *NOMAX; there is no need to release the printer for another system. • Host print transform: *YES or *NO, but normally *YES since the spooled files from the AS/400 system must be transformed from EBCDIC to ASCII. • Manufacturer type, model: Enter a value according to your printer type (in this example, *IBM4312). • Paper source 1: Enter your default paper format (in this example, *A4). • Paper source 2: Enter your default paper format (in this example, *A4). • Envelope source: Enter your default envelope format (in this example, *C5). You can leave the default parameter values for the other parameters. To continue, press the Page Down key. The display shown in Figure 188 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Printer error message . . . . . *INQ *INQ, *INFO Message queue . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Activation timer . . . . . . . . *NOMAX 1-2550, *NOMAX Inactivity timer . . . . . . . . *SEC15 1-30, *ATTACH, *NOMAX... Host print transform . . . . . . *YES *NO, *YES Manufacturer type and model . . *IBM4312 Paper source 1 . . . . . . . . . *A4 *MFRTYPMDL, *LETTER...
Paper source 2 . . . . . . . . . *A4 *MFRTYPMDL, *LETTER... Envelope source . . . . . . . . *B5 *MFRTYPMDL, *MONARCH... ASCII code page 899 support . . *NO *NO, *YES More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 241 Figure 188. CRTDEVPRT for LAN-attached ASCII printer using LexLink (Part 3 of 3) 4. Enter a text description for your printer configuration object. You can leave the default parameter values for the other parameters. Then press the Enter key to create the device description. 11.2.2 Configuring LAN-attached ASCII printers using PJL drivers To be LAN attached with the PJL driver, your printer must support Printer Job Language (PJL) and PCL. See 12.1.5, “Print Job Language (PJL) support” on page 255, for more information. Before starting the configuration, check for the following PTFs by using the Display Program Temporary Fix (DSPPTF) command: • V3R7: SF43497, SF44339, and SF45336 • V4R1 and V4R2: Part of the base code Note: The PJL drivers are not supported on V3R2. To create the device description for your printer, follow these steps: 1. Type the Create Device description Printer (CRTDEVPRT) command on any command line, and press F4 (Prompt). The display shown in Figure 189 on page 242 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Character identifier: Graphic character set . . . . *SYSVAL 1-32767, *SYSVAL Code page . . . . . . . . . . 1-32767 User-defined options . . . . . . *NONE Name, *NONE + for more values User-defined object: Object . . . . . . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . Name, *LIBL, *CURLIB Object type . . . . . . . . . *DTAARA, *DTAQ, *FILE... Data transform program . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB User-defined driver program . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Text 'description' . . . . . . . Device description MYPRT(NP12) Bottom 242 IBM AS/400 Printing V Figure 189. CRTDEVPRT for the LAN-attached ASCII printer using the PJL driver (Part 1 of 7) 2. On this display, enter the following parameter values: • Device description: The name of your printer (in this example, NPLAN) • Device class: *LAN • Device type: 3812 • Device model: 1 Then, press the Enter key to continue. The display shown in Figure 190 appears. Figure 190. CRTDEVPRT for LAN-attached ASCII printer using PJL driver (Part 2 of 7) 3. On this display, set the LAN attachment parameter value to *IP. To continue, press the Enter key. The display shown in Figure 191 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . NPLAN Name Device class . . . . . . . . . . *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . 1 0, 1, 2, 3, 4, 10, 13, 301... Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > NPLAN Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 1 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . 
*IP *LEXLINK, *IP, *USRDFN Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 243 Figure 191. CRTDEVPRT for the LAN-attached ASCII printer using the PJL driver (Part 3 of 7) 4. On this display, enter the following parameter values: • Port number: Specify 2501 for IBM Network printers (IBM 4312, 4317, and 4324) or 9100 for all HP, Lexmark, and most IBM printers. Note: For more information on port number, see 12.1.4, “Port number” on page 254. • Online at IPL: *YES • Font identifier: 11 (or another font ID used as the default font) • Form feed: Specifies the form feed attachment used for this printer. Enter *AUTOCUT for a page printer or *CONT for a continuous forms printer (in this example, *AUTOCUT). Leave the default values for the other parameters and press the Enter key to continue. The display shown in Figure 192 on page 244 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > NPLAN Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 1 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Port number . . . . . . . . . . 2501 0-65535 Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 244 IBM AS/400 Printing V Figure 192. CRTDEVPRT for the LAN-attached ASCII printer using the PJL driver (Part 4 of 7) You can leave the default value *INQ for the Printer error message parameter. To continue, press the Page Down key. The display shown in Figure 193 appears. Figure 193. CRTDEVPRT for the LAN-Attached ASCII Printer using the PJL driver (Part 5 of 7) 5. On this display, enter the following parameter values: • Activation timer: 170 • Inactivity timer: Specifies the amount of time the printer writer keeps a lock on the device before releasing it (in this example, *SEC15). Note: If only one system is using the printer, specify *NOMAX, no need to release the printer for another system. You can leave the default values for the other parameters. To continue, press the Enter key. The display shown in Figure 194 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > NPLAN Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 1 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Port number . . . . . . . . . . 2501 0-65535 Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . 
Name, *LIBL, *CURLIB Printer error message . . . . . *INQ *INQ, *INFO More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Message queue . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Activation timer . . . . . . . . 170 1-2550, *NOMAX Inactivity timer . . . . . . . . *SEC15 1-30, *ATTACH, *NOMAX Host print transform . . . . . . *YES *NO, *YES Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 245 Figure 194. CRTDEVPRT for the LAN-attached ASCII printer using the PJL driver (Part 6 of 7) 6. On this display, enter the following parameter values: • Manufacturer type, model: Enter a value according your printer type (in this example, *IBM4317). • Paper source 1: Enter your default paper format (in this example, *A4). • Paper source 2: Enter your default paper format (in this example, *A4). • Envelope source: Enter your default envelope format (in this example, *C5). You can leave the default parameter values for the other parameters. To continue, press the Page Down key. The display shown in Figure 195 on page 246 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Message queue . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Activation timer . . . . . . . . 170 1-2550, *NOMAX Inactivity timer . . . . . . . . *SEC15 1-30, *ATTACH, *NOMAX... Inactivity timer . . . . . . . . *ATTACH 1-30, *ATTACH, *NOMAX... Type of parity . . . . . . . . . > *NONE *TYPE, *EVEN, *ODD, *NONE... Host print transform . . . . . . *YES *NO, *YES Manufacturer type and model . . *IBM4317 Paper source 1 . . . . . . . . . *A4 *MFRTYPMDL, *LETTER... Paper source 2 . . . . . . . . . *A4 *MFRTYPMDL, *LETTER... Envelope source . . . . . . . . *C5 *MFRTYPMDL, *MONARCH... ASCII code page 899 support . . *NO *NO, *YES Character identifier: Graphic character set . . . . *SYSVAL 1-32767, *SYSVAL Code page . . . . . . . . . . 1-32767 More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 246 IBM AS/400 Printing V Figure 195. CRTDEVPRT for the LAN-attached ASCII printer using the PJL driver (Part 7 of 7) 7. On this display, enter the following parameter values: • Remote location: The IP address of your printer (in this example, "9.99.80.145"). • System driver program: *IBMPJLDRV Note: For which drivers to use, depending on the target printer, see 12.1.5, “Print Job Language (PJL) support” on page 255. • Text 'description': A description for your printer configuration object. You can leave the default parameter values for the other parameters. 8. Press the Enter key to create the device description. You receive the message Description for device NPLAN created. If you have any problems after the configuration, see 12.1, “Communication, connection, and configuration problems” on page 253, for detailed information. 11.2.3 Configuring LAN-attached ASCII printers using SNMP drivers With OS/400 V4R5, a new PCL driver, the SNMP driver, is added. Simple Network Management Protocol (SNMP) is a standard TCP/IP network protocol. The SNMP print driver provides the functionality of the PJL driver but does not require the target printer to support PJL commands. 
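For reference, the PJL driver configuration built up in 11.2.2 corresponds roughly to the single command shown below. It is a sketch only; keyword names such as LANATTACH, SYSDRVPGM, and MFRTYPMDL are assumptions to be confirmed by prompting CRTDEVPRT with F4, and the address is a placeholder. For the SNMP driver described in this section, the main differences are the driver program *IBMSNMPDRV and, for some printers, the *IBMSHRCNN user-defined option:
CRTDEVPRT DEVD(NPLAN) DEVCLS(*LAN) TYPE(3812) MODEL(1)
          LANATTACH(*IP) PORT(2501) ONLINE(*YES) FONT(11)
          FORMFEED(*AUTOCUT) ACTTMR(170) INACTTMR(*SEC15)
          TRANSFORM(*YES) MFRTYPMDL(*IBM4317)
          RMTLOCNAME('9.99.80.145') SYSDRVPGM(*IBMPJLDRV)
          TEXT('Device description for NPLAN printer')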
To use the SNMP print driver, the following rules apply: • For the SNMP print driver to work with a specific printer, the printer must support the industry-standard Host Resource Management Information Base (RFC 1514). We highly recommend (but it is not required) that the printer also support the Printer Management Information Base (RFC 1759). • If the printer is connected to a network adapter, the adapter must also be compatible with RFC 1514. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Remote location: Name and address . . . . . . . 9.99.80.145 User-defined options . . . . . . *NONE Name, *NONE + for more values User-defined object: Object . . . . . . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . Name, *LIBL, *CURLIB Object type . . . . . . . . . *DTAARA. *DTAQ, *FILE... Data transform program . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURRENT System driver program . . . . . *IBMPJLDRV Text 'description' . . . . . . . Device description for NPLAN printer Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 247 • If the printer is connected to an external network adapter that has more than one port, the printer should be connected to the first parallel port, and there should be no other SNMP-capable devices attached to the adapter. • The printer and any adapter connected with the SNMP print driver must have set the community to public. This is normally the default setting. Read-only access to the public community is sufficient. Note: Additional information on the SNMP print driver can be found in APAR II03291. Support for the SNMP print driver with IBM Infoprint 21 is also available at OS/400 V4R4 and V4R3. To create the device description for your printer, follow this process: 1. Type the Create Device description Printer (CRTDEVPRT) command on any command line, and press F4 (Prompt). The display shown in Figure 196 appears. Figure 196. CRTDEVPRT for the LAN-attached ASCII printer using the SNMP driver (Part 1 of 7) 2. On this display, enter the following parameter values: • Device description: The name of your printer (in this example, NPLAN) • Device class: *LAN • Device type: 3812 • Device model: 1 Then, press the Enter key to continue. The display shown in Figure 197 on page 248 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . NPLAN Name Device class . . . . . . . . . . *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . 1 0, 1, 2, 3, 4, 10, 13, 301... Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 248 IBM AS/400 Printing V Figure 197. CRTDEVPRT for the LAN-attached ASCII printer using the SNMP driver (Part 2 of 7) 3. On this display, set the LAN attachment parameter value to *IP. To continue, press the Enter key. The display shown in Figure 198 appears. Figure 198. CRTDEVPRT for the LAN-attached ASCII printer using the SNMP driver (Part 3 of 7) 4. On this display, enter the following parameter values: • Port number: Specify 2501 for IBM Network printers (IBM 4312, 4317, and 4324), or 9100 for all HP, Lexmark, and most IBM printers. Note: For more information on port number, see 12.1.4, “Port number” on page 254. 
• Online at IPL: *YES • Font identifier: 11 (or another font ID used as the default font) • Form feed: Specifies the form feed attachment used for this printer. Enter *AUTOCUT for a page printer or *CONT for a continuous forms printer (in this example, *AUTOCUT). Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > NPLAN Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 1 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > NPLAN Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 1 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Port number . . . . . . . . . . 2501 0-65535 Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 249 Leave the default values for the other parameters, and press the Enter key to continue. The display shown in Figure 192 appears. Figure 199. CRTDEVPRT for the LAN-attached ASCII printer using the SNMP driver (Part 4 of 7) You can leave the default value *INQ for the printer error message parameter. To continue, press the Page Down key. The display shown in Figure 200 appears. Figure 200. CRTDEVPRT for the LAN-Attached ASCII printer using the SNMP driver (Part 5 of 7) 5. On this display, enter the following parameter values: • Activation timer: *NOMAX Note: If only one AS/400 system uses the printer, use the default value (170 seconds). If more than one system shares the printer, set the value to *NOMAX, which causes the AS/400 system to wait to establish a connection. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Device description . . . . . . . > NPLAN Name Device class . . . . . . . . . . > *LAN *LCL, *RMT, *VRT, *SNPT, *LAN Device type . . . . . . . . . . > 3812 3287, 3812, 4019, 4201... Device model . . . . . . . . . . > 1 0, 1, 2, 3, 4, 10, 13, 301... LAN attachment . . . . . . . . . *IP *LEXLINK, *IP, *USRDFN Port number . . . . . . . . . . 2501 0-65535 Online at IPL . . . . . . . . . *YES *YES, *NO Font: Identifier . . . . . . . . . . > 11 3, 5, 11, 12, 13, 18, 19... Point size . . . . . . . . . . *NONE 000.1-999.9, *NONE Form feed . . . . . . . . . . . *AUTOCUT *TYPE, *CONT, *CUT, *AUTOCUT Separator drawer . . . . . . . . *FILE 1-255, *FILE Separator program . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURLIB Printer error message . . . . . *INQ *INQ, *INFO More... 
F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Message queue . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Activation timer . . . . . . . . *NOMAX 1-2550, *NOMAX Inactivity timer . . . . . . . . *SEC15 1-30, *ATTACH, *NOMAX Host print transform . . . . . . *YES *NO, *YES Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 250 IBM AS/400 Printing V • Inactivity timer: Specifies the amount of time the printer writer keeps a lock on the device before releasing it (in this example, *SEC15). Note: If only one system is using the printer, specify *NOMAX. There is no need to release the printer for another system. • Host print transform: *YES or *NO, but normally *YES as the spooled files from the AS/400 system must be transformed from EBCDIC to ASCII. You can leave the default values for the other parameters. To continue, press the Enter key. The display shown in Figure 201 appears. Figure 201. CRTDEVPRT for the LAN-attached ASCII printer using the SNMP driver (Part 6 of 7) 6. On this display, enter the following parameter values: • Manufacturer type, model: Enter a value according your printer type (in this example, *IBM4317). • Paper source 1: Enter your default paper format (in this example, *A4). • Paper source 2: Enter your default paper format (in this example, *A4). • Envelope source: Enter your default envelope format (in this example, *C5). You can leave the default parameter values for the other parameters. To continue, press the Page Down key. The display shown in Figure 202 appears. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Message queue . . . . . . . . . QSYSOPR Name, QSYSOPR Library . . . . . . . . . . . *LIBL Name, *LIBL, *CURLIB Activation timer . . . . . . . . 170 1-2550, *NOMAX Inactivity timer . . . . . . . . *SEC15 1-30, *ATTACH, *NOMAX... Inactivity timer . . . . . . . . *ATTACH 1-30, *ATTACH, *NOMAX... Type of parity . . . . . . . . . > *NONE *TYPE, *EVEN, *ODD, *NONE... Host print transform . . . . . . *YES *NO, *YES Manufacturer type and model . . *IBM4317 Paper source 1 . . . . . . . . . *A4 *MFRTYPMDL, *LETTER... Paper source 2 . . . . . . . . . *A4 *MFRTYPMDL, *LETTER... Envelope source . . . . . . . . *C5 *MFRTYPMDL, *MONARCH... ASCII code page 899 support . . *NO *NO, *YES Character identifier: Graphic character set . . . . *SYSVAL 1-32767, *SYSVAL Code page . . . . . . . . . . 1-32767 More... F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys Chapter 11. Configuring LAN-attached printers 251 Figure 202. CRTDEVPRT for the LAN-attached ASCII printer using the SNMP driver (Part 7 of 7) 7. On this display, enter the following parameter values: • Remote location: The IP address of your printer (in this example, 9.99.80.145). • User-defined options: *IBMSHRCNN causes the SNMP print driver to open and close the data port on the printer for every copy of every spooled file. This enables multiple writers and systems to share the printer. If this option is specified, the Inactivity Time is ignored. This option must be specified for the IBM Infoprint 21 printer. • System driver program: *IBMSNMPDRV Note: For which drivers to use, depending on the target printer, see 12.1.5, “Print Job Language (PJL) support” on page 255. 
• Text 'description': A description for your printer configuration object. You can leave the default parameter values for the other parameters. 8. Then, press the Enter key to create the device description. You receive the message Description for device NPLAN created. If you have any problems after the configuration, see 12.1, “Communication, connection, and configuration problems” on page 253, for detailed information. Create Device Desc (Printer) (CRTDEVPRT) Type choices, press Enter. Remote location: Name and address . . . . . . . 9.99.80.145 User-defined options . . . . . . *IBMSHRCNN Name, *NONE + for more values User-defined object: Object . . . . . . . . . . . . *NONE Name, *NONE Library . . . . . . . . . . Name, *LIBL, *CURLIB Object type . . . . . . . . . *DTAARA. *DTAQ, *FILE... Data transform program . . . . . *NONE Name, *NONE Library . . . . . . . . . . . Name, *LIBL, *CURRENT System driver program . . . . . *IBMSNMPDRV Text 'description' . . . . . . . Device description for NPLAN printer Bottom F3=Exit F4=Prompt F5=Refresh F10=Additional parameters F12=Cancel F13=How to use this display F24=More keys 252 IBM AS/400 Printing V © Copyright IBM Corp. 2000 253 Chapter 12. Problem determination techniques This chapter discusses problems related to installing and driving printers. It also presents methods and techniques that may be used to isolate the source of the problems. It points to various documentation sources where additional information can be found. It is not, however, a substitute for problem-determination methods described in the system, device manuals, or online help. No performance problems are discussed in this chapter. However, you can find details of this in Appendix A, “PSF/400 performance factors” on page 279. For detailed information on printer configuration, see Chapter 11, “Configuring LAN-attached printers” on page 223. If you are using a remote output queue, see Chapter 8, “Remote system printing” on page 171. 12.1 Communication, connection, and configuration problems This topic covers printer problems related to communication, connection, and configuration. For the different printer attachment methods, see 1.4, “AS/400 printer attachment methods” on page 15. 12.1.1 Setting up a TCP/IP network on the AS/400 system To drive a TCP/IP attached printer, the TCP/IP subsystem must be properly configured and has to be up and running. Use the following steps to set up a TCP/IP network on the AS/400 system: 1. Create a Token-Ring or Ethernet line description using the CRTLINTRN or CRTLINETH command. 2. Vary on the line description using the VRYCFG command. 3. Add a TCP/IP interface using the ADDTCPIFC command. 4. Start TCP/IP interface using the STRTCPIFC command. 5. Add a TCP/IP route definition, if necessary, using the ADDTCPRTE command. 6. Start TCP/IP with the STRTCP command. If your TCP/IP network is already implemented, use the WRKACTJOB command to check if QTCPIP is running. If it is not running, use the STRTCP command to start it. 12.1.2 SSAP values in the line description Use the DSPLIND command to check the Source Service Access Points (SSAP) values in the line description according to the type of communication used. 
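For example, the network setup described in 12.1.1 and the line description check can be entered with commands like the following. This is a sketch only; the line name, resource name, IP address, and subnet mask are placeholders for your own values:
CRTLINETH LIND(ETHLINE) RSRCNAME(CMN01)
VRYCFG CFGOBJ(ETHLINE) CFGTYPE(*LIN) STATUS(*ON)
ADDTCPIFC INTNETADR('9.99.80.1') LIND(ETHLINE) SUBNETMASK('255.255.255.0')
STRTCPIFC INTNETADR('9.99.80.1')
STRTCP
DSPLIND LIND(ETHLINE)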
Keep these points in mind: • You must have the following SSAP entries when attaching IPDS printers or ASCII printers using the PJL drivers (Version 3.0 Release 7.0 or later), or using remote system printing with a connection type of *IP: SSAP 12 *MAXFRAME *NONSNA SSAP AA *MAXFRAME *NONSNA 254 IBM AS/400 Printing V • The line description must contain the following SSAP entries when attaching ASCII printers using the Lexlink protocol (using a Lexmark Internal Network Adapter or a MarkNet XLe external LAN adapter): SSAP 12 *MAXFRAME *NONSNA SSAP 16 *MAXFRAME *NONSNA SSAP 1A *MAXFRAME *NONSNA 12.1.3 Pinging the TCP/IP address When the configuration is completed, test the TCP/IP connection using the PING command on the AS/400 system with the IP address of your printer: PING '123.1.2.3' • If the PING is successful, vary on the printer: VRYCFG CFGOBJ(Printer_dev) CFGTYPE(*DEV) STATUS(*ON) Then start the print writer (if not using a remote output queue): STRPRTWTR DEV(Printer_dev) If you are using a remote output queue, enter: STRRMTWTR DEV(Printer_dev) Print a job as a test (for example, a print screen). If this fails to print, continue with the following section. • If the PING fails, perform these actions: a. Verify the configurations of the AS/400 system, TCP/IP subsystem, the printer, and any intervening devices such as routers. Can you PING any of these devices? Then contact your LAN coordinator for assistance. b. Verify that the AS/400 LAN adapter card and printer hardware are fully operational. Use the WRKTCPSTS command to access a menu of useful commands, including the option to check whether the TCP/IP interface with your LAN adapter card is active (Work with TCP/IP interface status). c. Check the IP address of the printer LAN card (printer setup) and the one specified in the AS/400 configuration. 12.1.4 Port number The port number is important for connecting the printer. The value varies according to the printer type. The TCP/IP port parameter is in the PSF configuration object in Version 3.0 Release 2.0 and in the device description in Version 3.0 Release 7.0 and later. Note: The port number is a parameter of the WRKAFP2 command in Version 3.0 Release 1.0 and Version 3.0 Release 6.0. The following port numbers are used according to the printer type and the attachment method: 5001 IBM IPDS printer on the LAN (TCP/IP) 2501 IBM Network Printers (4312, 4317, 4324) and IBM Infoprint printers (4320, 4322, 4332) in ASCII mode, network-attached, and using the PJL print driver Chapter 12. Problem determination techniques 255 9100 IBM Infoprint Color 8, older IBM laser printers (for example, 4039), all HP, all Lexmark in ASCII mode, network-attached, and using the PJL driver Note: If the printer LAN attachment card (internal or external) has more than one entry, the port number can be 9100, 9101, or 9102. If none of these values is successful, consult your printer's manufacturer to determine if your printer has a dedicated port that accepts PCL/PJL commands. 12.1.5 Print Job Language (PJL) support The following printers support PCL and PJL and can be TCP/IP LAN-attached using the PJL drivers (Version 3.0 Release 7.0 and later): • IBM 4039 Plus and IBM Network Printer 12, 17, and 24 • Lexmark Optra family • HP LaserJet IIISi • HP LaserJet 4, 5, and 6 family Note: There is no PJL support on early IBM 4039 models nor on the HP LaserJet III. If PJL is not supported, the message CPD337F is returned. If in doubt, consult your printer's manufacturer to determine if your printer supports PCL and PJL. 
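Note: When trying the different port values listed in 12.1.4, the device description can be changed without re-creating it. A sketch, assuming the device is named NPLAN and is varied off first (the PORT keyword on CHGDEVPRT should be confirmed with F4 prompting on your release):
VRYCFG CFGOBJ(NPLAN) CFGTYPE(*DEV) STATUS(*OFF)
CHGDEVPRT DEVD(NPLAN) PORT(9100)
VRYCFG CFGOBJ(NPLAN) CFGTYPE(*DEV) STATUS(*ON)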
Use the *IBMPJLDRV driver for all IBM printers (for example, IBM Network Printer 12 and 17 and Infoprint 20 and 32). Note that IBM Infoprint 12 does not support PJL. Use the *HPPJLDRV driver for all HP and HP compatible printers. 12.1.6 Message PQT3603 The message PQT3603 is issued for a connection problem with a LAN-attached IPDS printer configured as AFP(*YES). The message PQT3603 includes the name of the printer, the remote location name, and an error code defining the failed condition. For an example, see Figure 203. Figure 203. Message PQT3603 Depending on the error code, perform the following recovery actions: • 10: The specified remote location name (RMTLOCNAME) was rejected. Specify a correct remote location name. This is either the IP address of the printer or its corresponding name in a host table entry list. Verify that the IP address at the device and the remote location name in the PSF configuration object (Version 3.0 Release 2.0) or in the printer device description (Version 3.0 Release 7.0 and later) are the same. If you are using a RMTLOCNAME name, check the IP address in the TCP/IP host table. • 15: The activation timer (ACTTMR) value configured for the device expired before the device was available. Message . . . . : Connection with device PRT1 cannot be established. Cause . . . . . : A session cannot be established with the device at RMTLOCNAME PRT1B02, using PORT 5001. The error code is 10. 256 IBM AS/400 Printing V Increase the value for ACTTMR in the PSF configuration object (Version 3.0 Release 2.0) or in the printer device description (Version 3.0 Release 7.0 and later), or determine if your network has a problem. • 22: The device did not respond to a connection request. The device may not be able to accept a connection request because: – Another writer (possibly on another system) is sending it data. – It is in the process of ending a connection with another writer. – It is in an error condition on another system. – The device is configured on another system where sharing of the device has not been configured. If the device has a connection with another writer or the device is in the process of ending a connection with another writer, this is a normal error code. Otherwise, verify that the port number (PORT) specified in the PSF configuration object (Version 3.0 Release 2.0) or in the printer device description (Version 3.0 Release 7.0 and higher) matches the port number specified at the device. If these values match, you may need to reset the device before starting the writer. Also refer to the information in the following point on error codes 20-39. If the problem continues, report it using the ANZPRB command. • 20-39: A communications failure occurred. Verify configuration values and check for problems in your network. Consider increasing the value specified for RETRY in the PSF configuration object used. If a PSF configuration object is not in use, create one (use the CRTPSFCFG command). In Version 3.0 Release 2.0, the PSF configuration object must have the same name as the printer. In Version 3.0 Release 7.0 and later, specify the name of the PSF configuration object in the printer device description using the USRDFNOBJ parameter. After correcting the problem, start the printer writer to begin processing again. If the problem continues, report it using the ANZPRB command. • 41-59: An internal failure occurred. These error codes (and especially 46) occur with: – A hardware problem on the printer. – Down-level printer microcode levels. 
Install the latest one for ETH or TR, CTL, and IPDS. Print the printer configuration page to see the level of the installed microcode. – Check any routers and their definitions, any switch box or hubs, and cabling. Note: If no error code is returned in the message PQT3603, perform the same actions as for error code 41-59. After correcting the problem, start the printer writer to begin processing again. If the problem continues, report it using the ANZPRB command. Chapter 12. Problem determination techniques 257 12.1.7 Configuring LAN-attached IPDS printers The configuration of IPDS LAN attached printers to an AS/400 system has changed with different versions and releases: • Configuring LAN-attached IPDS printers on Version 3.0 Release 1.0 On Version 3.0 Release 1.0, you need a device description (CRTDEVPRT) and a data area created by the WRKAFP2 command. You must first create the WRKAFP2 command. The instructions to create and use it are in the cover letter of PTF SF29961. The source code for the command is also included in this cover letter. The name of the data area must be the same as the printer name. • Configuring LAN-attached IPDS printers on Version 3.0 Release 2.0 On Version 3.0 Release 2.0, you need a device description (CRTDEVPRT) and a PSF configuration object (CRTPSFCFG). The name of the PSF configuration must be the same as the name of the printer. Note: If you migrate from Version 3.0 Release 1.0 to Version 3.0 Release 2.0, during the first Start Printer Writer (STRPRTWTR), a PSF configuration object is automatically created by the system and includes the WRKAFP2 data area values (used in V3R1). This PSF configuration object is placed in the library QGPL and has the same name as the printer device description. • Configuring LAN-attached IPDS printers on Version 3.0 Release 6.0 On Version 3.0 Release 6.0, you need a device description (CRTDEVPRT) and a data area created by the WRKAFP2 command. You must first create the WRKAFP2 command. The instructions to create and use it are in the cover letter of PTF SF31461. The source code for the command is also included in this cover letter. The name of the data area must be the same as the printer name. • Configuring LAN-attached IPDS printers on Version 3.0 Release 7.0 and later On Version 3.0 Release 7.0 and later, you need a device description (CRTDEVPRT) and a PSF configuration object (CRTPSFCFG). The name of the PSF configuration can be any name, but this object must be referenced in the USRDFNOBJ parameter of the device description. The RMTLOCNAME, PORT, and ACTTMR parameters are now part of the printer device description. However, they still appear in the CRTPSFCFG Version 3.0 Release 7.0 and Version 4.0 Release 1.0, but are not used here. Take care that you enter the values for these parameters in the correct place (that is, the device description). Note: If you migrate from earlier releases of OS/400 to Version 3.0 Release 7.0 or later, we recommend that you: – Delete existing printer device descriptions. – Delete existing data areas created by the WRKAFP2 command (V3R1 and V3R6). Re-create new printer device descriptions and new PSF configuration objects. For detailed information on printer configuration, see 11.1, “Configuring LAN-attached IPDS printers” on page 223. 258 IBM AS/400 Printing V 12.1.8 Configuring for remote system printing Some printers are unable to accept host printing commands directly, but must have them interpreted by another process. 
The line printer daemon (LPD) is one such common process, and is frequently used when printing to an ASCII LAN-attached printer using some kind of LAN adapter (for example, a JetDirect card or external box). The daemon runs inside the card and is regarded as another system as far as OS/400 is concerned. To print to such a remote “system”, you need to create a remote output queue using the normal Create Output Queue (CRTOUTQ) command. The most common problems result from the wrong print queue name. See 12.1.9, “Remote printer queue names” on page 258, for details. Also check for the correct destination options to avoid timeout problems or the wrong number of copies. This is covered in 8.2.2, “Destination options” on page 176. If host print transform is used and the page size parameter in your printer file does not match a page size entry in the WSCST table, the letter format is used as the default format. In this case, the printer may display the message “Load Letter”. See 8.2.4, “‘Load Letter’ message on the printer” on page 179, for workarounds to this problem.
Note: If possible, attach your remote ASCII printers using the PJL drivers (Version 3.0 Release 7.0 and later) instead of a remote output queue. This provides greater functionality. See 11.2.2, “Configuring LAN-attached ASCII printers using PJL drivers” on page 241, for detailed information.
12.1.9 Remote printer queue names
If you are using a remote output queue with a connection type *IP and a destination type *OTHER to attach an ASCII printer using TCP/IP, you must specify the name of the remote printer queue on the target system. This name varies depending on the device supplying the LPD function.
Note: This also applies if you use the SNDTCPSPLF command.
Table 22 shows some of the more frequently-encountered printer queue names.
Table 22. Internal print queue names for selected print devices (interface used - queue name)
• HP JetDirect Card (internal): 'text' for unformatted output, 'raw' for formatted output
• HP JetDirect Server (external) (3 ports - 1 IP address): 'text1' or 'raw1' for port 1, 'text2' or 'raw2' for port 2, 'text3' or 'raw3' for port 3
• Integrated Network Option (IBM 4039, 3112, 3116, Lexmark OPTRA): 'pro0'
• Lexmark MarkNet XLe: '/prt1' for parallel port 1, '/prt2' for parallel port 2, '/prt9' or '/ser' for the serial port
• IBM Network Print Server: '/prt1' through '/prt8' - 8 logical ports
• IBM Network Printers (4312, 4317, 4324): PASS (or TEXT if PASS does not work)
• IBM Infoprint printers (4320, 4322, 4332): PASS (or TEXT if PASS does not work)
• IBM Infoprint 12: 'raw'
• IBM Infoprint Color 8: PASS (or TEXT if PASS does not work)
• IBM 3130: 'afccu2'
• Intel Netport XL: TEXT1 for parallel port 1, TEXT2 for parallel port 2
• Intel Netport Pro: LPTx_PASSTHRU or LPTx_TEXT, where x = port number
• UNIX/RISC system: the printer queue name on that system (case sensitive)
Note: You must use these names for a successful connection. They are hard-coded into the LPD daemons, unlike OS/400 where an output queue name may be (almost) anything you want. A sample CRTOUTQ command using one of these names follows.
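As a hedged sketch only (the output queue name, IP address, and manufacturer type and model are placeholders, and the available parameters vary by release; check the CRTOUTQ prompt on your system), a remote output queue for an IBM Network Printer 17 with an internal network card might be created as follows:
CRTOUTQ OUTQ(QGPL/NP17RMT) RMTSYS(*INTNETADR) INTNETADR('10.1.1.30')
        RMTPRTQ('PASS') CNNTYPE(*IP) DESTTYPE(*OTHER)
        TRANSFORM(*YES) MFRTYPMDL(*IBM4317) AUTOSTRWTR(1)
The RMTPRTQ value must be one of the hard-coded names listed in Table 22 for your device; the destination options discussed in 8.2.2 are entered with the DESTOPT parameter.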
12.2 Printer-writer-related problems
This topic relates to print writer problems. Normally, the job log may give you the necessary information to correct the problem (for example, prompting you to answer any unanswered messages). You can check the status of the writers using the WRKWTR command, the status of the spooled files with the WRKSPLF or WRKOUTQ command, and the status of the output queue with the WRKOUTQ command. The information provided by these commands is discussed in the following sections.
12.2.1 Print writer ends
If the print writer ends unexpectedly, a job log is sent to the QEZJOBLOG output queue (or two job logs for an AFP print writer). The reported messages can help you to find the problem. It may be as minor as someone switching off the printer (for example, to clear a paper jam).
To check that a printer called NP17 is powered on and varied on, enter:
WRKCFGSTS *DEV NP17
The display shown in Figure 204 appears.
Figure 204. Work with Configuration Status display
Work with Configuration Status SYS00005 11/14/97 15:36:33
Position to . . . . . Starting characters
Type options, press Enter.
1=Vary on 2=Vary off 5=Work with job 8=Work with description
9=Display mode status 13=Work with APPN status...
Opt Description Status -------------Job--------------
NP17 VARIED ON
If the printer status is VARIED OFF, use option 1 (Vary On). Use the Help key for an explanation of the different statuses that are possible.
If you still have a problem, the following reasons are some of the typical causes of the writer ending:
• Duplicate IP address
After configuring a LAN-attached printer, the PING test may be OK, but when you try to print, the print writer ends immediately. In this case, check for a duplicate IP address:
a. Disconnect the printer (at the printer end, for example, remove the LAN cable).
b. Ping the IP address of the printer: PING '123.1.2.3'
c. If the PING is successful, you have a duplicate address. Contact your LAN coordinator. If the PING is unsuccessful (as it should be), reconnect the printer and check the writer job log for any messages.
• Message queue full
The printer writer ends immediately after a STRPRTWTR command. No message or job log is available. This can happen when the message queue associated with the printer is full. When the message queue is full, even the normal start writer message cannot be written to the queue. Therefore, the writer ends. Use the DSPDEVD command to display the message queue name associated with the printer, and then use the WRKMSGQ command (change, view, and clear options available).
• Activation timer - Release timer
If you are sharing a printer with another system, the activation timer and the release timer can be the reason that the print writer ends. When sharing a printer, these two parameters must contain the following values:
– ACTTMR: Activation Timer (printer device description). This parameter should be set to *NOMAX. With this value, you can wait indefinitely until another system using the printer releases it.
– RLSTMR: Release Timer (PSF configuration object). This parameter should be set to *SEC15.
Note: If this value is *NOMAX, the first system accessing the printer does not release it, and any other systems cannot use it.
12.2.2 Spooled files remain in RDY status
Using the WRKWTR command, you can see that the writer is STR (Started), but the spooled file remains RDY (Ready) on your output queue with no printing. In this case, check the status of the output queue. Use the WRKOUTQ command, and check the status in the upper right corner of the Work with Output Queue display. The status must be RLS/WTR. See Figure 206 on page 263 for an example of this display.
• If the status is HLD, the queue is held and no writer is started to the queue. Use the RLSOUTQ command to release the output queue. You can now start a writer to the queue using the STRPRTWTR command.
• If the status is HLD/WTR, the queue is held and a writer is started to the queue. Use the RLSOUTQ command to release the output queue. • If the status is RLS, the queue is released, but no writer is started to the output queue. Start the writer using the STRPRTWTR command. If you have already done this, the writer is probably ending immediately. Refer to 12.2.1, “Print writer ends” on page 259. • If the status is RLS/WTR, the queue is released and a writer is started to the queue. Be patient! The status of the spooled file must change from RDY to WTR if the target printer is configured AFP(*NO), or from RDY to PND to WTR, and then to PRT if the target printer is configured AFP(*YES). The spooled file should then be printed. 12.2.3 Spooled file remains in PND status Using the WRKWTR command, you can see that the writer is in STR (Started) status, but the spooled file remains in PND (Pending) status in your output queue. Nothing is printed. In this case, the print driver job (PDJ) cannot establish a connection with the printer (it is waiting for an answer from the printer). Therefore, you need to complete these steps: 1. End the writer (see the next section). 2. Power off the printer. 3. Wait approximately 10 seconds (to avoid causing LAN problems). 4. Power on the printer. 5. Start the print writer again. 12.2.4 Ending the writer To end the writer immediately, enter the following command: ENDWTR WTR(printer_name) OPTION(*IMMED) This end of job forces a job log. If you forget the *IMMED option, you can issue the command again, but this time with the option. To end the writer abnormally (if the previous command does not work), enter: CALL QSPENDWA printer-name This is rarely needed. From a WRKOUTQ display, you can use option 9 (Work with printing status) to perform all the previous commands. However, using a guided, step-by-step process tells you exactly what to do next, instead of wondering whether to use WRKWTR, WRKSPLF, and so on. Learn to use this option. Tip 262 IBM AS/400 Printing V 12.2.5 Spooled file status Figure 205 shows the status for a spooled file from its creation up to its printing (or transmission to another system (remote system printing)). To check the status of a spooled file, use the WRKSPLF or the WRKOUTQ command. Figure 205 also shows the spooled file status when the target printer is configured AFP(*NO), AFP(*YES) with PSF/400, and if remote system printing is used. Figure 205. Spooled file status Figure 205 shows only some of the main statuses that are possible. A complete list of all spooled file statuses follows: OPN Open: The file has not been completely processed and is not ready to be selected by a writer. RDY Ready: The file is available to be written to an output device by a writer. DFR Deferred: The file has been deferred from printing. SND Sending: The file is being or has been sent to a remote system. CLO Closed: The file has been completely processed by a program, but SCHEDULE(*JOBEND) was specified and the job that produced the file has not yet finished. HLD Held: The file has been held. SAV Saved: The file has been written and then saved. This file will remain saved until it is released. PND Pending: The file is in the conversion phase, or pending to be printed. You can have more than one spooled file in PND status in an output queue. 
RDY or DFR RDY or DFR RDY or DFR PSF Applications Print Writer Data Stream Conversion Print Request Queue Print Driver Spool Spool Print Writer Print Writer Printer Printer AFP(*NO) Printer AFP(*YES) Remote System Output Queue Spool OPN WTR OPN PND WTR PRT OPN SND Chapter 12. Problem determination techniques 263 WTR Writer: This file is currently being produced by the writer on an output device. PRT Printing: The file has been sent to the printer, but print complete status has not yet been sent back to the system. MSGW Message Waiting: This file has a message that needs a reply or an action to be taken. The following status values with a * (asterisk) in front of them are displayed when an action is performed on the file as a result of selecting an option: *CHG Changed: This file was changed using option 2 (Change). *HLD Held: This file was held using option 3 (Hold). *RLS Released: This file was released using option 6 (Release). 12.2.6 Output queue status The status of the output queue can also tell you whether a writer is started to the queue. Use the WRKOUTQ command. The display shown in Figure 206 appears. Figure 206. Work with Output Queue display In the top right-hand corner, the Status field refers to the status of the output queue (RLS - Released) and the status of the print writer (WTR - Writing) in this example. The following list contains all of the output queue status. HLD Held: The queue is held. HLD/WTR Held/Writer: The queue is attached to a writer and is held. RLS/WTR Release/Writer: The queue is attached to a writer and is released. RLS Released: The queue is released, and no writer is attached. Work with Output Queue Queue: PRT01 Library: QUSRSYS Status: RLS/WTR Type options, press Enter. 1=Send 2=Change 3=Hold 4=Delete 5=Display 6=Release 7=Messages 8=Attributes 9=Work with printing status Opt File User User Data Sts Pages Copies Form Type P TESTOUTQ DBAS RDY 1 1 *STD MODEL JENNY RDY 1 1 *STD TESTIN LEGS HLD 1 1 *STD QSYSPRT DBAS HLD 1 1 *STD QSYSPRT SANDY HLD 1 1 *STD QSYSPRT SANDY HLD 346 1 *STD FAXPRT DEBBIE HLD 1 1 *STD Parameters for options 1, 2, 3 or command ===> F3=Exit F11=View 2 F12=Cancel F20=Writers F22=Printers F24=More keys 264 IBM AS/400 Printing V 12.2.7 AFCCU printers: Minimize delay when stopping and starting The AFCCU printers include Infoprint 60, Infoprint 62, Infoprint 2000, Infoprint 3000, and Infoprint 4000. They have a configuration option called “Clear Memory for Security”. This option can have a significant impact on the time required to start the printer after it has been stopped by PSF and the printer subsystem. To prevent unnecessary delay when starting and stopping AFCCU printers, set this option to “NO” unless you have extraordinary security requirements. “YES” requires the printer to zero out all print data storage when the printer is restarted. This is not required for normal security because pointers to the data are no longer active. This has been the standard for IPDS printers and, until AFCCU, has had little impact on performance. Now, with the large amount of storage in AFCCU printers, clearing can take several minutes, enough to make a noticeable difference to customers who start and stop their printers multiple times a day. Note: “YES” is the default setting for this option on all current AFCCU printers. There are plans to use “NO” as the setting for future printers. 12.2.8 QSTRUP execution during IPL This section contains references to the QSTRUP program that runs at IPL time. It is divided into two sections. 
The first section changes the message logging of the job log to increase job log information for diagnostic uses. The second section has example changes to the program pertaining to the spooling functions on the system. 12.2.8.1 Tracking the QSTRUP program at IPL Follow this process: 1. Analyze the problem. In this case, the writer is not starting during the startup routine at IPL. 2. Make a diagnosis. Check the QSTRUPJD job in the QEZJOBLOG output queue for messages relating to the device description not varied on or writer not starting. If the logging is not there or not complete enough, change the job description for the next IPL. Use the CHGJOBD QSTRUPJD command as follows: CHGJOBD JOBD(QSTRUPJD) LOG(4 0 *SECLVL) LOGCLPGM(*YES) Note: The job identifier in the QEZJOBLOG output queue is: job number/QPGPMR/QSTRUPJD. 12.2.8.2 Changing the QSTRUP program Follow this process: 1. On the OS/400 command line, type: DSPSYSVAL QSTRUPPGM This displays the name and library of the active startup program for the AS/400 system. It usually points to QSYS/QSTRUP. 2. On the OS/400 command line, type: RTVCLSRC PGM(QSYS/QSTRUP) SRCFILE(QGPL/QCLSRC) This retrieves the CL source of the startup program from step 1. 3. On the OS/400 command line, type: Chapter 12. Problem determination techniques 265 STRSEU SRCFILE(QGPL/QCLSRC) SRCMBR(QSTRUP) Edit the CL source that you extracted from the program. Look for QSYS/STRPRTWTR DEV(*ALL) in the source. This starts all the printers with the defaults. Insert the specific printer on a line just before the QSYS/STRPRTWTR command, for example: QSYS/STRPRTWTR DEV(printer_name) ALIGN(*FILE) Note: If the STRPRTWTR command is not being used, look for the QWCSWTRS program. This is an alternative approach to start writers. It checks to see if a device description is varied on before trying to start the writer. Review Informational APAR II09679 for details (this APAR can be downloaded as a PTF cover letter). This is a good solution for writers not starting at IPL. It loops through the device description 30 times to see if they are varied on. The STRPRTWTR command checks only once and then passes by. 4. On the OS/400 command line, type: CRTCLPGM PGM(QSYS/QSTRUP) SRCMBR(*PGM) Note: This writes over the system default QSTRUP program. If you do not want to overwrite it, proceed to the next step. 5. If you do not want to overwrite the default QSTRUP program, on the OS/400 command line, type the following command: CRTCLPGM PGM(library/QSTRUP) SRCMBR(*PGM) Note: A good choice for the library is QGPL. 6. Change the system value QSTRUPPGM to refer to the new program. On the OS/400 command line, type: CHGSYSVAL QSTRUPPGM VALUE('QSTRUP library') At the next IPL, the printers should be started correctly. 12.3 Where your print output goes The elements that control printing have a defined hierarchy. Figure 207 on page 266 shows that hierarchy. In the diagram, you can see that the system looks at the elements in this order: printer file, job description, user profile, workstation description, and system value. The system looks first for the output queue and print device in the printer file. It is important to know and remember the following conditions: • If the spooled parameter is set to *YES in the printer file, the output must go to an output queue. In this case, the first output queue name specified (according to the hierarchy) is used. • If the spooled parameter is set to *NO in the printer file, the output must go to a device. 
In this case, the first device specified (according to the hierarchy) is used. 266 IBM AS/400 Printing V Figure 207. Hierarchy of the elements controlling printing In the example shown in Figure 208, we assume that the SPOOL parameter is set to *YES. This means the system will search for an output queue. The first one found according to the hierarchy of the printing elements is PRT04 in the job description. PRT04 is used as the output queue by the application. Figure 208. Example of where your print output goes 12.4 Spooled file goes to hold status If the spooled file is in a hold condition, a message is generated in the QSYSOPR job log. To see the reported message, type: DSPMSG QSYSOPR Printer File Spool the Data: *Yes or *NO Output Queue: *JOB Job Description Output Queue: *USRPRF User Profile Output Queue: *WRKSTN Workstation Description Output Queue: *DEV Job Description Printer Device: *USRPRF User Profile Printer Device: *WRKSTN Workstation Description Printer Device: *SYSVAL Print File Printer Device: *JOB QPRTDEV System Value Printer Device: PRT01 PRT01 Output Queue System Value Printer File Job Description User Profile OUTQ(*JOB) DEV(PRT06) OUTQ(PRT04) DEV(*USRPRF) OUTQ(PRT03) DEV(*WRKSTN) OUTQ(JENNIFER) DEV(PRT07) QPRTDEV(PRT01) WS Description PRT04 Chapter 12. Problem determination techniques 267 Then locate the message and perform the appropriate action. Many factors can cause the spooled file to be held (for example, directing a spooled file to a printer not supporting the data stream, a negative acknowledgment reported by an IPDS printer, or AFP resources not found). The printer writer is trying to help you by not allowing you to print invalid or missing data! In the QSYSOPR message queue, you see message CPF3395 (Figure 209), indicating that the spooled file was held. Figure 209. Message CPF3395 Another message just before CPF3395 gives the cause of the error. Some examples are illustrated in the following sections. 12.4.1 Writer cannot re-direct the spooled file If you submit a spooled file to a printer not supporting the data stream of the spooled file, processing stops, and the writer holds the spooled file. Figure 210 shows message CPI3379 returned when trying to print an AFPDS spooled file to a printer configured as *IPDS, AFP(*NO). Figure 210. Message CPI3379 Regarding the previous recovery information, note that to change the device description, you must first vary off the device. Therefore, the sequence is: end writer, vary off device, change device description, vary on, and finally start writer. In the QSYSOPR message queue, this message is followed by message CPF3395 (spooled file held by writer). Depending on the spooled file data stream and the target printer, the error message returned can be CPI3370, CPI3372, CPI3373, CPI3376, or CPI3377. Message ID . . . . . . : CPF3395 Severity . . . . . . . : 60 Message type . . . . . : Information Date sent . . . . . . : 11/16/97 Time sent . . . . . . : 10:08:40 Message . . . . : File QSYSPRT held by writer PRT02 on output queue PRT02 in QUSRSYS. Cause . . . . . : Writer PRT02 held file QSYSPRT number 2 job 026403/ITSCID17/QPADEV0010 on output queue PRT02 in QUSRSYS. The next file was processed. Message ID . . . . . . : CPI3379 Severity . . . . . . . : 30 Message type . . . . . : Information Date sent . . . . . . : 11/16/97 Time sent . . . . . . : 10:08:40 Message . . . . : Writer PRT02 cannot re-direct file QSYSPRT to device PRT02. Cause . . . . . 
: Writer PRT02 could not re-direct file QSYSPRT number 2 job 026403/ITSCID17/QPADEV0010 to device PRT02. Advanced function printing data stream (AFPDS) data cannot be converted to the format required to produce the file on that device. Recovery . . . : File QSYSPRT can only be produced on a printer supported by advanced function printing (AFP). If device PRT02 can be started with the AFP specified as *YES, stop the writer, change the device description for the printer (CHGDEVPRT command) by specifying the AFP parameter as *YES, and start the writer again. 268 IBM AS/400 Printing V 12.4.2 Message PQT3630 Message PQT3630 is returned when an error occurs during the processing of an IPDS spooled file directed to a printer configured as *IPDS, AFP(*YES). The QSYSOPR message queue shows the sequence of messages presented in Figure 211. Figure 211. QSYSOPR message queue The message CPF3395 “File QSYSPRT held by writer NP17 on output queue NP17 in QUSRSYS” gives information on the writer action. To see the cause of the error, the message PQT3630 “Device NP17 returned negative acknowledgment with sense data” must be analyzed. Press F1 to display the additional message information shown in Figure 212. Figure 212. Message PQT3630 The sense data is the negative acknowledgement (NACK) returned by the printer to the writer (in this case, PSF/400). Six classes of data stream exceptions are returned by the printer. They are: • Command reject • Intervention required • Equipment check • Data check • Specification check: – IO images – Barcodes – Graphics – General • Conditions requiring host notification In this example, the NACK returned is “08C1”, the first two bytes of the sense data. Refer to IBM Intelligent Printer Data Stream Reference, S544-3417, or to the IPDS manual of the printer for an explanation of the exception ID. Device NP17 returned negative acknowledgment with sense data. Data Check at printer NP17. Printing of file QSYSPRT by writer NP17 not complete. File QSYSPRT held by writer NP17 on output queue NP17 in QUSRSYS. Message ID . . . . . . : PQT3630 Severity . . . . . . . : 10 Message type . . . . . : Information Date sent . . . . . . : 11/16/97 Time sent . . . . . . : 10:33:14 Message . . . . : Device NP17 returned negative acknowledgment with sense data. Cause . . . . . : Sense data X'08C10100 DE010001 00000000 D62D0101 01010000 00000001' was received from device NP17. Recovery . . . : See messages that follow for additional information about the error condition. The data stream manual for your printer contains more information about the sense data. Technical description . . . . . . . . : The internal message identifier (ID) is CNACK101. Chapter 12. Problem determination techniques 269 Table 23 shows an example of the exception ID from an IPDS reference manual. Table 23. Data check exceptions According to Table 23, exception “08C1” is a position check. This means you are trying to print outside the physical page. The cause is that the page size defined in the printer file is larger than the physical page size (paper), and the FIDELITY parameter is set to *ABSOLUTE in the printer file. This is discussed in the next section. 12.4.3 Fidelity parameter The fidelity parameter in the printer file specifies whether printing continues when print errors are found for printers configured with AFP(*YES). Two values are possible for this parameter: *CONTENT Printing continues when errors are found. *ABSOLUTE Printing stops when errors are found. 
If the fidelity is set to *ABSOLUTE and any AFP resources, such as fonts, overlays, or page segments referenced in the spooled file are not available, the spooled file is held by the writer. The QSYSOPR message queue shows the sequence of messages presented in Figure 213. Figure 213. QSYOPR message queue: Resource object not found For the message PQT0012 “The resource object PS1 was not found for user USER01”, press F1 to display the additional message information. The possible causes for this problem include: • AFP resources are not in the system. • The library containing the resources is not in the library list. • Fonts are not available or are not available in the printer resolution. 12.5 Copying spooled files You can use the Copy Spooled File (CPYSPLF) command to copy a spooled file to a physical file. But, if the spooled file is *USERASCII, *AFPDS, *LINE, or *AFPDSLINE (determined by the DEVTYPE parameter on the printer file), you cannot copy the spooled file. If the spooled file is *IPDS, you can copy it, but the data stream cannot contain any special device requirements such as fonts, barcodes, or rotated text. Exception ID Description Action code X’0821.00’ Undefined character 01 X’0860.00’ Numeric representation precision check 01 X’08C1.00’ Position check 01 ....................................................... The resource object PS1 was not found for user USER01. Spooled file QSYSPRT did not print. File QSYSPRT held by writer NP17 on output queue NP17 in QUSRSYS. ................................................................ 270 IBM AS/400 Printing V One other possibility is to use the Get Spooled File (QSPGETSP) API to get the data from an existing spooled file. Data is retrieved from the existing spooled file by a buffer (one or more) and is stored in a user space. For detailed information on the QSPGETSP API and other spooled file APIs, see AS/400 System API Reference, SC41-5801. The third possibility is to use the QSPGETF system program. To place the copied spooled file back into an output queue, you can use the QSPPUTF system program. Authority to these system programs is *PUBLIC *EXCLUDE. • System program QSPGETF has the following five parameters. All character parameters must be entered in uppercase and be enclosed in quotation marks. The database file and member are created if they do not exist prior to the call. 1- 10 Character spooled file name. 2- 20 Character qualified database file name in which to dump the spooled file. The first 10 characters contain the database file name. The second 10 characters contain the database file library name. 3- 26 Character qualified job name of the job that created the spooled file. The first 10 characters contain the job name. The second 10 characters contain the job user. The last six characters contain the job number. 4- Numeric spooled file number 1 through 9999. If using the call interface, specify the spooled file number as a hex value as: X'0001' to X'270F' for spooled file numbers of 1 through 9999. 5- 10 Character database file member name in which to dump the spooled file. The following example dumped spooled file, QPRINT, to database file, SPOOLDB, and member, MBR1. The spooled file number was 1. You can enter the information on the command line or prompt on the call command to enter the parameters. CALL PGM(QSYS/QSPGETF) PARM('QPRINT ' 'SPOOLDB USER1LIB ' 'DSP03 USER1 010160' X'0001' 'MBR1 ') • System program QSPPUTF has the following three parameters. 
All character parameters must be entered in uppercase and be enclosed in quotation marks. 1- 20 Character qualified database file name from which to re-spool the spooled file. The first 10 characters contain the database file name. The second 10 characters contain the database file library name. 2- 20 Character qualified output queue name to which to re-spool the spooled file. The first 10 characters contain the output queue name. The second 10 characters contain the output queue library name. 3- 10 Character database file member name to which to re-spool the spooled file. The following example re-spooled a previously dumped spooled file from database file SPOOLDB and member MBR1 to output queue USER1. You can enter the information on the command line or prompt on the call command to enter the parameters. Chapter 12. Problem determination techniques 271 CALL PGM(QSYS/QSPPUTF) PARM('SPOOLDB USER1LIB ' 'USER1 QGPL ' 'MBR1 ') 12.6 Problem with output presentation Many presentation problems are related to the position of the data on the page, the printer's unprintable border, or to the page rotation parameter in the printer file. 12.6.1 Physical page: Logical page The physical page is the format of the paper loaded in the printer. The logical page size is from the printer file page size parameter. 12.6.1.1 Physical page size same as logical page size In the example shown in Figure 214, the physical page size is the same as the logical page size. • With a rotation of 0 degrees, all the physical, logical, overlay, and data origins are at the top left corner of the paper. • With a rotation of 90 degrees, the logical page and the overlay are positioned from the physical page origin at the bottom left corner of the paper. Data positioning is from the top left corner of the logical page. Figure 214. Physical page same as logical page Note: We recommend that you set the physical page size equal to the logical page size to avoid data position problems. 12.6.1.2 Logical page smaller than physical page In the example shown in Figure 215 on page 272, the logical page size is smaller than the physical page size. Physical Origin Logical Origin Data Origin Overlay Origin Physical Origin Logical Origin Overlay Origin Data Origin Rotation 0 Degrees Rotation 90 Degrees 272 IBM AS/400 Printing V Figure 215. Logical page smaller than physical page With a rotation of 0 degrees, all the physical, logical, overlay, and data origins are at the top left corner of the paper. The data is properly positioned. With a rotation of 90 degrees, the logical page and the overlay are positioned from the physical page origin at the bottom left corner of the paper. Data positioning is from the top left corner of the logical page. You will encounter a data position problem. 12.6.1.3 Logical page larger than physical page In the example shown in Figure 216, the logical page size is larger than the physical page size. Figure 216. Logical page larger than physical page With a rotation of 0 degrees, all the physical, logical, overlay, and data origin are on the top left corner of the paper. The data is properly positioned. Physical Page Logical Page Physical Origin Logical Origin Data Origin Overlay Origin Physical Origin Logical Origin Overlay Origin Data Origin Rotation 0 Degrees Rotation 90 Degrees Physical Page Logical Page Physical Origin Logical Origin Data Origin Overlay Origin Physical Origin Logical Origin Overlay Origin Data Origin Rotation 0 Degrees Rotation 90 Degrees Chapter 12. 
Problem determination techniques 273 With a rotation of 90 degrees, the logical page and the overlay are positioned from the physical page origin on bottom left corner of the paper. Data positioning is from the top left corner of the logical page. You will encounter a data position problem, as the top lines of the print output are outside the physical page. You may lose part of your print output. 12.6.2 Printer setup Some printers have an unprintable border and the logical page is positioned at the edge of the printable area instead of the edge of the physical page (Figure 217). Figure 217. Unprintable border Printer setup parameters, such as Page=Print, Edge-to-Edge, VPA Check, and the QPRTVALS data area, allow you to move the origin from the edge of the printable area to the edge of the physical page, or to control its effect. Note: If you have printers without an unprintable border and printers with an unprintable border, having the origin at the same place ensures the same presentation on both types of printer. For detailed information, see Chapter 10, “IBM AS/400 network printers” on page 205, AS/400 Printing III, GG24-4028, and AS/400 Printing IV, GG24-4389, for various models of printers. 12.6.3 Computer Output Reduction If you specify a page rotation of *AUTO or *COR in the printer file, and your data cannot fit on the page because your logical page is larger than the physical page, the Computer Output Reduction (COR) function is used (Figure 218 on page 274). *COR always uses CORing, regardless of page size (unlike *AUTO). Origin Printable Area Logical Origin Data Origin Overlay Origin Physical Origin Physical Origin Logical Origin Data Origin Overlay Origin Unprintable Border 274 IBM AS/400 Printing V Figure 218. Computer Output Reduction The COR function rotates your page 90 degrees and prints your data with a smaller font. For example, a 15 cpi or 17.1 cpi is used. Note: To avoid non-desired COR, selected a rotation value of 0, 90, 180, or 270 degrees in the printer file. 12.6.4 A3 page support Before A3 paper became a commonly-supported page size, PSF/400 was limited to a maximum page size of 11.3 inches for the short side and 14 inches for the long side of the logical page. The effect of this was that output was truncated with a printer file specifying a page size of over 140 characters at 10 cpi, 168 characters at 12 cpi, 210 characters at 15 cpi, and so on. A PTF is now available to allow larger page sizes for printers that support A3 paper size. It is only effective for printers configured as *IPDS, AFP=*YES. Note: *IPDS, AFP=NO printers do not have this problem because they take the logical page size from the printer file in this case. The APAR number for Version 3.0 Release 7.0 and V4R1 is SA64384. PTF numbers are SF44581 and SF44098, respectively. At the time this redbook was written, support had not been added for earlier releases of OS/400. 12.7 Font problems Many messages are related to fonts, and most simply report that a font substitution was performed (for example, the message PQT2072). See Figure 219. Computer Output Reduction Rotation *AUTO or *COR Computer Output Reduction COR Chapter 12. Problem determination techniques 275 Figure 219. Font substitution message PQT2072 You can also receive other font substitution messages, such as: PQT2066 Font substitution was performed. Your print request referred to a resident font, and resident fonts are not supported by this printer. PQT3531 Font substitution was performed. 
Your print request referred to a character set, and this printer only supports resident fonts. PQT3533 Font substitution was performed. Your print request referred to a character set with an incompatible resolution to the printer. PQT3535 Font substitution was performed. Your print request referred to a character set and code page. The code page could not be found. PQT3537 Font substitution was performed. Your print request referred to a character set and code page. You are not authorized to use the character set. PQT3539 Font substitution was performed. Your print request referred a character set and code page. You are not authorized to use the code page. PQT3541 Font substitution was performed. Your print request referred to a raster character set, and you requested that outline fonts be used when possible. PQT3542 Font substitution was performed. Your print request referred to a character set and code page. The device does not support outline fonts. PQT3543 Font substitution was performed. Your print request referred to character set at one resolution, and a character set with this resolution cannot be found. PQT3544 Font substitution was performed. Your print request referred to an outline font, but outline fonts are not supported by the printer. On each message, there is useful information about the font resources with the problem. Note: In Version 4.0 Release 2.0 and later, a parameter in the PSF configuration object allows you to suppress logging the font substitution messages. Message ID . . . . . . . . . : PQT2072 Message file . . . . . . . . : QPQMSGF Library . . . . . . . . . : QSYS Message . . . . : Font substitution was performed. Cause . . . . . : Your print request for file &1 number &2 in job &5/&4/&3 referred to resident character set (FGID) 10 and resident code page 0037. These resident resources are not present in printer PRT01. A font substitution was performed that keeps as many characteristics as possible of the originally requested font. Resident character set (FGID) 11 and resident code page 0037 were substituted. A value of *DFLT for the substituted character set (FGID) or code page means that the printer default was used. If you specified absolute fidelity, processing of the print request ended. If you specified content fidelity, the substitution was performed, and processing of the print request continued. 276 IBM AS/400 Printing V For more information on fonts, font tables customization, outline fonts, and font substitutions, see Chapter 4, “Fonts” on page 89. 12.7.1 Problems with shading at different resolutions If you create an overlay with AFP Utilities/400, AFP Driver, or other tools, you have the option to shade inside a box. The shaded element that is created is often a raster pattern that depends on the pel density of the printer to which it is going. If the density does not match, you may notice some or all of these symptoms: • You receive message PQT3513 that states “The resolution of an image does not match the resolution of the printer”. If the printer file is set to *Absolute fidelity, the file is held. If the fidelity is set to *Content, the page will print, but the shading may be distorted. • The distortion is most apparent if you have shading that was generated for a 300-pel printer but is printed on a 240-pel printer. There is a noticeable “waffle” pattern in the output. If you have shading that was generated for a 240-pel printer printing at 300-pel, the texture might change somewhat, but it is not as bad. • There may be performance degradation. 
If you are on V3R1, check for PTF SF44977. This fix was included in all other current releases. Possible solutions are: • With V3R7 or later, you can use the PSF configuration object to specify the Device Resource Library List. That way you can create two versions of the overlay, one for each density, and have them in different libraries. Then you list the appropriate library in each printer's DEVRSCLIBL parameter. • If you are on an earlier release and need to print on printers at different densities, we recommend that you create the resource at 240-pel to print on the 300-pel printers. This avoids the waffle effect. 12.8 Drawer and paper path selection problems To use the drawer selection for a printer, the FORMFEED parameter must be set to *AUTOCUT. If this is not done, the DRAWER parameter is ignored (because, for example, PSF/400 believes it is using a printer with continuous forms). The FORMFEED parameter is in the printer file and also in the printer device description (the parameter in the printer file may default to *DEVD). Note: The Facsimile Support/400 product uses the drawer number to specify the format of the facsimile (for example, drawer 1=letter, and drawer 3=A4). In this case, the FORMFEED parameter must also be set to *AUTOCUT. 12.8.1 IBM 4247 paper path selection The IBM 4247 printer can be configured in 4230/4224 emulation or in native mode. The paper path selection varies from one mode to the other. 12.8.1.1 4230/4214 emulation mode For 4230/4214 emulation, only one attachment may be on the printer at a time. Chapter 12. Problem determination techniques 277 If you want to use the automatic sheet feeder, it is best that you run in 4230/4214 emulation mode. For automatic sheet feeder, specify FORMFEED(*AUTOCUT), DRAWER(n) on the printer file, where n is: 1 Drawer 1 2 Drawer 2 3 Drawer 3 For 4230/4214 continuous forms, specify FORMFEED(*CONT) on the printer file. 12.8.1.2 4247 native mode For 4247 mode, multiple attachments may be on the printer at the same time. However, in this mode, the drawer selection number for the automatic sheet feeder has changed. Specify FORMFEED(*AUTOCUT), DRAWER(n) on the printer file, where n is: 5 Drawer 1 6 Drawer 2 7 Drawer 3 For 4247 front continuous forms attachment, specify FORMFEED(*CONT) in the printer file. For 4247 rear continuous forms attachment (Version 3.0 Release 1.0 and Version 3.0 Release 6.0), specify FORMFEED(*AUTOCUT) DRAWER(2) in the printer file. For 4247 rear continuous forms attachment (Version 3.0 Release 2.0 and Version 3.0 Release 7.0 and later), specify FORMFEED(*CONT2) or FORMFEED(*AUTOCUT) DRAWER(2) in the printer file. 12.9 Printing on ASCII printers The following considerations are for printing AS/400 spooled files to ASCII printers: • Use the host print transform function in place of an emulator (PC or display). There are more printer functions, such as the AFPDS to ASCII transform, and the transform table can be customized. For detailed information on host print transform, see 1.3.3, “Host print transform” on page 13. • In the host print transform table, select the emulation or driver according to your printer type and model. • Check the printer setup (code page, paper format, timeout, and so on). • Check that your printer file parameters reflect your ASCII printer capabilities (for example, page size (A3 supported?), duplexing (supported?), and available fonts). 
• Refer to the ASCII printer technical manual for available fonts, size of the unprintable border, maximum lines per page, maximum characters per line, and so on. 278 IBM AS/400 Printing V 12.10 Additional information Because program temporary fixes (PTFs) might be superseded rapidly, check the PTF numbers provided in this document for their accuracy. The World Wide Web provides lists with recent PTF numbers and microcode levels for IBM printers. These lists can be found at: http://www.printers.ibm.com/products.html Subsequent ones are, for example: • Hints and tips: Contains technical items or “flashes”. • Service planning: Contains the minimum and current microcode level of the IBM Network Printers. • Service notes: Gives a list with recommended OS/400 PTFs for printers configured with AFP functions. Alternatively, your IBM representative should be able to provide a list of required PTFs. © Copyright IBM Corp. 2000 279 Appendix A. PSF/400 performance factors This appendix considers factors relating to printing performance on the AS/400 system, in approximate order of importance, starting with the most significant to the least significant. Which factors have the most affect on your system printing depends on your particular system and printer configuration, as well as the type of spooled files you are printing. A.1 AS/400 system storage The amount of system storage (memory) allocated to the *SPOOL pool is crucial for successful AFP printing. The minimum for AFP printing should be 2000 KB to 3000 KB (that is, 2 MB to 3 MB). For AFP printers operating simultaneously, consider allocating 500 KB to 1000 KB more for each additional printer. If you are using LPR/LPD printing (for example, with a remote output queue), start with at least 6 MB in *SPOOL. You can check the storage allocated on the Work with System Status (WRKSYSSTS) display. To identify the *SPOOL pool, press F11 twice to produce the Work with System Status display shown in Figure 220 on page 280. In this example, the setting of the QPFRADJ (Performance Adjustment) system value has automatically allocated storage across the storage pools. The system value controls whether automatic balancing of memory is done and when it is done (at IPL, during normal operations, or both). If you do not use automatic adjustment, you can monitor the *SPOOL pool for excessive page faulting, and even change the pool size “in flight”, although you are only taking it from another pool that may have a greater requirement (for example, a batch or interactive job). Be aware that the automatic adjustment may be too slow in responding to use the printing subsystem, especially for smaller jobs. On systems running Version 4.0 Release 1.0 or later, you can use the Work with Shared Pools (WRKSHRPOOL) command to assign minimum and maximum percentage values for *SPOOL (use the F11 key marked (Display tuning data)). If auto-tuning is set on through QPFRADJ, these limits may be adjusted automatically. The default minimum percentage is 1% of the total main storage. In the example shown in Figure 221 on page 281, the total system storage is 4718592 KB, and the minimum percentage size for *SPOOL has been set at 10%. 280 IBM AS/400 Printing V Figure 220. Work with System Status: Displaying pool names A.2 Data stream type By default, AS/400 printer files use SNA Character String (SCS) as the data stream type. This type of data stream can be sent to any printer, including ASCII printers using the SCS-to-ASCII host print transform. 
SCS spooled files can also be sent to printers configured as *IPDS, AFP=NO, and *IPDS, AFP=*YES. The print writer handles this automatically. It looks at the printer's device description and transforms the SCS spooled file into the appropriate data stream. For IPDS printers configured AFP(*YES), the standard process includes the following steps: 1. An SCS spooled file sent to an IPDS printer is: a. Converted to generic IPDS. b. Converted to AFPDS. c. Converted into printer-specific IPDS. The converted spooled file is then sent to the printer. 2. An IPDS spooled file is: a. Converted to AFPDS. b. Converted into printer-specific IPDS. The converted spooled file is then sent to the printer. 3. An AFPDS spooled file is converted directly into printer-specific IPDS format. The converted spooled file is then sent to the printer. Work with System Status LUCYHH05 12/01/97 16:25:34 % CPU used . . . . . . . : 2.8 Auxiliary storage: Elapsed time . . . . . . : 00:00:01 System ASP . . . . . . : 67.71 G Jobs in system . . . . . : 19243 % system ASP used . . : 35.1832 % addresses used: Total . . . . . . . . : 67.71 G Permanent . . . . . . : .007 Current unprotect used : 467 M Temporary . . . . . . : .010 Maximum unprotect . . : 487 M Type changes (if allowed), press Enter. System Pool Reserved Max Pool Size (K) Size (K) Active Pool Subsystem Library 1 415780 244856 +++++ *MACHINE 2 3545416 0 204 *BASE 3 47184 0 4 *SPOOL 4 710212 0 87 *INTERACT Bottom Command ===> F3=Exit F4=Prompt F5=Refresh F9=Retrieve F10=Restart F11=Display paging option F12=Cancel F24=More keys This is explained in 1.3, “Printer writer” on page 6. Note Appendix A. PSF/400 performance factors 281 The conversion for SCS and IPDS are there to ensure complete fidelity of the result. For example, this ensures that if a front overlay was specified in the printer file of an SCS spooled file, the overlay comes across in the conversion. Obviously, there is time and system processor cycles involved in the SCS and IPDS conversions. With Version 3.0, a new customizing option (called IPDS Pass Through) enables control over SCS and IPDS conversions to reduce the conversion time. See A.2.1, “IPDS pass through” on page 282, for more information. These conversions are illustrated in Figure 221. Notice how the size of the shaded box decreases depending on the data stream type specified. This represents the reduced work the AS/400 processor has to perform. Figure 221. Data stream transforms when printing to an IPDS AFP(*YES) printer Generally speaking, if your output is to contain AFP resources, such as overlays, page segments, and host font character sets, specify *AFPDS in the printer file. You frequently need to do this, in any case, to obtain support for certain DDS keywords. If you are printing to a printer configured as *IPDS, AFP=NO, code the data stream type as *IPDS (for example, an IPDS impact printer). This data stream has several restrictions (these restrictions are discussed in 1.3, “Printer writer” on page 6). Datastream type SCS IPDS AFPDS Print Writer Print Writer Print Writer Data Stream Converter SCS IPDS AFPDS IPDS Data Stream Converter IPDS AFPDS IPDS Data Stream Converter AFPDS IPDS Print Driver Print Driver Print Driver IPDS Printer AFP(*YES) IPDS Printer AFP(*YES) IPDS Printer AFP(*YES) 282 IBM AS/400 Printing V Leave the data stream type as *SCS if your output is straightforward (reports, listings, for example) and can be printed on any of the printers in your organization. 
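The data stream type is selected with the DEVTYPE parameter of the printer file. As a simple sketch (the file and library names are placeholders), you can change an application printer file permanently or override it for the duration of a job:
CHGPRTF FILE(MYLIB/INVOICEP) DEVTYPE(*AFPDS)
OVRPRTF FILE(QSYSPRT) DEVTYPE(*AFPDS)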
A.2.1 IPDS pass through This parameter is available on the WRKAFP2 (V3R1/V3R6) and WRKPSFCFG (V3R2/V3R7 and later) commands described in Chapter 11, “Configuring LAN-attached printers” on page 223. It cuts down on some of the internal transforms described previously (for example, an SCS spooled file is converted directly to printer-specific IPDS, and an IPDS spooled file does not require any conversion). There are some restrictions as to its use such as spooled files with overlays, image data, or software multi-up. However, in these cases, the normal transforms will occur. Therefore, for a printer configured as *IPDS, AFP=*YES, set the IPDSPASTHR parameter to *YES. A.2.2 Printer device description parameters These settings are related to the data stream conversion carried out by the AS/400 processor. They obviously apply only to individual printers. • Print while converting: This should be set to *YES so that pages in a large spooled file may start to print before the entire process of conversion has completed. You may also want to adjust the priority of the writer job, for example, by using: WRKACTJOB SBS(QSPL) Then, change the job priority for the WTR job for your printer in the range 0 (highest priority) through to 9 (lowest priority). This allows you some control over the conversion process. • Maximum pending requests: This refers to the number of spooled files that may be converted by the AS/400 processor for each printer at any one time. The default value is 6. If you are regularly printing many small (one page to five page) spooled files to a fast printer (20 ipm to 30 ipm), you may want to increase this value. If you are printing larger spooled files (300 pages and more), you may want to decrease this value slightly. The main effect is on disk usage. A.3 AFP resource retention Since Version 3.0 Release 1.0, PSF/400 automatically stores downloaded AFP resources in an IPDS printer across job boundaries subject to memory constraints. This is on the likely chance that the succeeding job can also reference one or more of the previous job's AFP resources. This cuts down on resource download time and, therefore, the overall throughput of the job. Note that this is possible because the AFP print job contains only references to the AFP resources. These may or may not actually be present in the data stream. Resource retention may be switched off if required, using the RSCRET parameter in the WRKPSFCFG command or the DRR parameter in the WRKAFP2 command. The default in each case is for resource retention to be enabled. Appendix A. PSF/400 performance factors 283 A.3.1 Clear memory for security Some AFP printers, including the AFCCU printers, have a similar hardware feature called “Clear Memory for Security”. This flushes the printer memory between each print job and, therefore, should be set to *NO. IBM AFCCU printers are shipped with this feature enabled, so it is worth checking the printer operator panels to ensure it is disabled. A.4 Font types Typically, when a font is downloaded to a printer, it is a raster (bitmapped) image containing the entire character set. Outline (scalable) fonts contain only the vector instructions for drawing the selected characters. Therefore, using outline fonts reduces the download time considerably. This is more noticeable when printing large characters because the printer's control unit scales the outline font to the requested point size. 
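Both IPDS pass through (A.2.1) and resource retention are set in the PSF configuration object. The following is a hedged sketch only; the object name is a placeholder, and the exact parameter names and allowed values should be verified on the CRTPSFCFG prompt for your release:
CRTPSFCFG PSFCFG(QGPL/NP17PSF) IPDSPASTHR(*YES) RSCRET(*YES)
On Version 3.0 Release 7.0 and later, reference the object from the printer device description through the USRDFNOBJ parameter; on Version 3.0 Release 1.0 and Version 3.0 Release 6.0, the equivalent settings are made with the WRKAFP2 command.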
Techniques for working with font performance are described in the following sections:
• Section 4.5.1, “Downloading host-resident outline fonts” on page 100
• Section 4.5.2, “Why use an outline font” on page 100
• Section 4.10, “Font capturing” on page 108
• Section 5.5, “Text versus image” on page 129
At the present time, downloading outline fonts is only possible with IBM AFCCU printers.

A.4.1 Using GDDM fonts

Strictly speaking, these are not fonts, but graphical symbol sets (object type *GSS) found in the QGDDM library shipped with the OS/400 operating system. They are used in a similar manner, for example, by specifying a graphical symbol set such as ADMWMOB (Multi-National Open Block) on the FONT keyword. The results are smooth, rounded characters scaled to the size specified with the CHRSIZ keyword. They are referenced by the name of the graphical symbol set (for example, in Figure 222).

Figure 222. DDS record format specifying a GDDM font

   A          R TXT1
   A            LIN01          1A
   A                                FONT(ADMWMOB)
   A                                CHRSIZ(2.0 3.0)

The penalty is that they take longer to produce and print than raster or printer-resident scalable fonts. This is particularly noticeable on IPDS impact printers, where text appearance is a lower priority in any case. Shipping documentation is a typical example: fast printer throughput is usually the aim, as long as the enlarged output is readable. Performance is significantly faster if you use an outline font (for example, Helvetica Bold) and scale it to the required size using a point size. See Figure 223.

Figure 223. DDS record format using a printer-resident outline font

   A          R TXT1
   A            LIN01          1A
   A                                FONT(2305 (*POINTSIZE 30))

On a printer that does not have outline fonts (such as an IPDS impact printer), specify a resident font, but use the CHRSIZ keyword to scale it. The quality of the character shape is not as good (the appearance is “blocky”), but printing is faster than with a GDDM font. CHRSIZ is not supported on the AFCCU printers, but these have outline fonts in any case.

A.5 Library list searches

You can help PSF/400 locate AFP resources quickly by placing them in user resource library lists (USRRSCLIBL) and device resource library lists (DEVRSCLIBL). The two parameters refer to those in the PSF configuration objects associated with printer device descriptions. An example of an AFP resource placed in a user resource library might be a user's signature stored as a page segment called USERSIG; the printer file can reference the page segment by this name, and which signature is printed depends on the user submitting the job. An example of using the device resource library might be to store different versions of the same overlay by device resolution (240 or 300 dpi); which overlay is used then depends on the printer to which the job is sent. Generally speaking, the higher in the library list an AFP resource appears, the better. In addition, explicitly specify a resource where possible (for example, MYLIB/INVOICE to specify an AFP overlay), rather than *LIBL/INVOICE.

A.6 Creating efficient AFP resources

Some tools are more efficient than others at producing AFP resources. As an unscientific rule of thumb, the easier and more user-friendly the tool is, the less efficient the resource is! AFP Utilities/400 is native to the AS/400 system, offers a near-WYSIWYG approach to designing overlays, and produces relatively efficient AFP resources in terms of file size and speed of printing.

The AFP driver allows you to produce overlays using sophisticated PC functions, but if the driver is set up to produce a resource composed entirely of image data, the download and print speed is noticeably reduced. The answer is to create such an overlay using text components wherever possible (see 5.5, “Text versus image” on page 129). General principles for such tools are that rounded elements, such as curves and rounded boxes, take longer to produce than square elements, and that dotted or dashed lines take longer to print than solid lines. The reason is that straight lines can often be produced using text IPDS commands instead of image commands. Excessive use of shading may also slow the downloading and printing of an AFP resource. Obviously, the design should take precedence, and simple experiments may show that the shading or particular design has no noticeable effect on performance.

A.7 Other factors

These may or may not be of significance, depending on your particular printing configuration.

A.7.1 PSF configuration object parameters

These parameters apply to any printer that references the PSFCFG object in its device description. The ACKFRQ (Acknowledgement Frequency) parameter in the PSFCFG object is new with Version 4 Release 2. It specifies the frequency, in pages, with which PSF/400 sends IPDS acknowledgement requests to the printer. In return, the printer responds with information about the status of the print job (how many pages have been printed, for example). The parameter can be used with the AUTOSSNRCY (Automatic Session Recovery) parameter, also new with Version 4 Release 2. If a problem causes a print session to be disconnected and then re-established, PSF/400 may send duplicate pages to be reprinted because it did not have the current status of the printer. By decreasing the ACKFRQ parameter (that is, requesting acknowledgements more frequently), you can reduce the number of reprinted pages. However, too many acknowledgements slow down the communication between PSF/400 and the printer, so the parameter should relate to the speed of the printer. If we imagine a printer rated at 100 ipm (impressions per minute), the default ACKFRQ setting of 100 pages causes an acknowledgement to be transmitted every minute. You can, therefore, increase this parameter for faster printers and reduce it for slow desktop printers. If the number of pages in the job is less than the ACKFRQ value, an acknowledgement is sent at the end of the job in any case. But be aware of the increased likelihood of duplicate pages should the session to a printer with a high acknowledgement interval end abnormally.

A.7.2 Printer file parameters

These parameters require a change to the printer file used by your application. Unless you have special requirements, set the Spooled Output Schedule (SCHEDULE) printer file parameter to *IMMED rather than *FILEEND so that spooled file processing can begin without waiting for the producing job to complete (it may be closing files or performing other non-printing tasks).

A.7.3 Printer settings

These changes are made at the printer operator panel.

• MTU Sizes: Many printers have an optimum Maximum Transmission Unit (MTU) size. The MTU is the maximum allowable length of data packets in bytes. This is usually documented in the setup guide for the printer. For example, the recommended size for an AFCCU printer using TCP/IP is 4096. This value should match the MTU size of other devices on the LAN.
For an SNA-attached printer, the printer MTU should not exceed the value specified in Maximum Frame Size in the APPC controller description. In turn, this value should be equal to or less than the equivalent value in the Token-Ring line description. A common value for the SNA Token-Ring is 4060.

• Printer Memory: Printers with particular requirements include those that support multiple data streams. Memory may be used to swap out resources and print commands while those of another data stream are loaded. For IPDS printing on the IBM Network Printer range, best performance with current microcode levels is seen with 16 MB to 20 MB of memory, depending on the complexity of the output. PCL memory requirements for these printers, whether the data comes from the AS/400 system or a PC client, depend on additional factors such as the page size used and duplexing. These requirements are documented in the User's Guide for each model. For the IBM 3130, we recommend that you use at least 16 MB of extra memory for each additional data stream (PCL or PostScript) that is used. Extra memory may also benefit the throughput of IPDS-only jobs.

• Early Print Complete: This option, or a similar one, is available on some twinaxially-attached IPDS printers, including the IBM network printers. If enabled, the printer sends back a good acknowledgement to PSF/400 when it has received the data rather than when it has printed it. This improves performance at the risk of losing data (for example, through a paper jam). If you enable this feature, set the PRTERRMSG parameter in the device description to *INFO to ensure you are made aware of any conditions or interventions at the printer. We do not recommend that you enable this feature unless you always save copies of your spooled files. One of the keystones of the IPDS architecture is the two-way dialogue between host and printer and the improved error recovery it provides. You may find that third-party implementations of IPDS are, in fact, using a feature similar to Early Print Complete (that is, they send a good acknowledgement back to the host immediately on receiving the data).

• IPDS Buffer Size: Also found only on twinaxially-attached printers, this should be set to 1024 bytes rather than 256.

Appendix B. Data Description Specifications (DDS) formatting

DDS formatting within the printer file is the standard OS/400 interface to printed output, in the same manner that DDS is the interface for external database files. DDS can be used for SCS, IPDS, and AFP output; with host print transform, this can be extended to ASCII formats. Printer file DDS contains support for all the elements in a standard document, including overlays, images, graphics, barcoding, lines, boxes, and fonts. Printer file DDS is covered in detail in OS/400 Printer Device Programming V4R2, SC41-5713. This appendix provides a couple of examples to illustrate how documents can be formatted with DDS. The quality of the illustrations in this appendix is not representative of the high quality output that can be produced on the AS/400 system, but is a function of the processes used to produce this publication.

B.1 DDS functionality example

Figure 224 on page 288 shows a sample application that provides a comprehensive example of DDS output formatting. The DDS source used for this sample application is shown in Figure 225 on page 289 and Figure 226 on page 290.

Figure 224. DDS functionality example
Data Description Specifications (DDS) formatting 289 Figure 225. DDS source for DDS functionality example (Part 1 of 2) A* DDS Functionality Printer File Specifications (1 of 2) A* A R HEADR1 A PAGRTT(0) A DRAWER(1) A* Print "DDS Functionality" in Helvetica Bold 20-point outline font A LIN01 35A A FNTCHRSET(CZH400 + A (*POINTSIZE 20) T1V10037) A POSITION(0.7 3.0) COLOR(RED) A* Print "OS/400 V3R1 . . ." in Helvetica 12-point bitmapped font A* w/dynamic positioning A LIN02 35A A FNTCHRSET(C0H200B0 T1V10037) A POSITION(&VALDWN &VALACR) A COLOR(PNK) A VALDWN 5S 3P A VALACR 5S 3P A* Print variety of lines w/ fixed attributes A R LINE1 A LINE(1.3 2.6 0.2 *VRT *NARROW) A LINE(1.1 2.8 0.4 *VRT *MEDIUM) A LINE(0.9 3.0 0.6 *VRT *WIDE) A* Print dynamic lines (position and attributes from program) A R LINE2 A LINE(&LD &LA &LL *HRZ &LW) A LD 5S 3P A LA 5S 3P A LL 5S 3P A LW 5S 3P A* Print fixed box A R BOX1 A BOX(0.8 1.0 1.5 2.0 .1) A* Print dynamic box (position and box attributes) A R BOX2 A BOX(&BULD &BULA &BLRD &BLRA &BWTH) A BULD 5S 3P A BULA 5S 3P A BLRD 5S 3P A BLRA 5S 3P A BWTH 5S 3P A* Print LIN08 - "Multiple Overlays per page" in default font A* Print LIN09 - "Multiple Page Segments per page" in default font A* Print "Dynamic Positioning" in printer-resident font 2311 A R TXT0 A LIN08 35A 36 27 A LIN09 35A 50 31 A 51 33 'Dynamic Positioning for OVL & PSG' A FONT(2311 (*POINTSIZE 12)) A* Print LIN03 - "Vertical/Horizontal" in printer-resident font 18 A* Print LIN05 - "L" in GDDM scalable font A* Print LIN06 - "LARGE CHARACTERS" in GDDM scalable font A* Print LIN07 - "Add Points Addressability" in font 46 (Courier) A R TXT1 A LIN03 35A POSITION(1.3 3.3) A FONT(18) COLOR(BRN) A LIN04 35A COLOR(YLW) FONT(19) A POSITION(3.1 2.4) A LIN05 1A FONT(ADMWMOB) A POSITION(2.9 1.0) A CHRSIZ(9.0 20.0) A LIN06 15A FONT(ADMWMOB) A POSITION(3.4 1.3) A CHRSIZ(6.0 6.0) A LIN07 35A FONT(46) A POSITION(4.7 1.7) 290 IBM AS/400 Printing V Figure 226. DDS source for DDS functionality example (Part 2 of 2) Looking at both the printed sample of “DDS Functionality” and the DDS source, let's review the specifications in detail: • DDS Functionality (LIN01): Printed in a 20-point Helvetica Roman-Bold font 0.7 inches down and 3.0 inches across. The FRONTMGN parameter of the printer file is set at 0 so the down/across positions are from the top/left edge of the page. Note: The POSITION keyword specifies the baseline or bottom left point of the first character to print. The font is specified using FNTCHRSET, which defines the character set and code page to use. In the C0H400J0 font character set resource, “C0” means it is a character set, “H400” means Helvetica Roman-Bold, and “J” means 20-point. This is a typographic font, part of the AFP Font Collection. For 300-pel printers, C0H400J0 is normally found in library QFNT300LA1. 
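A quick way to confirm that a particular font character set or code page is installed is to look for it as a font resource object. This is a minimal sketch; the library and object names are simply those mentioned above, and *FNTRSC is the object type used for AFP font character sets and code pages:

   WRKOBJ OBJ(QFNT300LA1/C0H400J0) OBJTYPE(*FNTRSC)   /* character set */
   WRKOBJ OBJ(QFNTCPL/T1V10037)    OBJTYPE(*FNTRSC)   /* code page     */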
Code A* DDS Functionality Printer File Specifications (2 of 2) A* A* A* A* A* Print "Rotate" in four orientations A R TXT2 A TXT1@1 6 COLOR(TRQ) A POSITION(2.7 6.4) A TXT1@2 6 TXTRTT(90) COLOR(RED) A POSITION(2.7 6.4) A TXT1@3 6 TXTRTT(180) COLOR(BLU) A POSITION(2.7 6.4) A TXT1@4 6 TXTRTT(270) COLOR(GRN) A POSITION(2.7 6.4) A* Print Interleaved 2 of 5 bar code vertically A* Print Code 3 of 9 bar code horizontally A R BAR1 A BAR1@1 8S BARCODE(INTERL2OF5 3 *VRT) A POSITION(2.0 1.8) A BAR2@1 8 BARCODE(CODE3OF9 3) A POSITION(2.0 2.5) A* Print text in outline (or scalable) fonts A R FNT1 A CHR1 1 POSITION(5.7 2.0) COLOR(RED) A FONT(2305 (*POINTSIZE 30)) CHRID A LTR1 1 POSITION(5.7 2.85) COLOR(RED) A FONT( 420 (*POINTSIZE 13)) A LTR2 1 POSITION(5.7 3.0) COLOR(BLU) A FONT(2310 (*POINTSIZE 45)) A LTR3 1 POSITION(5.7 3.4) COLOR(PNK) A FONT(2305 (*POINTSIZE 80)) A LTR4 1 POSITION(5.7 4.3) COLOR(GRN) A FONT(20224 (*POINTSIZE 55)) A LTR5 1 POSITION(5.7 4.8) A FONT(2307 (*POINTSIZE 20)) A CHR2 1 POSITION(5.35 5.45) COLOR(RED) A FONT(2305 (*POINTSIZE 110)) CHRID A* Print images (page segments) w/ variable names and positioning A R PSG1 A PAGSEG(&PSGNAM &PSGDWN &PSGACR) A PSGNAM 8A P A PSGDWN 5S 3P A PSGACR 5S 3P A* Print Overlays One-Two-Three in fixed and dynamic form A R OVL1 A ENDPAGE A OVERLAY(*LIBL/DDSOVL1 6.0 1.3) A OVERLAY(&OVLNM2 6.9 2.5) A OVERLAY(DDSOVL3 &OV3DWN &OV3ACR) A PAGSEG(BUSPART 7.20 1.9) A OVLNM2 8A P A OV3DWN 5S 3P A OV3ACR 5S 3P Appendix B. Data Description Specifications (DDS) formatting 291 page T1V10037 is the USA/Canada code page and is normally located in library QFNTCPL. • OS/400 V3R2 and Later Releases (LIN02): Prints field in Helvetica Roman-Medium 12-point 0.9 inches down and 3.3 inches across. The FNTCHRSET value is CZH200, which is an example of the new (V4R2) outline font support. An outline font is one vector-based object that can be scaled to any desired point size. A new parameter (POINTSIZE) supplies the 12-point sizing for this text. Dynamic positioning is used, where the program variables LINDWN and LINACR are loaded with the down/across values and referenced in the DDS as program-to-system fields. • Vertical/Horizontal lines and boxes (LIN03): Prints in Courier Italic starting 1.3 inches down and 3.3 inches across. The keyword FONT(18) specifies printer-resident Courier Italic. • Bar Code Symbologies (LIN04): Prints in printer-resident font 19, which is OCR-A. • Large Characters (LIN05): The “L” is printed in the Open Block font scaled by the CHRSIZ keyword to 9.0 width and 20.0 height. ADMWMOB is the Open Block font, one of the GDDM scalable fonts, and is located in the QGDDM library. The balance of the text also prints in Open Block, but is scaled to 6.0 wide and 6.0 high. • All Points Addressability: Prints in printer-resident Courier Bold, which is FONT(46). • Multiple Overlays per Page (LIN08): Prints in the printer-resident Courier (font 11), which is the default font. In this case, it is specified as font identifier 011 in the printer device description. • Multiple Page Segments per Page: Also prints in the default font. • Dynamic Positioning for OVL and PSF: Prints in printer-resident font 85, which is Prestige Elite. This is a printer-resident outline font with the POINTSIZE parameter defining the size. • Rotate: Prints the text “Rotate” in the four different rotations; 0, 90, 180, and 270. Note how the POSITION (2.7 inches down and 6.4 inches across) defines a baseline starting point for each rotation. 
• Lines (Record formats LINE1 and LINE2): Three vertical and three horizontal lines are printed. The first vertical line begins at a point 1.3 inches down and 2.6 inches across and has a length of 0.2 inches. The line width is *NARROW, which means 0.008 inches wide. All five parameters of the LINE keyword can be program-to-system variables, enabling the application to dynamically “draw” lines. LINE2 illustrates a dynamic line with all five variables passed from the application. • Boxes (Records formats BOX1 and BOX2): Two boxes are drawn in the DDSFUN3 example. The first (or thicker) box is defined by the top left (0.8 down, 1.0 across) and bottom right (1.5 down, 2.0 across) positions. The box width is 0.1 inch. Box width can also be specified by the *NARROW, *MEDIUM, and *WIDE special values. All five parameters of the BOX keyword can be program-to-system variables, which enables the application to dynamically “draw” boxes. BOX2 depicts an example of a fully dynamic box. 292 IBM AS/400 Printing V • Text in Record Format FNT1: This record format prints text in a number of printer-resident outline fonts. Font 2305 is Helvetica Italic. Font 420 is Courier Bold. Font 2310 is Times New Roman Italic. Font 20224 is boldface. • Page Segments: The IBM logo is dynamically placed using program to system variables for page segment name, down position, and across position. Unlike text, this position marks the top left point of the page segment image (top left when printed in standard orientation or with 0 rotation). Note: The strawberry image, a page segment called STRWNB is not explicitly placed by DDS. It is part of overlay three. • Overlays: Three simple overlays are shown in the DDSFUN3 example. “Overlay One” is an AS/400 overlay object (*OVL) called DDSOVL1. It is placed 6.0 down and 1.3 across. This is, again, relative to the page margins and marks the top left point of the overlay. “Overlay Two” is dynamically referenced from the program by the variable OVLNM2. “Overlay Three” is dynamically positioned from the program by the variables OV3DWN and OV3ACR for down and across. • Barcoding: Two examples of a barcode are specified. The field BAR1@1 is printed vertically in the Interleaved 2 of 5 barcode, starting at 2.0 down and 1.8 across. The barcode is printed with a height value of 3, which prints a 1/2-inch high barcode. Interleaved 2 of 5 is a numeric-only barcode. The human readable field value (012345678) is printed below the barcode, along with the check digit (4). The field BAR2@1 is printed horizontally in the Code 3 of 9 barcode starting at 2.0 down and 2.5 across. It prints horizontally because *HRZ is the default. The human readable (01020304) field value is also the default. Note that Code 3 of 9 is an alphameric barcode (up to 50 characters) and does not include a check digit. B.2 Super Sun Seeds invoicing example Applying the previous example, we can develop a more relevant application example—the Super Sun Seeds invoice. This application (program INVNEW1) produces a tailored, multi-page invoice. Individual pages are built based on the number of customer transactions. Page components include invoice heading information, item detail information, and invoice totals. The totals also include a payment coupon. In addition, there is a variable marketing offer with a customized image placed on the last page of some invoices. Figure 227 through Figure 229 on page 295 show how three of the invoice pages turn out. Appendix B. Data Description Specifications (DDS) formatting 293 Figure 227. 
Improved Printing Corp example 294 IBM AS/400 Printing V Figure 228. Organic Garden Supplies example (Part 1 of 2) Appendix B. Data Description Specifications (DDS) formatting 295 Figure 229. Organic Garden Supplies example (Part 2 of 2) The first page (Figure 227 on page 293) is for a customer with less than 16 transactions so the entire invoice can fit on one page—invoice heading, item detail, marketing offer, totals, and payment coupon. The next two pages (Figure 228 and Figure 229) illustrate a customer whose invoice overflows to two pages. Here the format of page one has been changed to show only invoice heading and item information. Page two is moved up, with abbreviated heading information followed by the balance of the transactions, the marketing offer, the invoice totals, and the payment coupon. 296 IBM AS/400 Printing V For a customer invoice requiring more than two pages, an additional type of page is added. This is a “middle” page that contains the abbreviated invoice header and the item transactions. This application demonstrates the integration of DDS formatting with the application program and the ability to compose pages intelligently. In this example, many of the differences between pages are produced by selecting different overlays. Figure 230 through Figure 235 on page 301 show several of the different overlays used to create different page types. Figure 230. Overlay for a single page invoice (INVALL) Appendix B. Data Description Specifications (DDS) formatting 297 Figure 231. Overlay for the first page of a multi-page invoice (INVFST) 298 IBM AS/400 Printing V Figure 232. Overlay for the middle page of a multi-page invoice (INVMID) Appendix B. Data Description Specifications (DDS) formatting 299 Figure 233. Overlay for the last page of a multi-page invoice (INVLST) The DDS source that produced this invoicing application (INVNEW1) is shown in Figure 234 on page 300 and Figure 235 on page 301. 300 IBM AS/400 Printing V Figure 234. DDS source for the invoicing application (Part 1 of 2) A* INVNEW1 - Printer File DDS for Super Sun Seeds Invoice A* Example 1 (part 1 of 2) A* A* A* Page 1 Top of Invoice A*- include Name and Address and Invoice Heading information A* A R INVTOP SKIPB(10) A ZIPPN 9S 12 BARCODE(POSTNET) A SPACEA(2) A NAME 25A 12 A STNAME 25A 48 A SPACEA(1) A STREET 25A 12 A STSTRT 25A 48 A SPACEA(1) A CITY 25A 12 A STCITY 25A 48 A SPACEA(1) A STATE 2A 12 A ZIP 9S 16 EDTWRD(' - ') A STSTE 2A 48 A STZIP 9S 52 EDTWRD(' - ') A SPACEA(3) A CUST# 6S 0 14 EDTCDE(Z) A INVC# 6S 0 32 EDTCDE(Z) A 49DATE EDTCDE(Y) A PAYDAT 6S 0 66EDTCDE(Y) A SPACEA(2) A SHPVIA 10A 14 A 34DATE EDTCDE(Y) A TERMS 10A 47 A SLSMAN 16A 64 A SPACEA(4) A* A* Page 2 Abbreviated Header A* A R INVTP2 SKIPB(10) A NAME 25A 12 A SPACEA(2) A CUST# 6S 0 14 EDTCDE(Z) A INVC# 6S 0 32 EDTCDE(Z) A 49DATE EDTCDE(Y) A PAYDAT 6S 0 66EDTCDE(Y) A SPACEA(4) A* A* Detail Lines A* A R DETLIN A QTY 4S 0 8 EDTCDE(Z) A UOM 2A 13 A ITEM# 8S 0 18 A ITMDES 25A 28 A SELPRC 6S 2 58 EDTCDE(J) A EXTPRC 7S 2 70 EDTCDE(J) A SPACEA(1) A* A* Multiple Page Message A* - Text is in Helvetica 11-point (C0H200A0) raster font, or A* - Text is in Helvetica 11-point (CZH200) outline font A R PAGEOF A PAGCON 4A POSITION(10.7 7.3) A FNTCHRSET(C0H200A0 T1V10037) A PAGCNT 2S 0 POSITION(10.7 7.8) A FNTCHRSET(CZH200 + A* (*POINTSIZE 11) T1V10037) A EDTCDE(Z) Appendix B. Data Description Specifications (DDS) formatting 301 Figure 235. 
DDS source for the invoicing application (Part 2 of 2) Seven record formats are used in this DDS source: • INVTOP: Full invoice heading information • INVTP2: Abbreviated invoice heading information • DETLIN: Transaction detail lines • INVBOT: Invoice bottom (totals and payment coupon) • OFFER: Marketing offer • PAGSEG: Print variable page segment (image). Segment name passed from the program. A* INVNEW1 - Printer File DDS for Super Sun Seeds Invoice A* Example 1 (part 2 of 2) A* Invoice Totals A* - includes Interleaf 2 of 5 barcode A* A R INVBOT SKIPB(51) A TOTDUE 9S 2 67 EDTWRD(' , , $0. -') A SPACEA(4) A PAYDA@ 6S 0 25 EDTCDE(Y) A TOTD@2 9S 2 67 EDTWRD(' , , $0. -') A SPACEA(2) A NAME@2 25A 12 A SPACEA(1) A STRE@2 25A 12 A BARPRC 15S 0 52BARCODE(INTERL2OF5 3) A SPACEA(1) A CITY@2 25A 12 A SPACEA(1) A STAT@2 2A 12 A* ZIP@2 9A 16 A ZIP@2 9S 16 EDTWRD(' - ') A* A* Offer Print A* - Font 92 is Courier Italic 12-pitch (printer-resident) A* A R OFFER SKIPB(43) A FONT(92) A OFFR@1 24A 36 A SPACEA(1) A OFFR@2 24A 36 A SPACEA(1) A OFFR@3 24A 36 A SPACEA(1) A OFFR@4 24A 36 A SPACEA(1) A OFFR@5 24A 36 A SPACEA(1) A OFFR@6 24A 36 A SPACEA(1) A* A* Images/Page Segments A* - Dynamic page segment name passed from program A* A R PAGSEG PAGSEG(&PSEG 7.0 2.6) A PSEG 8A P A* A* A* Images/Page Segments A* - variable overlay name from program A* A R PRTOVL OVERLAY(&OVRLAY 0 0) A OVRLAY 8A P A* A* Endpage forces page advance A* A R ENDPAGE ENDPAGE 302 IBM AS/400 Printing V • PRTOVL: Print variable overlay. The following overlays are used: – INVALL: One page invoice – INVFST: First page of multi-page invoice – INVMID: Middle page of a multi-page invoice – INVLST: Last page of multi-page invoice This invoicing example (INVNEW1) produces an effective business document, making use of electronic forms, barcoding, custom images, and tailored marketing messages. Because the entire document is electronic, it is easily changed. There are a number of enhancements that can be made to the application to further enhance its value. A fixed overlay can be printed on the back side of selected pages. In the case of invoicing, this might be a page containing the terms and conditions of the invoice. This is called a constant back overlay. Additional electronic copies can be automatically produced and printed in collated sequence. In this example, you might have a customer invoice, a packing list, and a file copy. Information on each copy can be tailored. For example, pricing information can be suppressed on the packing list. Since all DDS document keywords provide for dynamic control, a completely dynamic or “floating” invoice could be produced. In this case, the document is precisely tailored for each customer. For example, if a given customer has 15 transactions, the invoice is designed for exactly 15 transactions. There are two additional application examples (INVNEW2 and INVNEW3) that implement the preceding enhancements. INVNEW2 implements the copies, price suppression, and constant back overlay. INVNEW3 adds the dynamic (or floating) invoice format. The DDS source for these examples and a comprehensive library of AFP application examples can be found in the AS/400 AFP Programming Sampler at: http://www.printers.ibm.com/as400 © Copyright IBM Corp. 2000 303 Appendix C. 
Print openness Various combinations of new and enhanced application program interfaces (APIs), new printer file parameters, new printer device description parameters, new output queue parameters, and new printer writer parameters were added in V3R7, and can be used to provide increased print functionality. Print openness enables IBM or third parties to provide support for: • Data stream transforms (to PCL, to PostScript) • Better identification of supported personal print data streams • Third-party attributes on printer file • Third-party attributes on printer device description • Third-party printer attachment • TCP/IP LAN attached printers • HP JetDirect LAN protocol printers Figure 236 shows how the driver and data transform programs provided by the user interface with the open writer and other APIs provided by the system. Figure 236. Interface to user driver and data transform programs The user driver program or any other user application that processes spooled files can find information on how to process a spooled file using attributes such as user-defined options, user-defined data, and user-defined objects. These attributes are associated with output queues, printer devices descriptions, and spooled files. OR OR APIs to interface with open writer Data transform program provided by the system Interface for user data transform program APIs to manipulate user space APIs to retrieve Device description APIs for other devices APIs that interface with printer device APIs to manipulate spooled files STRPRTWTR Command Device APIs QUSRSPLA QSPOPNSP QSPGETSP QSPCLOSP QOLELINK QOLDLINK QOLRECV QOLSEND QOLSETF Device APIs Open Writer ESPDRVXT User Driver QUSCRTUS QUSCHGUS QUSPRTUS QSPEXTWI QSPSETWI QSPSNDWM ESPDRVXT ESPDRVXT User data transform program 304 IBM AS/400 Printing V C.1 Additional functions provided on the printer file Additional functions provided on the printer file include new parameters on the following commands: • CRTPRTF: Create Printer File • CHGPRTF: Change Printer File • OVRPRTF: Override Printer File Note: All the parameters added are valid only with SPOOL(*YES). The new parameters are: • USRDFNOPT: User-defined options that can be used by user applications or user-specified programs that process spooled files. The maximum number of options is four, and the default for the parameter is *NONE. The user can enter any character. • USRDFNDTA: User-defined data that can be used by user applications or user-specified programs that process spooled files. The user can enter any character up to 255 characters. The default for the parameters is *NONE. • USRDFNOBJ: User-defined object that can be used by user applications or user-specified programs that process spooled files. The parameter is made up of the qualified object name and the object type. The object name meets the AS/400 object naming convention. The possible choices for object types are: *DTAARA, *DTAG, *FILE, *USRIDX, *USRQ, *USRSPC, and *PSFCFG. The single default for the parameter is *NONE. In addition, the following commands and APIs are enhanced: • The Display File Description (DSPFD) command is enhanced to display the new parameters added to the printer file. • The Display Override (DSPOVR) command is enhanced to display the new parameters added on the OVRPRTF command. • The Work with Spooled File Attributes (WRKSPLFA) command is enhanced to display the new parameters added to the printer file. • The Change Spooled File Attributes (CHGSPLFA) command is enhanced to support the parameters added to the printer file. 
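For example, the following is a minimal sketch of tagging an existing spooled file for a downstream driver or transform program. The file name, job qualifier, and data value are illustrative; USRDFNDTA on CHGSPLFA is assumed from the enhancement just described:

   CHGSPLFA FILE(INVOICE) JOB(123456/QUSER/INVPRT) SPLNBR(*LAST) +
            USRDFNDTA('ROUTE=DOCK7 COPIES=2')

A user driver program can later retrieve this value through the QUSRSPLA API and act on it.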
• The Retrieve Spooled File Attributes (QUSRSPLA) API is enhanced to support the new printer file level of functions as new attributes. • The Create Spooled File (QSPCRTSP) API is enhanced to support the new printer file level of functions as new attributes. C.2 Additional functions provided on the PRTDEVD commands Additional functions are provided on the printer device description commands: • CRTDEVPRT: Create Device Description Printer • CHGDEVPRT: Change Device Description Printer • DSPDEVD: Display Device Description Appendix C. Print openness 305 The new parameters are: • USRDFNOPT: User-defined options that can be used by user applications or user-specified programs that process spooled files. The maximum number of options is four, and the default for the parameter is *NONE. The user can enter any character. • USRDFNOBJ: User-defined object that can be used by user applications or user-specified programs that process spooled files. The parameter is made up of the qualified object name and the object type. The object name meets the AS/400 object naming convention. The possible choices for object type are *DTAARA, *DTAG, *FILE, *USRIDX, *USRQ, *USRSPC, and *PSFCFG. The single default for the parameter is *NONE. • USRDTATFM: User-specified program to transform the spooled file data before it is processed by the driver program. The default value for the parameter is *NONE. • USRDRVPGM: User-specified driver program to process the spooled file. The default value for the parameter is *NONE. • RMTLOCNAME: Specifies the remote location name of printer device. This value may be an SNA network ID and control point name, an Internet protocol (IP) host name, or an Internet address. • LANATTACH: Specifies the driver type that is used to attach the printer to the network. The possible values are: – *LEXLINK: LexLink attachment – *IP: TCP/IP attachment – *USRDFN: User-defined attachment C.3 Additional functions provided on the output queue commands Additional functions are provided on the output queue commands: • CRTOUTQ: Create Output Queue • CHGOUTQ: Change Output Queue The added parameters are: • USRDFNOPT: User-defined options that can be used by user applications or user-specified programs that process spooled files. The maximum number of options is four, and the default for the parameter is *NONE. The user can enter any character. • USRDFNOBJ: User-defined object that can be used by user applications or user-specified programs that process spooled files. The parameter is made up of the qualified object name and the object type. The object name meets the AS/400 object naming convention. The possible choices for object type are *DTAARA, *DTAG, *FILE, *USRIDX, *USRQ, *USRSPC, and *PSFCFG. The single default for the parameter is *NONE. • USRDTATFM: User-specified program to transform the spooled file data before it is processed by the driver program. The default value for the parameter is *NONE. Note: In V4R2, a sample transform exit program that supports page range processing when using a remote output queue (LPR) is shipped in the QUSRTOOL library. The tool is called TSPRWPR. 306 IBM AS/400 Printing V • USRDRVPGM: User-specified driver program to process the spooled file. The default value for the parameter is *NONE. In addition, the following parameters and commands are enhanced: • New values in the DESTTYPE (Destination Type) parameter and the CNNTYPE (Connection Type) parameter to support Host-to-LAN printing with the Integrated PC Server NetWare. 
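As an illustration of where these output queue parameters fit, the following is a minimal sketch of a remote output queue that sends spooled data through a user-specified transform program. All names, the address, and the remote queue value are illustrative, and USRDTATFM and SEPPAGE are the parameters described in this section; depending on the data being sent, additional parameters (such as TRANSFORM and MFRTYPMDL) may also be needed:

   CRTOUTQ OUTQ(QGPL/RMTLAN01) RMTSYS(*INTNETADR) +
           RMTPRTQ('PASS') CNNTYPE(*IP) DESTTYPE(*OTHER) +
           INTNETADR('10.5.5.20') +
           USRDTATFM(QGPL/MYXFORM) SEPPAGE(*NO)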
• New parameter SEPPAGE (Separator Page) specifies whether to request a separator page when the connection type is *IP or *USRDFN. • The WRKOUTQD command is enhanced to display the new and changed output queue attributes. C.4 Additional functions Other functions that are provided include: • Two new APIs are added: Change Output Queue (QSPCHGOQ) and Change Configuration Description (QDCCCFGD). The first one can be used to change some attributes of an output queue, and the other one can be used to change some of the attributes of the device description. Also both can change a new attribute called User Defined Data. This parameter can be extracted by a driver program using either the QSPROUTQ (Retrieve Output Queue Information) API or the QSPRDEVD (Retrieve Device Description Information) API. The maximum length of the user-defined data is 5000 and the default for the attribute is *NONE. • The User Data Transform (USRDTATFM) parameter is added to the Send TCP Spooled File (SNDTCPSPLF) command. The user can specify the name of a transform program to use instead of the host print transform. • The Separator Page (SEPPAGE) parameter is added to the Send TCP Spooled File (SNDTCPSPLF) command that allows the user the option to print a banner page or not. • The Start Print Writer (STRPRTWTR) command includes a new parameter called INIT. It allows the user to specify whether to initialize the printer device. • The new DDS keyword Data Stream Command (DTASTMCMD) is added that allows users to store information in the data stream of the spooled file. The information is enclosed within an AFPDS NOOP command. This keyword is valid with AFPDS spooled files only. C.5 Print openness: New APIs The following APIs are added mainly to assist driver programs processing spooled files: • QSPEXTWI (Extract writer status): Can be used by a print driver exit program to extract information about the writer and about the spooled file the writer is processing. • QSPSETWI (Set writer status): Can be used by a print driver exit program to set information related to the spooled files the writer is processing. Appendix C. Print openness 307 • QSPSNDWM (Send writer message): Can be used by a print driver exit program to send informational and inquiry messages to the writer's message queue. • ESPDRVXT (Print driver exit): Defines how a user-defined print driver exit program must be written to be used with the AS/400 print writer program. • ESPTRNXT (Writer transform exit): Defines the interface between a user-defined transform program and the AS/400 print writer program. • QWPZHPTR (Host print transform API): Host print transform API to access the SCS to ASCII transform or the AFPDS to ASCII transform. • QSPBSEPP (Build separator page): Builds the system separator page to be printed for the spooled file. • QSPBOPNC (Build open time commands): Builds “open time” commands for the spooled file. The “open time” commands contain most of the file level commands needed to format the printed output. • QGSLRSC (List spooled file AFPDS resources): Generates a list of the AFPDS resources found in the specified spooled file and returns the list in a user space. • QGSCPYRS (Copy AFPDS resources): Puts AFPDS data stream equivalent of the specified AFPDS resource into the specified user space. For detailed information on APIs, see AS/400 System API Reference, SC41-5801. 308 IBM AS/400 Printing V © Copyright IBM Corp. 2000 309 Appendix D. 
Network Station printing The IBM Network Station has both a parallel port and a serial port, either of which can be used to print to an attached printer. The ports appear to the internal operating system as TCP/IP sockets, to and from which bytes may be read and written. This is the mechanism that makes printing to a printer attached to the IBM Network Station parallel or serial port possible. In addition, use the IBM Network Station Manager program (through the browser) to ensure that the “Parallel printer port” setting is “On” (the default) to enable printing support on the IBM Network Station. D.1 Printing from OS/400 Each IBM Network Station can have a printer attached to either its parallel or serial port. The printer must also be supported by the OS/400 host print transform. Any AS/400 user in the network can print AS/400 output to the printers attached to the IBM network stations. D.1.1 AS/400 Network Station printer driver Printers attached to IBM Network Stations are supported through the standard printing subsystem through host print transform. You can use a wide variety of different models from different manufacturers. Also, all printing functions are supported such as: • Printing page ranges • Printing a separator page • Limited printer status reporting AS/400 Network Station print driver operation Since the IBM Network Station is attached to a LAN, its printer can be shared between several hosts. This is made possible by the way the printer writer operates. The operation of the printer writer serving an IBM Network Station attached printer is slightly different than that of other printer writers. When this printer writer is started, it establishes a session to the IBM Network Station and checks the availability of the printer. If the session cannot be established within the activation timer value, a message is sent to the operator. If there are spooled files on the output queue, the writer sends them to that IBM Network Station's printer. If there are no more spooled files on the output queue, the printer closes the session with the IBM Network Station printer after the inactivity timer expires. Closing this session allows other servers to print on the IBM Network Station printer. In addition, if you end the printer writer, it also closes the session with the printer. If new files become ready on the output queue, the writer tries to establish a new session with the IBM Network Station printer. D.1.2 Creating printer device descriptions You must create a device description for each printer attached to an IBM Network Station. You can either use the IBM Network Station Setup Assistant (STRNSSA) Task 4300, or you can create the necessary printer device descriptions manually. 310 IBM AS/400 Printing V If you choose to create printer device descriptions with the CRTDEVPRT command, the following values must be used: • Device class: Choose *LAN. • Device type: Choose 3812. • Device model: Choose 1. • LAN attachment: Choose *IP. This indicates that the printer is using TCP/IP communications. • Port number: Choose 6464 for a parallel port attached printer and 87 for a serial port attached printer. Note: A serial port attached printer should have its serial interface set to the following values: – Baud rate: 9600 bps – Data bits: 8 bits – Parity: none – Stop bit: 1 – Handshaking: DTR/DSR • Activation timer: This value specifies the amount of time (in seconds) to wait for the device to respond to an activation request. 
If a response is not received within this time, message CPA337B is returned. This message asks the operator if the request should be retried or canceled. Choose any value that is suitable for your environment. Note: If you use Task 4300 of the IBM Network Station Setup Assistant, this value defaults to 500 seconds. • Inactivity timer: Choose *ATTACH. This value varies by the value on the physical attachment (ATTACH parameter) and certain values on the device class (DEVCLS) and application type (APPTYPE) parameters. For DEVCLS(*SNPT) or APPTYPE(*DEVINIT) support, *ATTACH maps to *NOMAX. For DEVCLS(*LAN), *ATTACH maps to *SEC15. For APPTYPE(*NRF) and APPTYPE(*APPINIT) support, *ATTACH maps to 1 minute. You may specify an interval between 1 minute and 30 minutes or the special values *SEC15, *SEC30, or *NOMAX. The IBM Network Station handles only one activation request at a time from any host. The Inactivity Timer parameter allows sharing the printer device among several hosts. After the time you specified has elapsed, the writer job releases the device if there are no more spooled files to print. If you specify *NOMAX for the Inactivity Timer parameter, the writer keeps the connection to printer active until you stop the printer writer. Therefore, using *NOMAX effectively prevents sharing the printer. Note: If you use Task 4300 of the IBM Network Station Setup Assistant, this value defaults to 1 minute. • Host print transform: Choose *YES. This is required to transform AS/400 EBCDIC data to ASCII data. • Manufacturer type and model: Type in the value that reflects the printer to be configured. To determine that value, you can press the Help key to view the list of supported printers. Appendix D. Network Station printing 311 • Remote location name: Specify the IP address or the name of the IBM Network Station to which the printer is attached. Note: If you want to specify the name, you must first create an entry in the TCP/IP Host Table. • System driver program: Specifies the printer driver type to be used for this configuration. For IBM Network Station attached printers, this value must be *NETSTNDRV. D.2 Local printing This section outlines aspects of local printing. D.2.1 5250 screen copy to a local printer If you click the Print pull-down option in the 5250 emulator, you can select local or host print. If you click Local, the contents of the 5250 session window can be printed on the IBM Network Station directly-attached printer. If you click Host, the AS/400 system print function is invoked, and you see the message “Print operation complete to the default printer device file”. D.2.2 Printing from Java Java is the only language in which IBM Network Station applications can be written. Release 2.5+ of the IBM Network Station software includes an implementation of Sun's 1.1 JVM, which includes the ability to print with Java applications. Note: All printing through the JVM generates PostScript output. Page Layout is the responsibility of the Java application. Untrusted applets are not allowed to create print jobs. An overview for developers, written by Sun, can be found at: http://java.sun.com/products/jdk/1.1/docs/guide/awt/designspec/printing.html As part of this support, it is possible to send Java application output to the AS/400 system through LPR/LPD. Typically, you have a print dialog that allows you to specify a print destination of PARALLEL1, SERIAL1, or a remote print destination in the form of QueueName@ServerName. 
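Pulling together the device description values listed in D.1.2, the following is a minimal CL sketch for a printer attached to the parallel port of an IBM Network Station. The device name, IP address, and manufacturer type and model are illustrative; the activation timer of 500 seconds mirrors the Setup Assistant default mentioned above, and FONT(11) is the usual default font identifier for a 3812 device description:

   CRTDEVPRT DEVD(NWSPRT01) DEVCLS(*LAN) TYPE(3812) MODEL(1) +
             LANATTACH(*IP) PORT(6464) +
             RMTLOCNAME('10.1.1.57') +
             ACTTMR(500) INACTTMR(*ATTACH) +
             TRANSFORM(*YES) MFRTYPMDL(*HP4) +
             FONT(11) SYSDRVPGM(*NETSTNDRV)

For a serial port attached printer, PORT(87) would be used instead, with the printer's serial interface set as described above.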
The first two values direct output to a locally attached printer, while the third value causes output to be sent to a remote system (which can be an AS/400 system) through LPR/LPD. As previously noted, the output is generated in PostScript. Therefore, you need to make sure that the printer the AS/400 system ultimately routes the spooled data to is capable of printing PostScript. 312 IBM AS/400 Printing V © Copyright IBM Corp. 2000 313 Appendix E. Printer summary This appendix provides a summary of AS/400 system-supported printers, including IBM production printers, IBM industrial printers, and IBM workgroup printers. Table 24. IBM production printers for the AS/400 system IBM AS/400 printer Max speed Technology Resolution Attachment Data stream Features Infoprint 60 60 ipm Laser 600 x 600 (240 and 300 dpi input accepted) IP Token Ring IP Ethernet SNA Token Ring IPDS PCL High speed/capacity AFCCU control unit Up to 4 input bins 750,000 imp/month Cut sheet/duplex Multi-function finisher includes stapling, folding, saddle stitching, insertion Infoprint 62 62 ipm Non Contact Flash Fusing Laser 240 x 240 300 x 300 IP Token Ring IP Ethernet SNA Token Ring IPDS Continuous form AFCCU control unit Wide forms (to 14-1/2") Power Stacker option Infoprint 70 70 ipm Laser 600 x 600 IPToken-Ring IP Ethernet IPDS PostScript/PCL (supported via a print server) Homerun Control Unit High capacity input Finishing, including stapling 400k impressions/month Infoprint 2000 110 ipm Laser 600 x 600 SNA Token Ring, IP Token Ring IP Ethernet PostScript 3 PCL IPDS Note: PCL and PostScript 3 support through print server transforms. High speed, high volume, high fidelity Up to 2.0million imp/month Cut sheet Infoprint 3000 Up to 334 ipm Laser 600 x 600, SNA Token Ring, IP Token Ring IP Ethernet IPDS High speed, high volume, 18" print width = 2-up Simplex, duplex, Intelligent Post-Processing, Up to 17.4 million imp/mo Continuous form Infoprint 4000 Up to 1002 ipm Laser 240 x 240, 300 x 300, 480 x 480, 600 x 600, SNA Token Ring IP Token Ring IP Ethernet IPDS High speed, high volume, Resolution to 600 dpi 18" print web = 2-up Simplex, duplex, Intelligent Post-Processing, Up to 17.4 million imp/mo Continuous form Infoprint 4000 HiLite Color Up to 1002 ipm Highlight Color Laser 240 dpi 300 dpi Attaches to Infoprint 4000 and IBM 3900 IPDS High speed, high volume color Continuous Form 314 IBM AS/400 Printing V Table 25. IBM industrial printers for the AS/400 system Table 26. 
IBM workgroup printers for the AS/400 system IBM AS/400 printer Speed Technology Resolution Attachment Data stream Features 4230 375 cps - 600 cps Dot Matrix Varies by print quality mode Twinax, Serial/Parallel IPDS LAN (7913) ASCII LAN (NPS) IPDS SCS ProPrinter Heavy Duty IPDS graphics, barcode Easy to use Very Quiet (53 dBA) 4232 600 cps Dot Matrix Varies by print quality mode Serial or Parallel ASCII LAN (NPS) ProPrinter or 4224-3XX Heavy duty Easy to use Very quiet (53 dBA) 4247 700 cps Dot Matrix Varies by print quality mode Twinax Serial/Parallel IPDS LAN (7913) ASCII LAN (7913) IPDS SCS ProPrinter or Epson Up to 6 inputs 2 continuous forms Up to 8-part forms Quiet (55 dBA) 6400 Cabinet 500 lpm 1000 lpm 1500 lpm Line Matrix Varies by print quality mode Twinax, Serial/Parallel, IP Ethernet IPDS ASCII Ethernet IPDS ProPrinter, Printronics Epson SCS Code V, IGP Heavy Duty Very Quiet (52 dBA) Low cost of operation Web-controlled Op panel NPM support 6400 Pedestal 500 lpm 1000 lpm Line Matrix Varies by print quality mode Twinax, Serial/Parallel, IP Ethernet IPDS IP Ethernet ASCII IPDS ProPrinter, Printronics Epson SCS Code V, IGP Heavy Duty Low cost of operation Web-controlled Op panel NPM support 4400 Thermal Label Printer 6-10 Inches Per Second Thermal 300 dpi 203 dpi Twinax Serial/Parallel IP Ethernet IPDS IP Ethernet ASCII IPDS ProPrinter Printronics Epson SCS Code V, IGP 4, 6, 8 inch width models Heavy-duty Industrial Design Remote Web Management Barcode verifier Cutter IBM AS/400 printer Speed Technology Resolution Attachment Data stream Features Infoprint Color 8 (4308) 8 ppm Full Color Laser 600 x 600 Serial/Parallel, Ethernet, Token Ring PCL5e PostScript 3 35,000 imp/month AS/400 Support via Host Print Transform Network Printer 12 (4312) 12 ppm Laser 300 x 300 600 x 600 Twinax, Serial/Parallel, Ethernet (10/100), Token Ring IPDS SCS PCL5e PostScript IBM Integrated AFP/IPDS 35,000 imp/month Edge to Edge Printing Infoprint 12 (4912) 12 ppm Laser 1200 x 1200 Parallel, Ethernet PCL6 PostScript 3 Low cost, entry network printer 20,000 imp/month Network Printer 17 (4317) 17 ipm Laser 300 x 300 600 x 600 Twinax, Parallel, Ethernet, Token Ring IPDS SCS PCL5e PostScript IBM Integrated AFP/IPDS 65,000 imp/month 10 bin mailbox Cut sheet/duplex Appendix E. Printer summary 315 Infoprint 20 (4320) 20 ppm Laser 600 x 600 1200 x 1200 Twinax, Parallel, Ethernet (10/100), Token Ring IPDS SCS PCL5e PostScript 3 IBM Integrated AFP/IPDS 75,000 imp/month 11 by 17 support Cut sheet/duplex Infoprint 21 (4321) 21 ppm Laser 600 x 600 1200 x 1200 Twinax, Parallel, Ethernet (10/100), Token Ring IPDS SCS PCL6 PostScript 3 PDF IBM Integrated AFP/IPDS Integrated web server Label-ready Web-based management IPP-enabled Infoprint 32 (4332 001) 32 ppm Laser 600 x 600 1200 x 1200 Twinax, Parallel, Ethernet (10/100), Token Ring IPDS SCS PCL5e PostScript 3 IBM Integrated AFP/IPDS 150,000 imp/month 11 by 17 support Cut sheet/duplex High-function finisher includes stapling, collation Infoprint 40 (4332 002) 40 ppm Laser 600 x 600 1200 x 1200 Twinax, Parallel, Ethernet (10/100), Token Ring IPDS SCS PCL5e PostScript 3 IBM Integrated AFP/IPDS 200,000 imp/month 11 by 17 support Cut sheet/duplex High-function finisher includes stapling, collation IBM AS/400 printer Speed Technology Resolution Attachment Data stream Features 316 IBM AS/400 Printing V © Copyright IBM Corp. 2000 317 Appendix F. PSF/400 performance results This appendix contains selected results from a PSF/400 V4R2 performance evaluation. 
The performance evaluation was performed by the IBM Printing Systems Company Performance Group in Boulder, Colorado. F.1 Environment PSF/400 V4R2 printing performance was measured using an AS/400 Model 510/2144 processor with IBM Network Printer 24, IBM Infoprint 60, and IBM Infoprint 4000 printers attached to a dedicated 16 MB Token-Ring. The AS/400 system was totally dedicated to printing with no other processes active except for measurement. The printer Token-Ring was connected only to the AS/400 system and one of the printers at any one time. The AS/400 Model 510/2144 is a low to medium performance system relative to the other current AS/400 models. Based on V4R1 Commercial Processing Workload (CPW) ratings, the Model 510/2144 system's performance compares to other selected models as shown in Table 27. Table 27. Performance comparison of some AS/400 models F.1.1 Software The PSF/400 V4R2 software was preliminary GA level, believed to represent GA level performance. Software parameters relevant to performance were set to: • 10,000 KB Spool (QSPL) Storage • 8 KB Receive Buffer size • 8 KB (NP24) and 32 KB (IP60 and IP4000) Send Buffer sizes • 4096 byte MTU size • 16 KB Maximum Frame size Model V4R1 CPW ratings 500 21.4 to 43.9 600 22.7 to 73.1 510/2144 111.5 620 85.6 to 464.3 530 148.0 to 650.0 640/2237 319.0 640/2238 563.3 640/2239 998.6 650 1,794.0 to 2,743.6 840 16,500 318 IBM AS/400 Printing V F.1.2 Hardware The AS/400 system that was used included this setup: • Model 510 • Processor Type 2144 • 512 MB Memory • 28 GB DASD • 2619-001 IOP/Token-Ring adapter • 16MB Token-Ring Performance was evaluated using three printers, each attached to the AS/400 system by means of a 16 Mb Token-Ring. • Network Printer 24 (NP24/4324): – IPDS, PCL, or PostScript – Cut-sheet – 24 pages per minute (PPM) simplex, 19 PPM duplex – 300 dpi resolution – 20 MB memory • Infoprint 60 (IP60): – IPDS only – Cut-sheet – 60 PPM, both simplex and duplex – 240 and 300 dpi resolution (prints at 600 dpi) – 64 MB of memory • Infoprint 4000 (IP4000): – IPDS only – Continuous forms – 708 PPM (2-up duplex) – 240 dpi resolution – 128 MB of memory This IP4000 “printer” was actually a laboratory device that is based on a real IP4000 control unit, but simulates the paper and imaging hardware of the real IP4000. In function and performance, it represents the IP4000 faithfully except for the lack of printed output. F.2 Methodology The parameters for determining PSF/400 V4R2 performance are: • Time for the first page to print and total job time: These are the elapsed times between job submission and printing the first page of the job, and between job submission and printing the last page of the job. For the IP4000, the time for the first page was assumed to be when the operator panel displayed “Printing”. • Spooled file conversion throughput: This is defined as the rate of converting the spooled file in pages per minute (PPM), from the first page of the job until the last page of the job. This is determined from Start and End time stamps for the spooled file conversion process of PSF/400. • Printer throughput: This is defined as the rate of printing in pages per minute (PPM), from the first page of the job until the last page of the job. Appendix F. PSF/400 performance results 319 Instrumentation was used with the IP4000 to arrive at steady-state printing rates more accurately. 
• PSF/400 V4R2 use of the AS/400 system processor: This is the use of the processor during both the PSF/400 spooled file conversion and printer driver phases, where appropriate. It is reported both as percent utilization and as processor time (milliseconds) per page converted and printed. The procedure for PSF/400 V4R2 measurements begins with complete isolation of the AS/400 system from all connections other than the printer, and de-activation of all processes other than PSF/400 and the Performance Monitor. Files to be measured have already been placed on the spool. Each measurement is made using this procedure: 1. Print a few pages of the job to be measured to make sure fonts and other resources have been downloaded to the printer before the measurement starts. 2. Start PSF Trace to record start and stop times for the spooled file conversion and printer driver phases of PSF/400. While PSF Trace can have a large effect on performance if it is not used carefully, using it to record this limited data has no measurable effect. 3. Start the Performance Monitor to gather information about processor use while converting and printing. 4. Release the spooled file to be measured, starting a timer at the same time. 5. Record the time at which the first page has been printed and dropped into the output hopper (cut sheet printers) or shown as “printing” (IP4000). 6. Record the time at which the last page has been printed. 7. Stop the Performance Monitor and PSF Trace. 8. Retrieve start, stop, and processor use information for the spooled file conversion and printer driver from information recorded by PSF Trace and Performance Monitor. This information is then processed as a spread sheet, and the results are tabulated. F.3 Performance cases Twenty-three print jobs were used, although not all with any one printer. Fifteen print jobs are native AS/400 applications. Many of these were produced as sample programs for marketing demonstrations (for example, Super Sun Seeds) or are variations of sample programs. AS/400 applications typically specify print-resident fonts. One of the native AS/400 jobs and two others were printed using the host print transform facility of the AS/400 for a total of 26 distinct cases. These cases are shown in Table 28 on page 320 with descriptions of their origins and characteristics. Eight print jobs are from a set of AFP (Performance Reference Pages that have been used to evaluate performance of PSF products and printers for some time). Some of them use downloaded fonts that are not in the current Core Interchange set, which can cause PSF/400 and printers to process the job differently. 320 IBM AS/400 Printing V Applications such as these (common in the MVS environment) typically specify and use downloaded fonts. These jobs represent complex AFP applications. They were imported to the AS/400 spool. Some of these jobs produce output that appears the same as or similar to the output of another job, using a different form with different performance characteristics. These similarities in appearance are noted in the descriptions. Table 28. Names and descriptions of performance cases Case name Case description INVPRE Text with overlay. Produces one version of the Super Sun Seeds invoice application. INVPRE is an SCS application where the invoice overlay has been added using the Printer File (that is, OVRPRTF). Sample output from this test case is shown in Figure 237 on page 334. INVNEW2 Text with overlays and barcodes. Produces an AFP version of Super Sun Seeds involve using DDS. 
Each invoice can have multiple customized pages. Each invoice has three collated copies—customer, packing list, and file—each of which is different. Sample output from this test case is shown in Figure 238 on page 334. INVNEW2A Same as INVNEW2 but without barcodes. Sample output from this test case is shown in Figure 239 on page 335. INVNEW3 Text with overlays and barcodes. A more sophisticated version of Super Sun Seeds invoice using DDS. Each page is drawn (using dynamic variables in DDS) to match the number of customer transactions. Appearance is the same as INVNEW2. Sample output from this test case is shown in Figure 238 on page 334. INVNEW3A Same as INVNEW3 but without barcodes. Appearance is the same as INVNEW2A. Sample output from this test case is shown in Figure 239 on page 335. INVSCS Text with overlays. Advanced Print Utility (APU) version of Super Sun Seeds invoice application. INVSCS is the original application, creating flat SCS for a preprinted invoice form. Using an APU print definition, the SCS spooled file is transformed into an AFP spooled file. This case uses the AFP spooled file. Sample output from this test case is shown in Figure 240 on page 335. INVPDEF Text with overlays. This is a Super Sun Seeds invoicing application formatted by using page and form definitions. Using an override to the printer file, the original SCS application is switched to line data, and the page definition and form definition are added. Sample output from this test case is shown in Figure 241 on page 336. INVPDEFA An invoicing application similar to INVPDEF, using fewer overlays and different data. Appearance is somewhat similar to INVPDEF. Sample output from this test case is shown in Figure 241 on page 336. SHLFLB 30 labels, with a barcode on each label. This shelf label application was created using the Print Format Utility (a module of AFP Utilities). Sample output from this test case is shown in Figure 242 on page 336. SHLFLBA 30 labels, with a barcode on each. Shelf application from PFU. This is not the same application as SHLFLB, but it is similar in appearance. Sample output from this test case is shown in Figure 242 on page 336. SCS 57 132-character lines of SCS data. Plain text in SCS format. Sample output from this test case is shown in Figure 243 on page 337. Appendix F. PSF/400 performance results 321 SCSA 29 72-character lines of SCS data. Plain text in SCS format. This is not the same application as SCS. Sample output from this test case is shown in Figure 244 on page 337. SCS-PT 57 132-character lines of SCS data printed using passthrough. The appearance is the same as SCS. Sample output from this test case is shown in Figure 243 on page 337. SCS-PTA 29 72-character lines of SCS data printed using passthrough. The appearance is the same as SCSA. This is not the same application as SCS-PT. Sample output from this test case is shown in Figure 244 on page 337. SCS-PDEF 57 132-character lines of unformatted line data. Same text as SCS printed with the same appearance using a page definition. TXT8K-HPT 8000 text characters. This is the TXT08K case done with host print transform (for a PCL printer). The appearance is similar to TXT08K. Sample output from this test case is shown in Figure 245 on page 338. CMLIM-HPT Complex text with IM image, 49325 bytes total. This is the TXTTCMLIM case done with host print transform (for a PCL printer). The appearance is similar to TXTCMLIM. Sample output from this test case is shown in Figure 247 on page 339. INVN2-HPT Text with overlays. 
This is the INVNEW2 case done with host print transform (for a PCL printer). Appearance is similar to INVNEW2. TXT08K Simple DCF text pages of 8000 text characters each, format off, one downloaded font (gothic). Direction of printing is done, and there are 9346 bytes per page (text and controls). Sample output from this test case is shown in Figure 245 on page 338. TXT32K Simple DCF text pages of 32000 text characters each, format off, one downloaded font (gothic). Direction of printing is done and there are 35951 bytes per page (text and controls). Sample output from this test case is shown in Figure 246 on page 338. TXTCMLIM Complex DCF text pages of 4799 text characters each, two columns of justified text, with eight different downloaded fonts (Sonoran), three tables, and a 5.2 square inch GDDM (ceiled) image on each page. Direction of printing is down, and there are 40325 bytes per page (text, image and controls). Sample output from this test case is shown in Figure 247 on page 339. STMTSHAD Complex billing statement pages using OGL overlays. The overlay contains two images (3.36 square inches total), and 306 text characters, and has 19467 bytes total (text, image, and controls). 47 lines of 12 fields of variable data are printed on each page (using pagedef specifications) for another 7674 bytes per page (text and controls). Seven downloaded fonts are used (Sonoran and Prestige Pica). Note that the overlay is stored in the printer’s memory and does not have to be retransmitted for every page. Sample output from this test case is shown in Figure 248 on page 339. RAST24 Pages containing one simple image page segment of 24 square inches. Page segment source is PMF. 173138 bytes per page (image data and controls). Sample output from this test case is shown in Figure 249 on page 340. Case name Case description 322 IBM AS/400 Printing V F.4 Results Seven tables of performance information follow. Table 29, Table 30 on page 324, and Table 31 on page 325 show the number of pages printed and the performance results for each case used with the NP24, IP60, and IP4000 printers. Table 32 on page 326 and Table 33 on page 328 summarize and compare printing rates and processor use for the three printers. Table 34 on page 330 shows calculated AS/400 Model 510/2144 processor utilization based on measured processor use, at NP24, IP60, and IP4000 maximum printing rates. Table 35 on page 331 compares the performance effects of operating PSF/400 in simultaneous print and convert mode (Print While Convert (PNC) = YES) to operating with PWC=NO. F.4.1 PSF/400 V4R2 with Network Printer 24 The NP24 printer is a cut-sheet printer with 300 dpi resolution, with maximum printing speeds of 24 PPM (simplex) and 19 PPM (duplex) using letter sized paper. Some jobs were printed in duplex, some in simplex, and one (INVSCS) is a mixed simplex and duplex application. All NP24 measurements were made with PWC=NO, which causes printing to wait until spooled file conversion is complete (Table 29). Table 29. NP24 performance with PSF/400 V4R2 (AS400 Model 510/2144): Print While Convert=NO RAST50 Pages containing one simple image page segment of 50 square inches. Page segment source is PMF. 360453 bytes per page (image data and controls). Sample output from this test case is shown in Figure 250 on page 340. CHKSG410 Pages each containing 10CCITT Group 4 compressed 240 dpi IOCA checks of 4.09 square inches each. 43478 bytes per page (image data and controls). Sample output from this test case is shown in Figure 251 on page 341. 
G479BO52 Pages each containing one 79 square inch CCITT Group 4 compressed 240 dpi IOCA image (5:1 compression). 109819 bytes per page (image data and controls). Sample output from this test case is shown in Figure 252 on page 341. Case No. of pages Page times Conversion Printing Processor Time (mins:secs) Rate Util Rate Util per page (msec) First Last (PPM) (%) (PPM) (%) Cvt Prt Tot INVPRE 80 :19 3:35 1.811 26 24 .1 8.5 1.9 10.4 INVNEW2 80 :30 4:39 1,644 36 19 .1 13.3 2.0 15.3 INVNEW3 80 :30 4:42 623 37 19 .3 13.1 8.8 21.9 INVSCS 80 :30 6:13 1,733 33 13 .1 11.5 2.1 13.6 INVPDEF 80 :31 4:33 1,314 30 19 .0 13.5 1.3 14.7 SHLFLB(s) 80 :24 3:39 694 78 24 .1 67.6 2.5 70.1 SCS(s) 80 :19 3:34 1,890 51 24 .1 12.0 1.0 13.0 Case name Case description Appendix F. PSF/400 performance results 323 F.4.2 PSF/400 V4R2 with IP60 The IP60 printer is a cut-sheet printer with both 240 dpi or 300 dpi resolutions. It prints at 600 dpi in either case. Its maximum printing speed is 60 PPM in both simplex and duplex when using letter sized paper. Some jobs are printed in duplex, some in simplex, and one (INVSCS) is a mixed simplex and duplex application. The IP60 measurements shown in Table 30 on page 324 were made with PWC=NO, which causes printing to wait until the spooled file conversion is SCS-PT(s) 80 :18 3:33 2,927 59 24 .0 12.0 1.0 13.0 SCS-PDEF 80 :19 3:33 2,133 22 24 .1 6.1 1.5 7.6 TXT8K-HPT 80 :54 4:56 na na 19 4* na na 123.4 CMLIM-HPT 80 :1:15 5:16 na na 19 10* na na 309.8 INVN2-HPT 90 1:30 6:02 na na 19 12* na na 399.0 TXT08K 80 :26 4:35 5,926 54 19 .1 5.5 1.8 7.3 TXT32K 80 :31 4:42 1,638 31 19 .1 11.3 4.0 15.3 TXTCMLIM 80 :32 4:41 1,182 47 19 .2 23.6 5.3 28.9 STMTSHAD 80 :40 6:08 845 61 14 .0 43.4 2.0 45.4 RAST24 80 :41 4:50 612 42 19 .4 41.5 14.4 55.9 RAST50 80 :55 10:01 298 41 8 .6 82.3 43.9 126.1 CHKSG410 80 :36 4:44 1,069 61 19 .1 34.1 4.1 38.3 G479BO52 80 :43 13:14 740 39 6 .1 31.6 9.6 41.3 Notes: (s) Indicates simplex printing. * Since there are no distinct conversion and printing processes with HPT, all processor use and utilization are shown under “Printing”. • No. Pages: The total number of pages printed for the job. For duplex jobs, this is twice the number of sheets produced by the printer. • Page Times: The number of minutes and seconds (from the time the job was released from the spool) until the first and last pages were printed. • Conversion Rate: The rate in pages per minute at which the spooled file was converted prior to printing. • Conversion Util: The percent of the time the AS/400 processor was busy while the spooled file was being converted. When no other work is using the processor (as in these measurements), the spooled file conversion process uses as much processor time as it can and converts at a high rate. When other processors are running, as they normally are, utilization for conversion is higher and the conversion rate is lower. • Printing Rate: The rate in pages per minute at which pages were printed. • Printing Util: The percent of the time the AS/400 processor is busy while the spooled file is being printed. For a given job, printing utilization is approximately proportional to the printing rate. • Processor Time per Page: The milliseconds of time during which the processor is busy for each page converted, printed, and totalled. This number is independent of the rate at which pages are being printed. Case No. 
of pages Page times Conversion Printing Processor Time (mins:secs) Rate Util Rate Util per page (msec) First Last (PPM) (%) (PPM) (%) Cvt Prt Tot 324 IBM AS/400 Printing V complete. IP60 measurements made with PWC=YES were also made. Results are compared to the PWC=NO results in Table 30 on page 324. Table 30. IP60 performance with PSF/400 V4R2 (AS400 Model 510/2144): Print While Convert=NO Case No. of pages Page times Conversion Printing Processor Time (mins:secs) Rate Util Rate Util per page (msec) First Last (PPM) (%) (PPM) (%) Cvt Prt Tot INVPRE(s) 300 :26 5:25 5,488 47 60 2 5.1 17.5 22.6 INVNEW2 300 :35 6:03 4,478 53 52 1 7.0 11.1 18.1 INVNEW2A 300 :31 5:29 5,028 43 60 1 5.2 11.3 16.4 INVNEW3 300 :32 6:30 4,255 53 50 1 7.5 17.1 24.6 INVNEW3A 300 :35 5:33 4,255 39 60 1 5.4 14.0 19.4 INVSCS 300 :38 1:04 4,216 54 28 1 7.6 25.2 32.8 INVPDEF 78 :32 1:48 1,286 30 60 1 14.0 11.5 25.5 SHLFLB(s) 300 :52 5:50 892 98 60 1 65.8 9.9 75.7 SHLFLBA 300 1:04 6:05 861 93 59 .2 64.5 2.6 67.1 SCS(s) 240 :27 4:26 2,780 69 60 .2 14.9 1.7 16.6 SCS-PT(s) 240 :28 4:47 4,260 77 55 .1 10.8 1.2 12.0 SCS-PDEF(s) 240 :33 4:32 4,528 37 60 .3 4.9 3.4 8,3 TXT08K 320 :31 5:49 - - 60 na 4.3 3.3 7.6 TXT32K 320 :38 5:56 2,365‘ 40 60 1 10.2 7.5 17.7 TXTCMLIM 320 :42 6:00 1,596 49 60 2 18.3 25.0 43.4 STMTSHAD 320 :45 6:03 2,520 71 60 1 17.0 14.6 31.6 RAST24 300 1:17 6:15 641 42 60 2 39.2 24.6 63.8 RAST50 300 1:39 6:37 300 39 60 4 78.0 45.4 123.4 CHKSG410 300 :51 5:49 1,243 76 60 1 36.8 6.5 43.3 G479BO52 300 :59 5:57 874 46 60 1 31.6 15.0 46.6 Appendix F. PSF/400 performance results 325 F.4.3 PSF/400 V4R2 with IP4000 The IP4000 printer is a continuous-forms printer that can be used in either 240 dpi or 300 dpi resolution. For this study, it was used in 240 dpi resolution. Its maximum printing speed is 708 PPM when printing two-up duplex on letter sized pages. All jobs were printed on the IP4000 in two-up duplex. The IP4000 measurements shown in Table 31 were made with PWC=NO, which causes printing to wait until the spooled file conversion is complete. Table 31. IP4000 performance with PSF/400 V4R2 (AS/400 Model 510/2144): Print While Convert=NO Notes: (s) Indicates simplex printing. (msec) Stands for milliseconds or thousandths of a second. • No. Pages: The total number of pages printed for the job. For duplex jobs, this is twice the number of sheets produced by the printer. • Page Times: The number of minutes and seconds (from the time the job was released from the spool) until the first and last pages were printed. • Conversion Rate: The rate in pages per minute at which the spooled file was converted prior to printing. • Conversion Util: The percent of the time the AS/400 processor was busy while the spooled file was being converted. When no other work is using the processor (as in these measurements), the spooled file conversion process uses as much processor time as it can and converts at a high rate. When other processors are running, as they normally are, utilization for conversion is higher and the conversion rate is lower. • Printing Rate: The rate in pages per minute at which pages are printed. • Printing Util: The percent of the time the AS/400 processor is busy while the spooled file is being printed. For a given job, printing utilization is approximately proportional to the printing rate. • Processor Time per Page: The milliseconds of time during which the processor was busy for each page converted, printed, and totalled. This number is independent of the rate at which pages are being printed. Case No. 
of pages Page times Conversion Printing Processor Time (mins:secs) Rate Util Rate Util per page (msec) First Last (PPM) (%) (PPM) (%) Cvt Prt Tot INVPRE 3520 :25 5:23 12,857 83 708 25 3.9 21.8 25.6 INVNEW2A 3520 :20 5:18 15,589 81 708 14 3.1 12.5 15.6 INVNEW3A 3520 :25 5:27 12,512 75 708 21 3.6 17.9 21.5 INVPDEFA 804 :10 1:17 9,805 46 708 7 2.8 6.4 9.2 SHLFLBA 3480 4:03 8:58 880 97 708 3 66.2 2.2 68.4 SCSA 3520 :25 5:23 10,353 8 708 1 5.1 0.9 6.1 SCS-PTA 3520 :18 5:16 15,808 9 708 .4 3.5 0.4 3.8 TXT08K 3520 :30 5:28 8,322 58 708 4 4.2 3.3 7.5 TXT32K 2112 :54 5:51 2,715 45 478 4 9.9 5.3 15.3 TXTCMLIM 3520 1:59 9:47 1,880 54 389 17 17.4 23.2 40.5 Case No. of pages Page times Conversion Printing Processor Time (mins:secs) Rate Util Rate Util per page (msec) First Last (PPM) (%) (PPM) (%) Cvt Prt Tot 326 IBM AS/400 Printing V F.4.4 Comparison: Printing rates using PSF/400 V4R2 on Model 510/2144 The printing rates (with PWC=NO) for NP24, IP60, and IP4000 are compared in Table 32. Explanations of less-than-maximum speed results are included after the table. These rates were achieved when the AS/400 system was doing nothing else, when the Token-Ring was not shared with any other devices, and when the spooled file conversion had already completed. Some of these rates might not be achieved under other circumstances, especially with the high-speed IP4000. Some jobs were not measured on all three printers because of functional differences. Table 32. Printing rates (PPM) for NP24, IP60, and IP400: Print While Convert=NO STMTSHAD 3520 1:06 7:36 3,621 92 538 15 15.3 17.3 32.6 RAST24 1200 2:00 5:59 638 43 288 11 40.6 21.6 62.2 CHKSG410 2300 1:43 5:36 1,427 79 589 6 33.3 6.1 39.5 G479BO52 2000 2:12 8:48 962 50 305 7 31.3 13.3 44.7 Notes: (msec) Stands for milliseconds or thousandths of a second. • No. Pages: The total number of pages printed for the job. For two-up duplex jobs, this is four times the number of sheets produced by the printer. • Page Times: The number of minutes and seconds (from the time the job was released from the spool) until the first and last pages were printed. • Conversion Rate: The rate in pages per minute at which the spooled file was converted prior to printing. • Conversion Util: The percent of the time the AS/400 processor is busy while the spooled file is being converted. When no other work is using the processor (as in these measurements), the spooled file conversion process uses as much processor time as it can and converts at a high rate. When other processors are running, as they normally are, utilization for conversion is higher and the conversion rate is lower. • Printing Rate: The rate in pages per minute at which pages are printed. • Printing Util: The percent of the time the AS/400 processor is busy while the spooled file is being printed. For a given job, printing utilization is approximately proportional to the printing rate. • Processor Time per Page: The milliseconds of time during which the processor is busy for each page converted, printed, and totalled. This number is independent of the rate at which pages are being printed. Case NP24 IP60 IP4000 INVPRE (s)24 (s)60 708 INVNEW2 19 52 5 - INVNEW2A - 60 708 INVNEW3 19 50 5 - INVNEW3A - 60 708 INVSCS 13 6 28 6 - Case No. of pages Page times Conversion Printing Processor Time (mins:secs) Rate Util Rate Util per page (msec) First Last (PPM) (%) (PPM) (%) Cvt Prt Tot Appendix F. 
PSF/400 performance results 327 INVPDEF 19 60 - INVPDEFA - - 708 SHLFLB (s)24 (s)60 - SHLFLBA - 59 5 708 SCS (s)24 (s)60 - SCSA - - 708 SCS-PT (s)24 (s)55 5 - SCS-PTA - - 708 SCS-PDEF (s)24 (s)60 - TXT08K-HPT 19 - - CMLIM-HPT 19 - - INVN2-HPT 19 - - TXT08K 19 60 708 TXT32K 19 60 478 1 TXTCMLIM 19 60 389 3 STMTSHAD 14 1 60 538 3 RAST24 19 60 288 2 RAST50 8 1 60 - CHKSG410 19 60 589 2 G479BO52 6 1 60 305 2 (s) Printing in simplex. 1. Limited by printer control unit capability. That is, this model of the IP4000 is not capable of printing this case at maximum speed of 708 PPM. 2. Limited by Token-Ring attachments. The theoretical limitation of the 16 Mb per second Token-Ring is 2 MB per second and the practical data rate limitation of TCP/IP and the Token-Ring, including the effects of this printer's Token-Ring adapter, is below that. For example, printing the RAST24 case at 708 PPM requires an average sustained data rate to the printer of slightly more than 2 MB per second because of the amount of data contained in each page. 3. Limited by PSF/400 processing of downloaded fonts while printing. This job prints at a rated speed if printer-resident fonts are used. A PSF/400 improvement not yet available also eliminates this limitation. 4. Limited by PSF/400 processing of downloaded fonts. This job prints faster but not at a maximum speed (because of printer control unit limitations) if printer-resident fonts are used. A PSF/400 improvement not available in V4R2 also eliminates this limitation. 5. Limited by mechanical problems in the IP60, which prevented paper from being provided for each print cycle. The IP60 is capable of printing this job at 60 PPM. 6. Limited by switching between simplex and duplex. Case NP24 IP60 IP4000 328 IBM AS/400 Printing V F.4.5 Comparison of processor requirements Processor requirements for printing are summarized in Table 33. The amounts of processor time used to convert and print each page have been calculated from the times for the entire file, and then added together to show the total processor time needed to convert and print each page. These times are shown in milliseconds of processor time per page. Processor time to convert these cases is generally larger than the processor time to print, although there are exceptions. Those applications that use relatively large amounts of processor time per page to convert may convert slowly (maybe more slowly than the maximum speed of the printer), especially if the AS/400 processor is less powerful or is heavily used for other purposes than printing. The throughput of some applications can be limited by how fast the spooled file conversion can run, especially with high-speed or large numbers of printers, or with small or heavily loaded AS/400 systems. This can cause jobs to print more slowly than expected and continuous forms printers to pause. Table 33. 
Processor usage for printing in milliseconds per page: Print with Convert=NO Case AS/400 Processor milliseconds per page Model 519/2144 (measured) Model 640/2237** IP60 IP4000 IP400 Cvt Prt Tot Cvt Prt Tot Cvt Prt Tot INVPRE 5.1 17.5 22.6 3.9 21.8 25.6 1.4 7.6 8.9 INVNEW2 7.0 11.1 18.1 - - - - - - INVNEW2A 5.2 11.3 16.4 3.1 12.5 15.6 1.1 4.4 5.5 INVNEW3 7.5 17.1 24.6 - - - - - - INVNEW3A 5.4 14.0 19.4 3.6 17.9 21.5 1.3 6.3 7.5 INVSCS 7.6 25.2 32.8 - - - - - - INVPDEF 14.0 11.5 25.5 - - - - - - INVPDEFA - - - 2.8 6.4 9.2 1.0 2.2 3.2 SHLFLB 65.8 9.9 75.7 - - - - - - SHLFLBA 64.5 2.6 67.1 66.2 2.2 68.4 23.1 0.8 23.9 SCS 14.9 1.7 16.6 - - - - - - SCSA - - - 5.1 0.9 6.1 1.8 0.3 2.1 SCS-PT 10.8 1.2 12.0 - - - - - - SCS-PTA - - 0 3.5 0.4 3.8 1.2 0.1 1.3 SCS-PDEF 4.9 3.4 8.3 - - - - - - TXT08K-HPT - - - - - - - -- CMLIM-HPT - - - - - - - -- INVN2-HPT - - - - - - - -- TXT08K 4.3 3.3 7.6 4.2 3.3 7.5 1.5 1.2 2.6 Appendix F. PSF/400 performance results 329 The SHLFLB case, consisting of 30 barcodes per page and nothing else, uses a significant amount of processor time. Differences in processor time between INVNEW2 and INVNEW2A, and between INVNEW3 and INVNEW3A, which differ mostly by inclusion of less than one barcode per page, also support the conclusion that applications that use BCOCA barcodes heavily require significant processor time and may not convert at high-speed printer rates. Image-intensive applications, such as RAST24, RAST50, and CHKSG410, also use significant amounts of processor time per page, mostly due to the amounts of data that must be processed. These applications may also convert more slowly than the maximum speeds of some printers. Using image compression can minimize this effect. F.4.6 Predictions of processor utilizations at printing speeds Calculated processor utilizations for printing at various aggregate rates on two different AS/400 models are shown in Table 34 on page 330. Model 510/2144 has a V4R1 CPW rating of 111.5, and Model 640/2238 (a 2-way system) has a V4R1 CPW rating of 583.3 (about 5.2 times as powerful). Using a more powerful system has the effect of reducing the average processor utilization needed to print a particular file at a certain rate by the difference in processing power, in this case, approximated by the difference in CPW ratings of the two AS/400 models. The utilizations represent the average processor utilization needed to convert and TXT32K 10.2 7.5 17.7 9.9 5.3 15.3 3.5 1.9 5.3 TXTCMLIM 18.3 25.0 43.4 17.4 23.2 40.5 6.1 8.1 14.2 STMTSHAD 17.0 14.6 31.6 15.3 17.3 32.6 5.3 6.0 11.4 RAST24 39.2 24.6 63.8 40.6 21.6 62.2 14.2 7.5 21.7 RAST50 78.0 45.4 123.4 - - - - - - CHKSG410 36.8 6.5 43.3 33.3 6.1 39.5 11.6 2.1 13.8 G479BO52 31.6 15.0 46.6 31.3 13.3 44.7 10.9 4.6 15.6 * These particular HPT jobs, which are measured using the default customization table (this includes the “mapping” option), require large amounts of processor time to print. This is required for conversion from AFPDS to PCL to allow printing on PCL printers (on the NP24 in PCL mode, in this case). For comparison, see the processor time per page for the same jobs printed directly to the NP24 in IPDS mode (TXT08K, TXTCMLIM, and INVNEW2). ** The processor times per page for the AS/400 Model 640/2237 are not measured results. They were extrapolated from the Model 510/2144 results using the V4R1 CPW ratings for the two models (111.5 and 319.0) to demonstrate the effect of using a more powerful processor. 
Extrapolations to less powerful processors result in proportionally larger processor milliseconds per page, according to the ratio of their CPW ratings and the Model 510/2144 CPW rating. Case AS/400 Processor milliseconds per page Model 519/2144 (measured) Model 640/2237** IP60 IP4000 IP400 Cvt Prt Tot Cvt Prt Tot Cvt Prt Tot 330 IBM AS/400 Printing V print each application at the maximum speeds of the three printers. As you can see from previous tables, not all of these applications print on all three printers (for example, the HPT ones), and some that print on a particular printer do not print at maximum speed. Furthermore, some that printed at maximum speed might not on different AS/400 configurations or under different circumstances. The utilizations represent the theoretical processor loads if each application is printed at the speeds shown. Where predicted utilizations are high, especially when over 100%, the application requires more processor power than the AS/400 model shown to print at the desired speed, even with no other loads on the system. Note that this does not guarantee the ability to convert and print these applications at the indicated speeds (on these AS/400 configurations or any other). Table 34. Predicted total processor utilization for printing, in percent: Print While Convert=NO Case Calculated AS/400 processor utilization Model 510/2144 Model 640/2238 60PPM 480PPM 960PPM 60PPM 480PPM 960PPM INVPRE 2.3 18.1 36.2 0.4 3.5 6.9 INVNEW2 1.8 14.8 29.0 0.3 2.8 5.5 INVNEW2A 1.6 13.1 26.3 0.3 2.5 5.0 INVNEW3 2.5 19.7 39.3 0.5 3.8 7.5 INVNEW3A 1.9 15.6 31.1 0.4 3.0 5.9 INVSCS 3.3 26.3 52.5 0.6 5.0 10.0 INVPDEF 2.6 20.4 40.8 0.5 3.9 7.8 INVPDEFA 0.9 7.4 14.8 0.2 1.4 2.8 SHLFLB 7.6 60.6 121.1 1.4 11.6 23.2 SHLFLBA 6.7 53.7 107.4 1.3 10.3 20.5 SCS 1.7 13.3 26.6 0.3 2.5 5.1 SCSA 0.6 4.9 9.8 0.1 0.9 1.9 SCS-PT 1.2 9.7 19.3 0.2 1.8 3.7 SCS-PTA 0.4 3.1 6.1 0.1 0.6 1.2 SCS-PDEF 0.8 6.7 13.3 0.2 1.3 2.5 TXT08K-HPT 12.3 98.7 197.4 2.4 18.9 37.7 CMLIM-HPT 31.0 247.9 495.7 5.9 47.4 94.8 INVN2-HPT 39.9 319.2 638.4 7.6 61.0 122.O TXT08K 0.8 6.4 12.2 0.1 1.2 2.3 TXT32K 1.8 14.4 28.3 0.3 2.7 5.4 TXTCMLIM 4.3 34.4 69.4 0.8 6.6 13.3 Appendix F. PSF/400 performance results 331 F.4.7 Print While Convert (PWC)=Yes compared to PWC=NO Most PSF/400 V4R2 measurements were done with PWC=NO for repeatability and control of the experiments, but PSF/400 Spool File Conversion is normally done while printing (PWC=YES). Measurements were made to compare PWC=YES performance to PWC=NO performance using the IP60 printer. Selected results are compared in Table 35. The general differences are: • Time to first page is shorter with PWC=YES. This is no surprise, since printing can start before the entire file is converted. • Conversion rates are generally a little slower with PWC=YES, but not always in these measurements (this may reflect on the accuracy of the measurements). This might also be expected, since the conversion and printing processes are competing for processor and other resources. However, this difference is not large in this dedicated environment where other demands do not exist. Other consistent differences are not obvious. The total processor time for converting and printing is generally about the same for both cases. Table 35. 
Print While Convert (PWC) YES compared to PWC NO: AS/400 Model 510/2144 STMTSHAD 3.2 25.6 50.6 0.6 4.8 9.7 RAST24 6.4 51.2 102.1 1.2 9.8 19.5 RAST50 12.3 98.4 197.4 2.4 18.9 37.7 CHKSG410 4.3 34.4 69.3 0.8 6.6 13.2 G479BO52 4.7 37.6 74.6 0.9 7.1 14.3 Case First page times Conversion Average Processor time (mins:secs) Rates (PPM) Utilizations per Page No Yes No Yes No Yes No Yes INVPRE :26 :27 5,488 3,724 2 2 22.6 20.3 INVNEW2 :35 :32 4,478 4,157 2 2 18.1 17.2 INVNEW3 :32 :38 4,255 3,947 2 2 24.6 19.9 INVSCS :38 :42 4,216 3,871 2 1 32.8 30.1 INVPDEF :32 :41 1,286 1,333 2 2 25.5 25.1 SHLFLB :52 :28 892 799 7 7 75.7 75.2 SCS :27 :27 2,780 2,764 2 2 16.6 19.9 SCS-PT :28 :22 4,260 4,022 1 1 12.0 12.1 SCS-PDEF :33 :30 4,528 4,103 1 1 8.3 8.3 Case Calculated AS/400 processor utilization Model 510/2144 Model 640/2238 60PPM 480PPM 960PPM 60PPM 480PPM 960PPM 332 IBM AS/400 Printing V F.5 Application of results Some practical conclusions and observations can be made from this information: • Most of the printing applications used in these measurements, particularly the native AS/400 applications, are practical on at least one high speed printer such as the IP4000, given a powerful enough AS/400 system and spare capacity. • It is possible to determine the amount of processor power needed to print at a certain rate (for example, on two IP60 printers at 120 PPM) using the information in this appendix. Where processors other than the Model 510/2144 are involved, CPW ratings are used to adjust the data to get an approximate answer. Where large numbers of printers are involved, and where other key applications place heavy requirements on the AS/400 system, you must use more care with this approach. • Characteristics of applications have large effects on throughput, on the AS/400 power required to achieve it, on the printer attachment bandwidth needed, and on a printer's ability to print at its maximum rate. Some applications, then, may be too demanding to print on high speed or large numbers of printers, using slow or heavily loaded AS/400 systems. The processor power needed to convert and print at a given rate depends almost entirely on application characteristics, and not on the printer. In particular, these applications may require more of an AS/400 processor than others, and may be more likely to print at less than the maximum speeds of some printers. – Those using significant numbers of barcodes implemented in BCOCA. The spooled file conversion may use a lot of processor time and run at a slow rate (PPM). However, BCOCA applications implemented using Page Definition support in Page Printer Formatting Aid (PPFA) can be much more efficient than the applications used here, because PPFA produces BCOCA objects that can require significantly less processing by PSF/400. TXT08K :31 :33 - 5,439 - 1 7.6 7.6 TXT32K :38 :30 2,365 2,308 2 2 17.7 17.7 TXTCMLIM :42 :37 1,596 1,479 4 4 43.4 42.5 STMTSHAD :45 :37 2,520 2,212 3 3 31.6 28.1 RAST24 1:17 :36 641 605 6 6 63.8 59.5 RAST50 1:39 :35 300 293 11 14 123.4 25.6 CHKSG410 :51 :37 1,243 1,266 4 4 43.3 40.0 G479BO52 :59 :32 874 856 4 5 46.6 46.3 Case First page times Conversion Average Processor time (mins:secs) Rates (PPM) Utilizations per Page No Yes No Yes No Yes No Yes Appendix F. PSF/400 performance results 333 – Those using significant amounts of image data, especially if it is not compressed. 
The spooled file conversion may use a lot of processor time and run at a slow rate, and attachment limitations may prevent data from being delivered to a printer at the rate needed to print at its maximum speed.
– Those using host print transform to print an AFPDS spooled file on a PCL printer.
– Those using downloaded fonts on every page with a printer that supports both downloaded raster and downloaded outline fonts (that is, all “AFCCU” printers such as the IP60 and IP4000, or any other printer that supports both the LF1 and LF3 font subsets). Applications that use printer-resident fonts do not need this additional processing, and a planned improvement to PSF/400 will reduce processor use for applications that download fonts.
• The spooled file conversion may use much more processor resource than printing. This can limit printing throughput with combinations of high speed or large numbers of printers, and with slow or heavily loaded AS/400 systems.
• The data in this appendix can also be adjusted to approximate the effects of multiple printers, other printers, or other AS/400 models, for example:
– An application, such as INVPDEFA, is expected to need about 11% of a Model 510/2144 AS/400 system to print at 708 PPM (from Table 31, roughly 9.2 milliseconds of processor time per page × 708 pages per minute is about 6,500 processor milliseconds per minute, or about 11% of one processor). For the same application, two 708 PPM printers are expected to need about 22% of the processor.
– An application, such as the SHLFLB application, which uses about 90% of the system to print at 708 PPM, is not feasible for two 708 PPM printers (it is not really feasible for one unless almost the entire processor is available for printing) on a Model 510/2144 AS/400 system.
– An application, such as the SHLFLB application, if printed on an AS/400 Model 650/2240 (V4R1 CPW rating of 1,794.0), needs only about 5% utilization of the eight processors in that model instead of almost 90% utilization on the Model 510/2144 when printing on a single 708 PPM IP4000.
F.6 Sample output
Figure 237 on page 334 through Figure 252 on page 341 show examples of the output from the test cases described in this appendix. The quality of these illustrations is not representative of the high quality output produced from PSF/400, but is a function of the processes used to produce this publication.
Figure 237. INVPRE
Figure 238. INVNEW2 and INVNEW3
Figure 239. INVNEW2A and INVNEW3A
Figure 240. INVSCS
Figure 241. INVPDEF and INVPDEFA
Figure 242. SHLFLB and SHLFLBA
Figure 243. SCS and SCS-PT
Figure 244. SCSA and SCS-PTA
Figure 245. TXT08K and TXT8K-HPT
Figure 246. TXT32K
Figure 247. TXTCMLIM and CMLIM-HPT
Figure 248. STMTSHAD
Figure 249. RAST24
Figure 250. RAST50
Figure 251. CHKSG410
Figure 252. G479BO52
Appendix G. Advanced Print Utility implementation case study
This appendix helps you implement a typical printing solution from start to finish. The project involves the conversion from pre-printed, continuous forms stationery to plain, cut-sheet, laser-printed pages. The solution is based on Advanced Function Presentation and, in particular, the Advanced Print Utility (APU) program product. In addition to printing enhanced copies of your documents, it offers the foundation for related activities, such as faxing, viewing, and archiving.
There are several useful references for using APU itself: • Chapter 2, “Advanced Function Presentation” on page 35 • Advanced Print Utility User’s Guide, S544-5351 • AS/400 Guide to AFP and PSF, S544-5319 (Chapter 12) In particular, you will find it useful to work through the tutorial in the User’s Guide. Once you have the basic skills needed, you can adopt some of the hints and tips described at the end of this chapter. G.1 Ordering printers This section provides details of three typical printer configurations: • Low End: For printing AFP jobs and occasional PC LAN jobs • Departmental: Ability to print more complex AFP and PC LAN jobs • Production: Can print complex AFP production jobs plus PC LAN jobs, segregated by input and output bins G.1.1 Low-end printer: IBM Network Printer 12 This printer configuration can accept AFP print jobs and seamlessly print PC jobs from a LAN. If required, it can be expanded with additional paper trays, a duplex unit, and more memory. See Table 36. Table 36. IBM Network Printer 12 hardware expansions G.1.2 Departmental printer: IBM Infoprint 21 This printer is suitable for printing more complex AFP jobs, as well as PC LAN jobs. The extra paper tray provides flexibility, for example different colored paper or a pre-printed letterhead. The duplex unit enables duplex printing, for example Product/feature name Feature code 4312 printer Model 001, 002, 003 depending on country voltage (120, 220 or 100 V) IPDS SIMM 4820 Extra 8 Mb memory 4308 Network Interface Card - 1 of: Token Ring Ethernet 10BaseT/2 Fast Ethernet 10/100 Base TX Twinax SCS 4120 4161 4402 4141 344 IBM AS/400 Printing V printing an AFP overlay of terms and conditions on the back or simply reducing paper use for PC word-processing documents. See Table 37. Table 37. IBM Infoprint 21 hardware expansions G.1.3 AS/400 production printer and PC LAN departmental printer This configuration provides a fast, well-equipped printer suitable for use as the main production printer for a small company or one of several departmental printers in a larger enterprise. The numerous input drawers and output bins provide great flexibility in paper handling. The hard drive provides a copier-like “Repro” facility for generating multiple copies of PC jobs without additional printer processing. See Table 38. Table 38. AS/400 production printer and PC LAN departmental printer hardware expansions Product/feature name Feature code 4322 printer Model 001 (low voltage) Model 002 (high voltage) IPDS SIMM 4820 Extra 16Mb memory 4316 Network Interface Card - 1 of: Token Ring Ethernet 10BaseT/2 Fast Ethernet 10/100 Base TX Twinax SCS 4120 4161 4162 4141 Duplex Unit 4402 Additional Input Drawer and Tray 4501 Note: The AS/400 print kit that is available, which includes Ethernet and IPDS, is a single package. Product/feature name Feature code 4332 printer Model 004 (low voltage) Model 005 (high voltage) IPDS SIMM 4820 Extra 32Mb memory 4332 Network Interface Card - 1 or 2 of: Token Ring Ethernet 10BaseT/2 Fast Ethernet 10/100 Base TX Twinax SCS 4120 4161 4162 4141 Duplex Unit 4402 2,500 sheet input unit 4520 2,000 sheet finisher 4620 (low voltage) 4621 (high voltage) Face-up output tray 4630 Hard Drive 4320 Appendix G. 
Advanced Print Utility implementation case study 345 G.2 Ordering and obtaining software The following software is required: • Print Services Facility/400 (PSF/400) • IBM AFP PrintSuite for AS/400, Advanced Print Utility feature • AFP Font Collection The following software is useful but not essential: • AFP Utilities/400 • IBM AFP Driver for Windows • Client Access/400, Operations Navigator feature Note that ValuPak for AS/400 Printing (5769-PPK) includes the following software products: • IBM AFP PrintSuite for AS/400, Advanced Print Utility feature • IBM AFP PrintSuite for AS/400, PPFA/400 feature • AFP Font Collection • AFP Utilities/400 Note: At V4R5, AFP Font Collection is included with new orders of PSF/400. G.2.1 Checking whether the software is already installed On an OS/400 command line, type: DSPSFWRSC A screen similar to the example shown in Figure 253 appears. Figure 253. Display Software Resources Note: The Enhanced Print Kit combines the Ethernet and IPDS features. G.2.1.1 PSF/400 Page down through the list, and look for an entry similar to the example in Figure 254 (OS/400 V4R4) or Figure 255 on page 346 for releases prior to OS/400 V4R4. Both screens confirm that you have PSF/400 installed. Display Software Resources System: DEMO720A Resource ID Option Feature Description 5769999 *BASE 5050 AS/400 Licensed Internal Code 5769SS1 *BASE 5050 Operating System/400 5769SS1 *BASE 2924 Operating System/400 5769SS1 1 5050 OS/400 - Extended Base Support 5769SS1 1 2924 OS/400 - Extended Base Support 5769SS1 2 5050 OS/400 - Online Information 5769SS1 2 2924 OS/400 - Online Information 5769SS1 3 5050 OS/400 - Extended Base Directory Support 346 IBM AS/400 Printing V Figure 254. PSF/400 installed confirmation screen: OS/400 V4R4 Figure 255. PSF/400 installed confirmation: Releases prior to OS/400 V4R4 G.2.1.2 AFP PrintSuite/400: APU feature Page down through the list, and look for an entry similar to the example shown in Figure 256. Figure 256. APU feature list display You can also access the main menu by typing the command: GO QAPU/APU If you do not see option 8 (Configure APU Monitor Action), you are using the V3R7M0 (or V3R2M0) product version. Contact your IBM representative to order the no-charge maintenance upgrade of V3R7M1. G.2.1.3 AFP Font Collection You might have noticed AS/400 AFP font products installed on your system in the above displays. The AFP Font Collection is not installed as a licensed program product and, therefore, does not show up in these displays. Typically the various font libraries are installed into libraries such as QFNT300LA1. To check whether these libraries are present, type: WRKLIB QFNT* If the only library displayed is QFNTCPL, this contains the original 240-pel fonts supplied with OS/400 and is unlikely to be of use. You may also see libraries QFNT00 to QFNT15 and QFNT61 to QFNT65. These also contain 240-pel fonts. Assuming you use 300-pel fonts, the only sure way to check for the presence of these fonts on your system is to type: WRKFNTRSC FNTRSC(*ALL/*ALL) OBJATR(FNTCHRSET) From the list of fonts returned, use option 5 (Display attributes) to check the pel density of the selected font. If you do not have any 300-pel fonts installed, you need to order the AFP Font Collection. 
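If you prefer to script this check rather than page through the DSPSFWRSC display, the Check Product Option (CHKPRDOPT) command offers one way to test for PSF/400 from a CL program. This is only a sketch: it assumes the V4R4 numbering shown in Figure 254 (option 36 of 5769SS1), and the generic message monitor is an assumption to refine for your release.

PGM  /* Sketch: test whether PSF/400 (5769SS1 option 36 at V4R4) is installed */
  CHKPRDOPT PRDID(5769SS1) OPTION(36)
  MONMSG MSGID(CPF0000) EXEC(DO)  /* An escape message here means the option is missing or unusable */
    SNDPGMMSG MSG('PSF/400 does not appear to be installed.')
    RETURN
  ENDDO
  SNDPGMMSG MSG('PSF/400 appears to be installed.')
ENDPGM

On releases prior to V4R4, substitute OPTION(17) to match Figure 255.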
Resource ID Option Feature Description 5769SS1 36 5112 OS/400 - PSF/400 1-20 IPM Printer Support Resource ID Option Feature Description 5769SS1 17 5102 OS/400 - Print Services Facility 5769SS1 17 2924 OS/400 - Print Services Facility Resource ID Option Feature Description 5798AF3 *BASE 5050 AFP PrintSuite for AS/400 5798AF3 *BASE 2924 AFP PrintSuite for AS/400 5798AF3 1 5101 Advanced Print Utility for AS/400 5798AF3 1 2924 Advanced Print Utility for AS/400 Appendix G. Advanced Print Utility implementation case study 347 G.2.1.4 AFP Utilities/400 Use the DSPSFWRSC command again, and look for a screen similar to the example shown in Figure 257. Figure 257. AFP Utilities/400 resource display G.2.1.5 IBM AFP Driver for Windows From your chosen Windows PC, select Add Printer. Follow the wizard instructions through the first few screens until the printer manufacturer and model display appears, which is shown in Figure 258. Figure 258. Printer manufacturer and model display If you see the display shown in Figure 258, you have at least one of the IBM AFP Drivers installed or have the ability to install it. G.2.1.6 Client Access/400: Operations Navigator feature Click Start->IBM Client Access. Verify that the AS/400 Operations Navigator appears in the list of components. You can use either Client Access Express for Windows or the original Client Access for Windows 95/NT. For further information about Operations Navigator, refer to the Client Access documentation and Managing AS/400 V4R4 with Operations Navigator, SG24-5646. You may also find outline (resolution-independent) fonts installed (with a pel density attribute of “OUTLINE”). These are a good choice of font, but you must ensure your printer is capable of using them. Examples include IBM Infoprint 20, 21, 32, and 40. Note Resource ID Option Feature Description 5769AF1 *BASE 5050 IBM AFP Utilities for AS/400 5769AF1 *BASE 2924 IBM AFP Utilities for AS/400 348 IBM AS/400 Printing V G.3 Installing the software All the software may be installed “in-flight” without affecting system operations. We recomment that you follow this sequence: 1. PSF/400 2. AFP Utilities/400 3. AFP Font Collection 4. Advanced Print Utility Instructions for installing each software product are included in the “Program Directory” page shipped with the product. However, a quick guide to the installation is covered in the following section. G.3.1 PSF/400 On a command line, type: GO LICPGM Select option 11 (Install licensed programs). For V4R3 and earlier versions, install Option 17 (Print Services Facility/400). For V4R4 and later versions, install option 36, 37 or 38, depending on which software tier you purchased. See Figure 259. Figure 259. V4R4 and higher install options G.3.2 AFP Utilities/400 This product may be on the same CD as the PSF/400 feature. Again, go to the Install Licensed Programs menu. You will install product 5769-AF1. This may be at release V4R2 or V4R4. G.3.3 AFP Font Collection There is no need to install all the 70 and greater font libraries on the CD or tape media. The most likely ones you will want to install are listed in Table 39, together with their name on the CD-ROM and an explanation of what they contain. See 4.4.1, “Making the fonts available” on page 97, for a font utility that assists in installing the AFP Font Collection. Table 39. 
Commonly installed font libraries AS/400 font library name File name on CD-ROM media What they contain When to install QFNTCDEPAG CDEPAG Code Pages Always QFNT300CPL 300CPL 300-pel versions of the standard OS/400 fonts in QFNTCPL If printing to 300-pel printers 1 3 Licensed Product Option Program Option Description 5769SS1 36 OS/400 - PSF/400 1-20 IPM Printer Support 5769SS1 37 OS/400 - PSF/400 1-45 IPM Printer Support 5769SS1 38 OS/400 - PSF/400 Any Speed Printer Support Appendix G. Advanced Print Utility implementation case study 349 Additional font libraries you may want to install are listed in Table 40. The 240-pel versions of these libraries are also available. Table 40. Additional font libraries QFNT300LA1 300LA1 300-pel Expanded Core fonts for the Latin 1 language group If printing to 300-pel printers and using the Latin1 language group 2 QFNT240LA1 240LA1 240-pel Expanded Core fonts for the Latin 1 language group If printing to 240-pel printers 3 QFNTOLNLA1 OLNLA1 Outline fonts for the Latin 1 language group If printing to printers capable of using downloaded outline fonts 3 Notes: 1. Or higher-resolution printers emulating 300-pel printers 2. The various language groups and the languages they support are defined in the Program Directory. Therefore, you might install a different font library, for example QFNT300LA3. 3. To determine the relevant characteristics of your printer, refer to the table in Appendix E, “Printer summary” on page 313. AS/400 font library name File name on CD-ROM media What they contain What they provide QFNT300OCR 300OCR 300-pel Optical Character Recognition fonts Support for OCR characters and additional monospaced fonts QFNTOLNOCR OLNOCR Outline Optical Character Recognition fonts Support for OCR characters and/or additional monospaced fonts QFNT300APL 300APL 300-pel APL programming language fonts Support for APL characters and/or additional monospaced fonts QFNTOLNAPL OLNAPL Outline APL programming language fonts Support for APL characters and additional monospaced fonts QFNT300BM BM300 300-pel IBM BookMaster fonts To provide additional monospaced fonts QFNTOLNBM BMOLN Outline IBM BookMaster fonts To provide additional monospaced fonts QFNT300SYM SYM300 300-pel Symbols fonts To provide additional scientific, mathematical and special purpose characters AS/400 font library name File name on CD-ROM media What they contain When to install 350 IBM AS/400 Printing V G.3.4 Advanced Print Utility You cannot install this product through the GO LICPGM menu. Instead, use the following two commands (assuming the media is CD-ROM in a device named OPT01): • RSTLICPGM LICPGM(5798AF3) DEV(OPT01) OPTION(*BASE) • RSTLICPGM LICPGM(5798AF3) DEV(OPT01) OPTION(1) The most current release of this product is V3R7M1. G.3.5 Additional steps that may be required The software is now installed and ready to use. The following steps customize the software according to your local requirements. G.3.5.1 Setting the APU defaults Go to the main APU menu which is accessed by typing: GO QAPU/APU While you have a command line present, add any required font libraries to your library list, for example: ADDLIBLE QFNTCDEPAG ADDLIBLE QFNT300LA1 Create libraries for the APU print definitions and AFP resources, for example: CRTLIB LIB(APUDATA) TEXT(‘APU Print Definitions') CRTLIB LIB(IMAGES) TEXT(‘AFP Images’) CRTLIB LIB(OVERLAYS) TEXT(‘AFP Overlays’) Select option 6 (Set APU Defaults) and fill in the fields as desired. An example is shown in Figure 260. 
QFNTOLNSYM SYMOLN Outline Symbols fonts To provide additional scientific, mathematical and special purpose characters AS/400 font library name File name on CD-ROM media What they contain What they provide Appendix G. Advanced Print Utility implementation case study 351 Figure 260. Set APU Defaults example G.3.5.2 Program temporary fixes for APU While you are setting up your APU environment, consider ordering PTF SF62571. This may be loaded and applied immediately. The PTF fixes several minor APU problems, including AFP overlay and page segments moving with APU page margins, and only one SCS spooled file being displayed when selecting a spooled file for creating print definitions. G.3.5.3 APU font database synchronization If for any reason you did not install APU after installing the AFP Font Collection, the internal fonts database that APU uses will not reflect what is installed on the system. This situation might also arise if you later add a font library, or add custom fonts. To re-synchronize the fonts database, type: CALL QAPU/QYPUSYNC This may take a few minutes to run. You can run it at any time, but if the fonts are in use by an application (for example, being applied in an APU print definition) the program may fail. Wait a few moments, and then call the program again. G.4 Designing electronic documents There are many guides to creating printed documents, from typographer’s guides to magazine articles. While a complete understanding of this extensive skill is not required, a few simple guidelines will improve the quality and comprehension of your documents. First decide whether you are directly replicating your pre-printed stationery, or completely re-designing it. The former is easier, but this is also an ideal time to bring your documents up-to-date, so perhaps a combination of the two would be appropriate. For example you might keep the general look of an invoice, but with a new company logo. A more radical change would be to move from a landscape to portrait format. This will make the presentation more consistent, from paper to fax, for example. Now is a good time to question whether the information you have on the form really needs to be there, whether it is on the pre-printed form or printed from the application. You can also cut down on the number of additional Set APU Defaults Typechoices,pressEnter. Unit of measure . . . . *CM *INCH, *CM, *ROWCOL, *UNITS Decimal point character . . or , Font family . . . . . . COURIER LATIN1 Value F4 for List Color . . . . . . . . . *DEFAULT *DEFAULT, Value F4 for List Definition library . . APUDATA Name Code Page . . . . . . . T1V10285 Name F4 for List Addl. resource libs. . IMAGES Name OVERLAYS Name Name Name Job description . . . . QYPUJOBD Name Library . . . . . . . *LIBL Name, *LIBL 352 IBM AS/400 Printing V copies generated. Conversely, you can determine if an additional, automatic copy of the document would benefit your organization. Ideally, choose a single document (such as an invoice) and re-design it, keeping in mind that a similar document, such as a credit note or purchase order, may have slightly different fields. Allow space and the correct registration for addresses, especially if you use window envelopes. White (empty) space does not necessarily have to be filled and often adds clarity. If you or another department have a “mock-up” prepared using a Windows application, remember that this can form the basis for the actual AFP overlay, using the IBM AFP Driver for Windows. 
However most users tend to cram too much detail onto such forms and in particular do not consider registration of addresses within window envelopes (and concealment of confidential data away from the window). If the mock-up design does not contain more advanced features such as curved boxes and lines, you will later find it much easier to map text using APU if the AFP overlay is constructed using AFP Utilities/400. The latter method will also construct a much more efficient overlay, in terms of printing performance. G.4.1 Which fonts to use Fonts are a potential area of conflict, between the wishes of the marketing department and the document ease of creation. Sans serif fonts, such as Helvetica make bold headings, while a serif font, such as Times New Roman, provides a more formal look and makes large areas of text easier to read. An example of the latter might be Terms and Conditions printed as an AFP overlay on the back of an invoice. Numeric data needs to be in a monospaced font (where every character is of equal width) so that the figures align. Examples include Courier, Prestige, Gothic Text, Letter Gothic, and IBM BookMaster. Using fonts within the IBM AFP Font Collection will pay dividends if and when you move to alternative means of presentation for example faxing or viewing. Otherwise, you may need to create an AFP version of your corporate font. If this exists as a PostScript (Adobe Type 1) font, you can use the IBM Type Transformer product to create these fonts, in 240, 300, or AFP Outline format. Remember that you will still need a monospaced font if you have columns of figures to be aligned. See Chapter 4, “Fonts” on page 89, for more information on Type Transformer. A common technique is to construct all static areas of the form (for example, the overlay) in a typographic font, such as Helvetica and Helvetica Bold. Then map the variable text using a monospaced font throughout, such as Courier, Letter Gothic or BookMaster. This helps the recipient identify which data applies to them and which data is standard text. This is more appropriate for business documents such as statements, invoices and purchase orders. It is less appropriate for individual documents such as letters. G.5 Creating the resources With the above advice in mind, you can now start to create your company logos, signatures, electronic forms, and fonts. There are many tools available, but the ones provided in the ValuPak for AS/400 Printing should be sufficient. Table 41 compares and contrasts the different tools. Appendix G. Advanced Print Utility implementation case study 353 Table 41. 
Font creation tools comparison Note that you may use both tools together: AFP Utilities/400 for the bulk of the document, with text, shaded boxes and lines, the IBM AFP Driver to create page Tool Advantages Disadvantages AFP Utilities/400 • Easy to use, Quick learning curve • Call to AFP Viewer provides WYSIWYG view of electronic form • Produces efficient overlays • Overlay source created, stored, and saved as OS/400 objects on the AS/400 system • May be used from any OS/400 5250 session • Easy to correlate overlay elements positioning with that of the variable text • PC-sourced and designed elements, such as company logos or signatures may be created separately but built into the AFP Utilities/400 overlay Only a near-WYSIWYG view in design mode IBM AFP Driver for Windows • May be used from any Windows application • Permits use of advanced design elements, such as curved lines and boxes, angled text, corporate PC fonts, easy access to clip-art, etc. • Allows use of PC word-processor functions, such as spell-checking and text alignment • Requires setup and management of the creation process, for example shared folder, AS/400 database file, and driver installation • Backup and storage of the source Windows documents is a separate process to be managed • May be difficult to correlate characteristics of the overlay with that of the AS/400 system, for example lines per inch • Produces relatively inefficient AFP overlays; complex overlays may print slowly on smaller printers 354 IBM AS/400 Printing V segments of the company logo and signatures, the IBM AFP Driver to create an overlay of Terms and Conditions. Use the appropriate tool for the appropriate task! An additional possibility is to use the “Define boxes” facility within APU itself. This is a very limited method. There is no WYSIWYG facility at all, nor shaded or curved boxes. Even lines must be drawn as boxes. However if your form is very simple, it may be appropriate to use this facility and, therefore, keep all design elements within APU. G.6 Building and testing APU print definitions This step involves mapping the variable text in your spooled files to the new positions in the electronic form. Before you start, we advise that you collect several different examples of your spooled files and place them in a special output queue. One is supplied with APU (QYPUOUTQ in QAPU), but we suggest you create one in the same library you use for your print definitions, for example: CRTOUTQ OUTQ(APUDATA/APUTEST) In addition, create two queues for handling successful and unsuccessful processing of APU print definitions, for example: CRTOUTQ OUTQ(APUDATA/APUOK) CRTOUTQ OUTQ(APUDATA/APUFAIL) Now you need to locate several sample spooled files that you will use to build your new documents. Pick a simple one, a complex one, and any that are slightly different, for example extending to several pages or with different sequences of data. Store them in the APUTEST queue and ensure they have SAVE=*YES set. We will refer to these spooled files as the “original SCS spooled files”. It may be helpful to print them out, but do not worry about the fonts or page rotation. We will use APU to set these as required. Follow the APU User’s Guide to set up the basic elements of the print definition. If you have directly replicated your pre-printed stationery through the AFP overlay, you may not even need to perform any text mapping (“field mapping”). 
As a minimum, we suggest the following settings: • Print definition name: Same as the spooled file name • Set print definition attributes: Hard-code the page size (for example, US Letter 11 by 8.5 inches or A4 size 11.69 x 8.27 inches), the page rotation (probably 0 or 90), and the margins (set to 0). Set the default font family to a monospaced font, such as Courier, for now. Save this print definition. Then in the Define a Copy section, use the following settings: • Set page layout options: Leave these options at their default settings for now, unless you want to name a Back Overlay. • Define field mapping; Define constants; Define boxes; Define page segments: Leave these settings at their default settings. • Define overlays: Name your AFP overlay here. Appendix G. Advanced Print Utility implementation case study 355 You now have enough set up in APU to print one of your original SCS spooled files. Refer to “Manually Associating a Print Definition with a Spooled File” in Chapter 5 of the APU User’s Guide. Fill in the names of the print definition and the print definition library name (you may be able to select the default settings). In the Post processing SUCCESS/FAILURE fields, set each of these to *OUTQ, and name the output queues as APUOK and APUFAIL respectively. On the second panel, set the output queue name to that of your actual printer output queue (for example, PRT01). Now press Enter, and observe the bottom left of your screen. A sequence of equals signs and asterisks indicates the progress of the Apply Print Definition process. If a message tells you that the print definition was applied successfully, go to the output queue for your printer and observe the new AFP spooled file there. When this is printed, decide which, if any, of your variable text requires mapping into position and return to the APU Print Definition (Define field mapping) section. If the print definition was not successfully applied, check your job log as to the cause of the failure. The most common causes of failure are: • The original SCS spooled file was not in a RDY state (for example it was HLD or SAV) • The name of your print definition was not found or did not agree with the exact name of the original SCS spooled file • You do not have the APU print definition library in your library list If there was a problem, the original SCS spooled file should have been moved to the APUFAIL queue. You can move it back to the APUTEST queue, correct the problem as above, and re-run the test. If successful, the original SCS spooled file will have been moved to the APUOK queue, and you can again move it back to the APUTEST queue or simply use the APUOK queue as your source for further tests. The flowchart in Figure 261 on page 356 shows the possible results of APU processing. 356 IBM AS/400 Printing V Figure 261. APU processing flowchart G.6.1 Other common problems • Q. I can’t see my sample spooled file on the Select A Sample Spooled File screen A. Change the output queue to reflect the one you are working with (APUTEST for example) or change the User to *ALL, or your current user ID, or that of the person who produced the sample spooled file. You cannot change the default output queue or user ID for this screen. Note: If you can only see one sample spooled file in the list, but you know you have several in the test queue, this is a bug that can be fixed by using PTF SF62571. • Q. The APU process produced an AFP spooled file, which printed with my remapped text but with no AFP overlays, or printed in the wrong font. 
A. Ensure that the library containing your AFP resources (fonts, overlays, and page segments) is in your library list.
• Q. Some of my text was formatted correctly, but some is missing, and there are random characters on the page.
A. The formatting you created in the “Define field mapping” section does not exactly match the underlying data. Unmapped data will still be printed. This unmapped data may be partial elements of your data, hence the appearance of “random” characters!
G.6.2 Viewing APU output
You may find it convenient to view your APU-enhanced spooled files while developing and testing them. It is also possible for this to be a low-cost means for users to view output instead of printing it. To do this, start the Operations Navigator component of Client Access/400 from your PC. This assumes you have already set up Operations Navigator within Client Access/400. Select Basic Operations and either Printer Output or Printers as preferred. These are the equivalents of the AS/400 WRKSPLF and WRKWTR commands. If you have a lot of spooled files or output queues, you will improve screen refresh performance by highlighting one of the above choices and selecting Options->Include from the menu bar. Specify your preferred printer and output queues to filter the view. See Figure 262.
Figure 262. Specifying preferred printer and output queues
When you double-click on a spooled file, the AFP Viewer is invoked automatically and your output may be seen in a WYSIWYG view (see Figure 263 on page 358 for an example). This may save you several trips to the printer and a lot of paper!
Figure 263. AFP Viewer: Spooled file
If you find that the AFP Viewer cannot locate the AFP resources, check the settings of Options->Preferences->More->Resource Path. Ensure that you are pointing to an appropriate path, for example the network drive on the AS/400 system where the AFP resources were created.
G.7 Automatically starting the APU Monitor
This section provides advice about automating the process of capturing SCS spooled files, applying the APU print definitions, and sending the new AFP spooled files to various destinations. The APU Monitor batch process is intended to be started from the main APU menu (option 4), that is, interactively. Many customers prefer that the job be started automatically along with their other jobs at system startup. In addition, there is an issue with the APU Monitor: by default, it runs in QBATCH as a never-ending job (QYPUMON). If QBATCH has a limit on the number of active jobs (such as just 1), this prevents other batch jobs from starting in QBATCH unless QYPUMON is ended. There are at least two ways of handling these issues:
• Create an entirely new subsystem, just for the APU Monitor
• Modify QBATCH to allow multiple jobs to run
G.7.1 Creating a separate APU subsystem
The following procedure creates a new subsystem for the APU Monitor. If the naming convention is followed, this procedure still allows you to view the APU Monitor status, and to stop and restart it from the main APU menu if required.
1. Create a new subsystem called APUMON by copying the QBATCH subsystem description:
CRTDUPOBJ OBJ(QBATCH) FROMLIB(QSYS) OBJTYPE(*SBSD) NEWOBJ(APUMON)
2. Remove the three default job queue entries from APUMON:
RMVJOBQE SBSD(QSYS/APUMON) JOBQ(QGPL/QBATCH)
RMVJOBQE SBSD(QSYS/APUMON) JOBQ(QGPL/QS36EVOKE)
RMVJOBQE SBSD(QSYS/APUMON) JOBQ(QGPL/QTXTSRCH)
3. Create a job queue called APUMON in QSYS:
CRTJOBQ JOBQ(QSYS/APUMON) TEXT('Job Q for APU Monitor')
4. Add a new job queue entry to the APUMON subsystem:
ADDJOBQE SBSD(QSYS/APUMON) JOBQ(QSYS/APUMON)
5. Make a copy of the APU-supplied job description QYPUJOBD in library QAPU, and place it somewhere convenient, such as QSYS:
CRTDUPOBJ OBJ(QYPUJOBD) FROMLIB(QAPU) OBJTYPE(*JOBD) TOLIB(QSYS)
6. Change QSYS/QYPUJOBD to refer to job queue QSYS/APUMON:
CHGJOBD JOBD(QSYS/QYPUJOBD) JOBQ(QSYS/APUMON) PRTDEV(PRT01)
The reference to PRT01 is useful if you know you will always be printing to a single printer (the system printer) or if you want to define a default printer device for APU jobs to use.
7. Modify the APU Defaults (option 6 from the main APU menu) to use the customized job description, for example QSYS/QYPUJOBD. Make sure you have the required font, code page, and APU print definition libraries in your library list first. Otherwise, you will not be able to successfully exit option 6.
8. Test the new subsystem by starting it interactively:
STRSBS SBSD(APUMON)
Then start the APU Monitor from the main APU menu, option 4. Test with an SCS spooled file placed on a monitored output queue. It should be picked up, and a print definition should be applied and printed. If the SCS spooled file is already on the output queue, it may be necessary to hold and release it to initiate the process.
9. End the APU Monitor by selecting option 5 from the main APU menu, and end the APUMON subsystem:
ENDSBS SBS(APUMON) OPTION(*IMMED)
10. Test that the job can be started in batch by using the following commands:
STRSBS SBSD(APUMON)
SBMJOB CMD(CALL PGM(QAPU/QYPUDQMN)) JOB(QYPUMON) JOBD(QSYS/QYPUJOBD)
11. Test the job again with an SCS spooled file. If it is successful, create a small CL program with the above two lines of code, and add the program to your startup procedures.
Note: You could use any JOB name above, but if you use QYPUMON, you can then check the status of the APU Monitor from the main APU menu using option 3. Also note that you can stop and start the APU Monitor using options 4 and 5.
The above is one method of automating the APU Monitor. It creates a totally separate subsystem that you can take down or bring up at will without disturbing any other batch operations. Because it is called APUMON, it is more likely to appear at the top of the list in WRKACTJOB, which is convenient. Another method of automating the APU Monitor is to modify QBATCH itself.
G.7.2 Modifying QBATCH to allow multiple jobs to run
You could use either of the following commands:
CHGJOBQE SBSD(QBATCH) JOBQ(QBATCH) MAXACT(n+1)
CHGJOBQE SBSD(QBATCH) JOBQ(QBATCH) MAXACT(*NOMAX)
Here, n is the current maximum number of active jobs, and n+1 simply adds one more job to the MAXACT limit. There are several issues here. One is the performance implication of unlimited jobs running in QBATCH. Another is that if there is still a limit, QYPUMON may still be unable to run or may prevent another job from running. If you continue with this procedure, you must add the CL command CALL PGM(QAPU/QYPUDQMN) from the procedure above to your startup CL programs.
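As a starting point, the startup addition could be packaged as a small CL program similar to the following sketch. It assumes the separate APUMON subsystem and the QSYS/QYPUJOBD job description created in G.7.1; for the QBATCH approach just described, drop the STRSBS command and let SBMJOB default to QBATCH. The CPF1010 message ID (subsystem already active) is monitored so the program can be rerun safely.

PGM
STRSBS     SBSD(APUMON)
MONMSG     MSGID(CPF1010)  /* Ignore "subsystem already active" */
SBMJOB     CMD(CALL PGM(QAPU/QYPUDQMN)) JOB(QYPUMON) +
             JOBD(QSYS/QYPUJOBD)
ENDPGM

Compile it with CRTCLPGM and call it from your system startup program (the program named in the QSTRUPPGM system value).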
Instead of starting a separate APUMON subsystem though, you need an instruction that says, “Whenever the QSPL subsystem starts, I want QYPUMON to start as well”. This is called an auto-start job entry. Use a command similar to: ADDAJE SBSD(QSPL) JOB(QYPUMON) JOBD(QAPU/QYPUJOBD) G.8 Using APU for production printing Only when you have created some working APU print definitions and started to have them automatically applied through the APU Monitor, you will see how powerful the APU batch process is. This section describes case studies where various elements of AFP and APU have been exploited to meet real customer requirements. G.8.1 Using APU Monitor Actions An APU Monitor Action is a single or repeated application of your APU print definition to an SCS spooled file. You might use this process to: • Produce two copies of an AFP spooled file, sent to two or more different printers, perhaps in different locations • Perform some other form of output, for example to fax the AFP spooled file or send it to an archival system, as well as printing it • Store a copy of the AFP spooled file on an output queue for reprinting in case the output becomes damaged or spoiled • Route a non-AFP spooled file to a different printer or location The above list largely assumes you are generating multiple identical copies of the AFP spooled file. There is no reason why the AFP spooled files should not differ slightly. The following section describes how to setup the different APU Monitor Actions to realize the sample requirements presented previously. Appendix G. Advanced Print Utility implementation case study 361 G.8.1.1 Sending an AFP spooled file to multiple destinations Let’s suppose you want to print a formatted AFP report in two different locations. The data in the report is identical in all respects. You want to print the address of the receiving location at the top of the report. Obviously, this address will be different. Traditionally, the printers would have been loaded with pre-printed headed stationery to achieve this. Let’s assume you created an electronic overlay of just the address, called ADDRESS. You store this in a location-specific library, for example LONDON. The overlay for the second location is also called ADDRESS, but stored in a library called DUBLIN. The APU print definition is common to both locations, so you store that in your general-purpose library (for example APUDATA, along with any other general-purpose overlays: with lines, boxes, and shading for example). Finally, you add the libraries to the PSF configuration objects for the location printers. LONDPRT1 has libraries APUDATA and LONDON, and DUBPRT1 has libraries APUDATA and DUBLIN. The APU Monitor action looks like the display shown in Figure 264. Figure 264. APU Monitor action display The settings on the next page can be changed as required. Now press F15 (Next Action), and repeat the above settings except for the following two parameters: Run option . . ......... *REPRINT *NORMAL, *NOCOPY, *REPRINT Output queue . ......... DUBPRT1 Name, *DEV, *SPOOLFILE Note the use of the *REPRINT run option. This is very important from an AS/400 performance point of view. Since you have already created the AFP spooled file, there is no need to go through the processing of it again. Simply send it to the remote printer. The AFP spooled file is already “tagged” with a reference to use the Dublin address overlay. This is found in the printer’s Device Resource library list in the PSF configuration object. 
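A hedged sketch of the PSF configuration change implied above follows. The CHGPSFCFG command with the DEVRSCLIBL (device resource library list) parameter is an assumption to verify against your OS/400 release, and the object names simply mirror the printer names used in this example:

CHGPSFCFG PSFCFG(QGPL/LONDPRT1) DEVRSCLIBL(APUDATA LONDON)
CHGPSFCFG PSFCFG(QGPL/DUBPRT1)  DEVRSCLIBL(APUDATA DUBLIN)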
Define action for output spooled file Sequence . . . . . . : 100 Text . . . . . . . . : Send report to LONDON printer Action . . . . . . . : 1 / 1 Panel . . . . . . . . : 1 / 2 Type choices, press Enter. User exit before . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Print Definition . . . *SPOOLFILE Name, *SPOOLFILE, *NONE Library . . . . . . . *PRTDEFLIB Name, *PRTDEFLIB, *LIBL Run option . . . . . *NORMAL *NORMAL, *NOCOPY, *REPRINT User exit middle . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Output device . . . . . *JOB Name, *JOB Output queue . . . . . LONDPRT1 Name, *DEV, *SPOOLFILE Library . . . . . . . *LIBL Name, *LIBL More... F12=Cancel F15=Next action 362 IBM AS/400 Printing V The above approach also has benefits in maintaining the forms. If and when the location telephone or fax number changes, you simply make one small change to one overlay object. No other important parts of the print production are affected. Conversely, if the company decides on a change to the font or lines/boxes on the Invoice overlay, the main Invoice overlay can be changed, and the updated result will be in immediate effect at all locations, local and remote. G.8.1.2 Sending (slightly different) AFP spooled files to multiple destinations You should be able to see that the above example may easily be extended to placing the AFP spooled file copies on output queues that actually point to other devices, such as a fax output queue or an output queue monitored by an archiving process. You could enhance the process slightly and request that the faxed AFP spooled file contains a fax message along the lines of “This is your faxed copy; a printed confirmation will be with you in 24 hours”. We can easily generate an overlay to convey this message. However, the addition of this overlay is no longer location-specific (London/Dublin), but action-specific (print and fax? or just print?). Let’s assume that to generate a combined faxed/printed document, the application places the SCS spooled file on a specific output queue, called FAXPRINT (this could also be a manual process). You have a print definition called INVOICE, in library APUDATA. You also have a copy of this print definition, called INVOICEF, which is the same print definition but with the “Fax Message” overlay described above included in all the page formats. There are two keys to make this work. First, you monitor the FAXPRINT output queue in Define Selection for Input Spooled File. Second, having made a successful selection, you have two APU Monitor actions as before, in Define Action for Output Spooled File. The difference this time is that, as well as different printer output queues (one of them being the fax queue LONDFAX), the second APU Monitor Action specifies a different print definition name as shown in Figure 265 through Figure 267. Figure 265. APU Monitor Action: Specifying a different print definition Define selection for input spooled file Sequence . . . . . . : 110 Text . . . . . . . . : Send report to LONDON fax queue & printer Type choices, press Enter. File . . . . . . . . . *ALL Name, Generic*, *ALL Output queue . . . . . FAXPRINT Name, Generic*, *ALL Library . . . . . . . *LIBL Name, *LIBL User . . . . . . . . . *ALL User, Generic*, *ALL User Data . . . . . . . *ALL User Data, Generic*, *ALL Form Type . . . . . . . *ALL Form Type, Generic*, *ALL Program . . . . . . . . *ALL Name, Generic*, *ALL Library . . . . . . . Name, *LIBL Appendix G. 
Advanced Print Utility implementation case study 363 Figure 266. Define action for output spooled file (Part 1 of 2) Figure 267. Define action for output spooled file (Part 2 of 2) Note the different Run option. The *NOCOPY name is a little misleading. The “no copy” refers to the internal process of copying the original SCS spooled file and, in this case, no internal copy is required. What actually happens is that only some re-processing is necessary, for example the application of a different print definition. Define action for output spooled file Sequence . . . . . . : 110 Text . . . . . . . . : Send report to LONDON fax queue & printer Action . . . . . . . : 1 / 1 Panel . . . . . . . . : 1 / 2 Type choices, press Enter. User exit before . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Print Definition . . . *SPOOLFILE Name, *SPOOLFILE, *NONE Library . . . . . . . *PRTDEFLIB Name, *PRTDEFLIB, *LIBL Run option . . . . . *NORMAL *NORMAL, *NOCOPY, *REPRINT User exit middle . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Output device . . . . . *JOB Name, *JOB Output queue . . . . . LONDPRT1 Name, *DEV, *SPOOLFILE Library . . . . . . . *LIBL Name, *LIBL More... F12=Cancel F15=Next action Define action for output spooled file Sequence . . . . . . : 110 Text . . . . . . . . : Send report to LONDON fax queue & printer Action . . . . . . . : 2/2 Panel . . . . . . . . : 1 / 2 Type choices, press Enter. User exit before . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Print Definition . . . INVOICEF Name, *SPOOLFILE, *NONE Library . . . . . . . *PRTDEFLIB Name, *PRTDEFLIB, *LIBL Run option . . . . . *NOCOPY *NORMAL, *NOCOPY,*REPRINT User exit middle . . . *NONE Name, *NONE Library . . . . . . . Name, *LIBL User parameter . . . Value Output device . . . . . *JOB Name, *JOB Output queue . . . . . LONDFAX Name, *DEV, *SPOOLFILE Library . . . . . . . *LIBL Name, *LIBL More... F12=Cancel F14=Previous action F15=Next action 364 IBM AS/400 Printing V For a summary, see Table 42. Table 42. APU action and Run option summary G.8.1.3 Saving a copy of the AFP spooled file You may decide to keep a copy of the AFP spooled file for 24 hours to guard against the printed documents being spoilt (for example being torn in a mailing machine). You could keep a copy of the original SCS spooled file, but you would then have to go through the APU processing again. To do this, simply create additional output queues, for example LONDPRT1S (“S” for “save”). Name this queue in the last APU Monitor Action, with a Run option of *REPRINT and Save *YES. You could devise a manual or automatic process for clearing down the output queue daily, using the CLROUTQ command. G.8.1.4 Routing non-AFP spooled files through APU As a bonus, the APU Monitor Actions provide a reasonable method of spooled file re-routing. Suppose there is the possibility that users might send spooled files ineligible for AFP processing to the printer, for example a screen print. You need to handle these cases (referred to as a “drop-through” because the spooled file does not meet any of the APU Monitor Action criteria and “drops through” the list of actions to the end). To do this, add an action entry near the end of the list to capture these cases. It is likely that the spooled file name or the output queue will be generic, with use of the “*” wildcard. 
On the third Action Entry (Define Action for Output Spooled File), the print definition name is set as *NONE. That is, no APU print definition will be applied, and no AFP spooled file will be created. The re-routing is done in the second Action Entry (Define action for input spooled file). In the Success field, enter the name of the desired target output queue. See Figure 268 for an example. Figure 268. Define action for input spooled file: Success field If you have: Use Run option: Only one APU Monitor Action *NORMAL Second, or subsequent Action, same print definition *REPRINT Second, or subsequent Action, different print definitions *NOCOPY Define action for input spooled file Sequence . . . . . . : 900 Text . . . . . . . . : Drop through for screen prints Type choices for input spooled file after successful or failed processing respectively, press Enter. Success . . . . . . . . *OUTQ *NONE, *HOLD, *DELETE, *OUTQ Output queue . . . . QPRINT Name Library . . . . . . *LIBL Name, *LIBL Failure . . . . . . . . *HOLD *NONE, *HOLD, *DELETE, *OUTQ Output queue . . . . Name Library . . . . . . Name, *LIBL Appendix G. Advanced Print Utility implementation case study 365 G.9 Documentation It is important to define a good naming convention for all your AFP resources, print definitions, libraries, and so on, from the beginning. Remember that OS/400 is much more limited in its names than Windows. For example, an overlay name is restricted to eight characters. G.9.1 Documenting APU component names Items that you should record include: • APU Defaults (option 6 from the main APU menu) • APU Print Definition names and libraries Page Format names and copy names • APU Print Definition Attributes • AFP resource names and libraries: – Overlays – Page segments – Fonts – Page segments and fonts used within an overlay • Source document names and location if not on the AS/400 system • APU Print Definition page format selection rules • APU Monitor Action Entries • Any special notes about the application A working spreadsheet is very useful to have alongside you while creating the documents. It then becomes a valuable documentation source for the completed project. See Table 43 for an example. Table 43. Working spreadsheet example The example in Table 43 shows only a suggestion for the column headings. For example, if you were also using AFP Utilities/400 to create the overlays, you might have a column to indicate their names and locations. The example also shows another column for any page segments used within the AFP Utilities overlays. IBM AFP Naming Conventions - London Spool File Number Print Definition Name APU Page Formats APU Copies Overlays AS/400 Library APU Monitor Steps WinNT Path for source overlay documents Notes LONDON INVOICES INV694000 INV694000 INVFIRST CLIENT INVFIRST APUDATA 10 N:\AFP\INVOICE\INVFIRST.DOC Picked from Bin 1 (plain paper) INVBAC N:\AFP\INVOICE\INVBAC.DOC INVBAC prints on reverse side OFFICE INVFIRST Picked from Bin 2 (yellow paper, pre-punched) OFFICE N:\AFP\INVOICE\OFFICE.DOC OFFICE overlay prints on front side INVANY CLIENT INVANY N:\AFP\INVOICE\INVANY.DOC Picked from Bin 1 (plain paper) INVBAC INVBAC prints on reverse side OFFICE INVANY Picked from Bin 2 (yellow paper, pre-punched) OFFICE OFFICE overlay prints on front side 366 IBM AS/400 Printing V A separate page in the spreadsheet could record the APU Monitor Action steps. See Table 44 for an example. Table 44. Spreadsheet recording APU Monitor Action steps Such spreadsheets are valuable only if they are kept up-to-date! 
Much of the required information may be printed directly from APU, using option 5 (Display contents) or option 6 (Print contents) from the Work with Print Definitions menu in APU. Note that option 6 may generate many pages. It is usually better to copy and paste the required information into a spreadsheet or other PC document. G.9.2 Where APU print components are stored For the purposes of backup or transfer to another system, Table 45 records how and where the main APU components are stored on the AS/400 system. Table 45. APU component storage information IBM AFP APU Monitor Actions Action Action Selection for Input Spooled File Action for Input Spooled File Action #1 for Output Spooled File No. Name SPLF name OUTQ USER Success OUTQ Failure OUTQ Print Definition Run opt Device OUTQ Hold Save Outbin 10 London Invoices INV694000 PRT01A *ALL *HOLD n/a *OUTQ APUFAIL APUDATA\INV694000 *NORMAL PRT01 PRT01 *NO *YES *DEVD APU component OS/400 object type Object attribute Object name Library APU print definition *USRSPC APUPRTDEF User-defined User-defined APU Monitor Action Rules *FILE PF QAYPUMA0 QUSRSYS APU fonts database *FILE PF QAYPUFN0 QUSRSYS © Copyright IBM Corp. 2000 367 Appendix H. AS/400 to AIX printing There are a number of ways of sending AS/400 spooled files to an Infoprint Manager for AIX server. Each one has different advantages depending on a variety of considerations, such as the data stream type of the spooled file and the supported target printer, number and diversity of applications and printers, customer preference, and available programming skills. This appendix attempts to provide guidelines to the different approaches, when they could be used, and additional tips. This appendix has been written from the view point of an AS/400 user, and assumes that an Infoprint Manager for AIX specialist is available. H.1 TCP/IP versus SNA There are basically two diverse ways of using Infoprint Manager for AIX as a server for AS/400 printing. Sending files to the server over TCP/IP allows you to take advantage of many of the features of Infoprint Manager for AIX to manage your output, such as queue management, printer pooling, and sharing printers with other clients. PSF Direct, over an SNA connection, allows the AS/400 system to use the Infoprint Manager for AIX connected printer as if it were attached to the AS/400 system directly. H.1.1 Sending spooled files using TCP/IP The TCP/IP command to send a spooled file from the AS/400 system to Infoprint Manager for AIX is LPR. The AS/400 system has an alias for LPR, which is SNDTCPSPLF. These two commands are equivalent. You can use either command directly on the command line or in a CL program, or indirectly by setting up a remote output queue. H.1.1.1 Remote Output Queue Figure 269 on page 368 shows an example of creating a Remote Output Queue. In this particular example, all spooled files are of DEVTYPE(*AFPDS). No transformation needs to happen to these types of files. 368 IBM AS/400 Printing V Figure 269. Remote Output Queue creation example The parameter descriptions shown in Figure 269 are explained here: • OUTQ: Give the output queue on the AS/400 system a meaningful name that corresponds to the destination on the Infoprint Manager for AIX system. • RMTSYS: Remote System. This is the system name of the Infoprint Manager for AIX server. Enter the actual name here, and then add the AIX system’s host name to the AS/400 host table using the AS/400 CFGTCP option 10. 
Alternately, you can enter the value *INTNETADR, and then use the INTNETADR field to specify the address directly. If the name is in lower case, enclose it between single quotes. • RMTPRTQ: Remote Printer Queue. This corresponds to the Infoprint Manager for AIX Logical Destination (not the Infoprint Manager for AIX queue). If the name contains lower case, enclose it between single quotes. Create Output Queue (CRTOUTQ) Output queue . . . . . . . . . . OUTQ > IP60AIX Library . . . . . . . . . . . > QUSRSYS Maximum spooled file size: MAXPAGES Number of pages . . . . . . . *NONE Starting time . . . . . . . . Ending time . . . . . . . . . + for more values Order of files on queue . . . . SEQ *FIFO Remote system . . . . . . . . . RMTSYS > 'INFOPRNT' Remote printer queue . . . . . . RMTPRTQ > 'IP60-l' Writers to autostart . . . . . . AUTOSTRWTR > 1 Queue for writer messages . . . MSGQ QSYSOPR Library . . . . . . . . . . . *LIBL Connection type . . . . . . . . CNNTYPE > *IP Destination type . . . . . . . . DESTTYPE > *OTHER Host print transform . . . . . . TRANSFORM > *NO Manufacturer type and model . . MFRTYPMDL *IBM42011 Workstation customizing object WSCST *NONE Library . . . . . . . . . . . Image configuration . . . . . . IMGCFG *NONE Internet address . . . . . . . . INTNETADR > Destination options . . . . . . DESTOPT > '-odatat=afpds' Print separator page . . . . . . SEPPAGE *YES User defined option . . . . . . USRDFNOPT *NONE User defined object: USRDFNOBJ Object . . . . . . . . . . . . *NONE Library . . . . . . . . . . Object type . . . . . . . . . User driver program . . . . . . USRDRVPGM *NONE Library . . . . . . . . . . . Spooled file ASP . . . . . . . . SPLFASP *SYSTEM Text 'description' . . . . . . . TEXT > 'OutQ to send AFPDS to IP60 attached to IPM for AIX' Display any file . . . . . . . . DSPDTA *NO Job separators . . . . . . . . . JOBSEP 0 More Operator controlled . . . . . . OPRCTL *YES Data queue . . . . . . . . . . . DTAQ *NONE Library . . . . . . . . . . . Authority to check . . . . . . . AUTCHK *OWNER Authority . . . . . . . . . . . AUT *USE Appendix H. AS/400 to AIX printing 369 • CNNTYPE: Connection Type. Must be specified as *IP. • DESTTYP: Destination Type. Must be specified as *OTHER. • TRANSFORM: Host print transform. This parameter determines whether the AS/400 spooled file is sent as is or is translated to ASCII. For example, *AFPDS spooled files are not transformed. Spooled files that are *SCS will need to be transformed. This will be discussed further in H.2, “AS/400 spooled file data streams” on page 372. • MFRTYPMDL: Manufacturer Type and Model. If you specify TRANSFORM(*YES), use this parameter to specify how the transform is to take place. If you are using an IBM supplied transformation, you would enter the name here, such as *IBM4332 or *HP5SI. If you create a Workstation Customization Object, enter the value *WSCST here, and use the next parameter to name the object and its library. • WSCST: Workstation Customizing Object. Use this entry to name your own Workstation Customization Object. • DESTOPT: Destination Options. This parameter allows you to specify some of the processing options to Infoprint Manager for AIX. Enclose the options in single quotes. Details on using the DESTOPT are in H.4.4, “Destination Options” on page 381. All files that arrive in this queue will be sent to the same Infoprint Manager for AIX Logical Destination and will have the same Destination Options. 
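For comparison with the *AFPDS queue shown in Figure 269, a remote output queue that carries SCS output and converts it with host print transform might be created as in the following sketch. The queue name and the choice of *HP5SI are illustrative assumptions only; the host name and logical destination are the ones used in Figure 269:

CRTOUTQ OUTQ(QUSRSYS/IP60SCS) RMTSYS('INFOPRNT') RMTPRTQ('IP60-l')
        AUTOSTRWTR(1) CNNTYPE(*IP) DESTTYPE(*OTHER)
        TRANSFORM(*YES) MFRTYPMDL(*HP5SI)
        TEXT('SCS converted to PCL for IPM for AIX')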
This method can be used when you have a limited number of destinations with a limited variety in how each file will be handled. H.1.1.2 SNDTCPSPLF command (LPR) For greater flexibility in how each file is handled, use the SNDTCPSPLF command and specify the Remote Printer Queue, Destination Options and other parameters as appropriate. Section H.3.3, “Output queue monitor” on page 377, covers how to build a monitor application to automate the selecting of spooled files and setting the parameters for the SNDTCPSPLF command. Figure 270 on page 370 shows an example of the SNDTCPSPLF command. In this particular example, the spooled file to be sent is DEVTYPE(*SCS). It will be transformed to “flat ASCII” using host print transform and a custom Workstation Customization Object. See H.4.1, “Processing line AS/400 SCS files as ‘flat ASCII’” on page 378, for more information on processing “flat ASCII”. The destination options name the form definition to be used and a file on the AIX system that contains additional processing instructions. 370 IBM AS/400 Printing V Figure 270. SNDTCPSPLF or LPR command screen Using RMTSYS, RMTPRTQ, DESTTYP, TRANSFORM, MFRTYPMDL, WSCST, and DESTOPT is the same as using the remote output queue described in H.1.1.1, “Remote Output Queue” on page 367. • FILE: Spooled file. The name of the spooled file to send. • JOB: Job name/user/number. Specify the three components of the Job identifier. In an interactive environment, these values can be retrieved from the WRKSPLF or WRKOUTQ panels and either pressing F11 to see the appropriate view or entering an 8 to view the spooled file attributes. For an automated batch process, see H.3.3, “Output queue monitor” on page 377, for a discussion on the DTAQ (Data Queue) parameter in an Output Queue Description. • SPLNBR: Spooled file number. If there is only one spooled file of a given name in the Job, you can specify *ONLY. Otherwise, specify the exact number or *LAST. H.1.2 PSF Direct PSF Direct provides a direct-print connection between an MVS, VSE, VM, or AS/400 system and a printer defined to IBM Infoprint Manager for AIX. PSF Direct gives you control of key print processes from your AS/400 system. An Infoprint actual destination appears to be directly attached to your AS/400 system. Jobs print without delay because they are not spooled by the Infoprint Manager server. Because the AS/400 controls the print process, it returns job-completion and error messages to the AS/400 systems operator. To use PSF Direct, you need the IBM Communications Server for AIX to communicate between the AS/400 system and AIX. You create printer and APPC definitions on the AS/400 system so that print jobs can be directed to the Infoprint Manager for AIX printer. All spooled data on the AS/400 system is converted to IPDS before being sent to the server. Send TCP/IP Spooled File (SNDTCPSPLF) or Send TCP/IP Spooled File (LPR) Remote system . . . . . . . . . RMTSYS 'INFOPRNT' Printer queue . . . . . . . . . PRTQ 'IP60-l' Spooled file . . . . . . . . . . FILE QSYSPRT Job name . . . . . . . . . . . . JOB DSP01 User . . . . . . . . . . . . . MIRA Number . . . . . . . . . . . . 013140 Spooled file number . . . . . . SPLNBR *ONLY Destination type . . . . . . . . DESTTYP *OTHER Transform SCS to ASCII . . . . . TRANSFORM *YES User data transform . . . . . . USRDTATFM *NONE Library . . . . . . . . . . . Manufacturer type and model . . MFRTYPMDL *WSCST Internet address . . . . . . . . INTNETADR Workstation customizing object WSCST FLATASCII Library . . . . . . . . . 
. . QUSRSYS Delete file after sending . . . DLTSPLF *NO Destination-dependent options . DESTOPT '-of=F1STD -odatat=line -oparmdd=/u/afpres/parmstd132' Print separator page . . . . . . SEPPAGE *YES Appendix H. AS/400 to AIX printing 371 The printer is defined to IBM Infoprint Control on AIX. A host receiver on AIX passes the IPDS from the AS/400 system to a secondary print process, depending on the connection type and data stream of the destination printer. If the target printer uses the PCL or PPDS data streams, this process will perform the appropriate translation. After PSF Direct is configured, users or applications can use normal print submission processes to send AS/400 spooled files to the Output Queue corresponding to the PSF Direct printer. PSF/400 automatically directs the output to the PSF Direct server. Only one host can print to a given device at a time using a PSF Direct. The session needs to be ended or released before you can use the printer for a PSF Direct session from another mainframe or from IBM Infoprint Control. On the AS/400 system, you can use the timer values in the PSF Configuration Object to automatically release the writer from one system so another can use the printer. See the appropriate version of AS/400e Printer Device Programming, for more details on sharing IPDS printers. Other PSF hosts have similar timers. See the documentation for each product respectively. The differences between PSF Direct, using SNA, and printing over TCP/IP are illustrated in Table 46. Table 46. Differences between PSF Direct, using SNA, and printing over TCP/IP Function TCP/IP printing to Infoprint Manager for AIX PSF Direct Resources Must reside on the Infoprint Manager for AIX server. Reside on AS/400 AS/400 Spooled file types supported. *SCS (using HPT), *AFPDS, *USERASCII *SCS, *IPDS, *AFPDS, *LINE, *AFPDSLINE Output Printer Data Streams supported Any data stream supported by Infoprint Manager for AIX: IPDS, PCL, PPDS, PostScript (PostScript would only work if it is generated by a user program as a *USERASCII spooled file.) IPDS, PPDS, PCL4, PCL5, PCL5c Sharing Multiple systems may send output to the same printer at the same time. Infoprint Manager for AIX will print according to queue definitions. Only one Host may print to the printer at one time. Sharing can be set up on a time-out basis using PSFCFG. Data Stream Conversions *SCS to “flat ASCII” or PCL is done using HPT on the AS/400 system. This must be explicitly defined in the Remote Output Queue or the SNDTCPSPLF command. All other conversions are done on Infoprint Manager for AIX. All file types are automatically converted to IPDS by PSF/400 before being sent to the Infoprint Manager server. 372 IBM AS/400 Printing V You may want to consider PSF Direct if you do not need dynamic switching between hosts. For example, you print AS/400 “batch” jobs at night using PSF Direct, and during the day, the printer is used by other users. PSF Direct allows you to send *SCS spooled files without conversion to ASCII, and *LINE or *AFPDSLINE, which would not work at all over TCP/IP. There is currently no single document that offers specific setup instructions for using PSF Direct for AIX with an AS/400 system. The IBM Infoprint Manager for AIX PSF Direct Network Configuration Guide for System/370, S544-5486 has information on configuration SNA and the Host Receiver on the AIX system. For the AS/400 system, refer to the configuration samples for SNA printing to PSF/2 in the IBM AS/400 Printing III, GG24-4028. 
Additional information on the AS/400 configuration for PSF Direct can also be found in the IBM Infoprint Manager for Windows NT. The PSF Direct: AS/400 Configuration manual written for Infoprint Manager for Windows NT. This manual can be found online at: http://www.printers.ibm.com/R5Psc.nsf/web/ntpsfd H.2 AS/400 spooled file data streams The following sections describe the different data streams that can be created as AS/400 spooled files. They also explain how they can be sent to and printed on an Infoprint Manager for AIX server. H.2.1 *SCS The default data stream on the AS/400 system is known as SNA Character Stream (SCS). This is an EBCDIC data stream with a minimum of control characters for setting LPI and CPI, for example. This is the data stream generated by system applications such as screen prints, compile listings, job logs, or queries. Many packages from AS/400 software vendors generate SCS. Infoprint Manager for AIX does not support processing SCS spooled files. You would have to perform one of the following actions to handle them: PSF/400 required PSF/400 is not required if all AFP printing is done at the server. Yes Queue Management Infoprint Manager for AIX panels or Java GUIs. Done using AS/400 commands. Message handling (for example, a paper jam) Infoprint Manager for AIX panels or Java GUIs; can interface with Network Printer Manager tool for supported printers Messages are sent to AS/400 Systems Operator Communication protocol between AS/400 and Infoprint Manager for AIX. TCP/IP SNA LU6.2 (printer may be connected to Infoprint Manager for AIX using TCP/IP, Channel, or parallel) Function TCP/IP printing to Infoprint Manager for AIX PSF Direct Appendix H. AS/400 to AIX printing 373 • Convert the application to generate *AFPDS. • Convert the SCS spooled files to “flat ASCII” and then apply a form definition and page definition to format the data. • Convert the SCS spooled file to PCL. • Use PSF Direct. H.2.1.1 Converting to *AFPDS If you have access to the original application or the printer file for the application, you can change or override the printer files to generate *AFPDS spooled files, which are supported on Infoprint Manager for AIX. Another option is Advanced Print Utility (APU), a part of PrintSuite for AS/400. APU is designed to re-engineer simple SCS output into sophisticated fully graphical AFP pages. It could be used to convert AS/400 *SCS spooled files to *AFPDS without needing to change the original application. H.2.1.2 Converting to ‘flat ASCII’ and add form and page definitions You can use host print transform with a default Workstation Customization object to send *SCS files as “flat ASCII” to Infoprint Manager for AIX. The instructions on how to create the *WSCST are explained in H.4.1.1, “WSCST for ‘flat ASCII’” on page 378. The EBCDIC characters are converted to ASCII, and all control codes are removed except Carriage Return, Line Feed, and New Page. This works best if the applications were generated using Program Defined Printer Files. Externally Defined Printer Files work, but you will lose any controls such as LPI or CPI changes. To print this file correctly on Infoprint Manager for AIX, the data will have to be matched up with the appropriate form definition and page definition. This can be done using Default Documents on the Infoprint Manager for AIX side, or using the Destination Options of the SNDTCPSPLF Command or Remote Output Queue on the AS/400 system. These alternatives are described in greater detail in the following section. 
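For the first alternative above (converting the application to generate *AFPDS, H.2.1.1), a change to the application printer file is often all that is needed. This is a minimal sketch; the printer file APLIB/INVPRT is a placeholder for your own object:

CHGPRTF FILE(APLIB/INVPRT) DEVTYPE(*AFPDS)
/* or, to test without making a permanent change: */
OVRPRTF FILE(INVPRT) DEVTYPE(*AFPDS)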
H.2.1.3 Converting to PCL In your Remote Output Queue or SNDTCPSPLF command, you can indicate that PCL is to be generated by specifying a Manufacturer Type and Model, such as *IBM4332. This is probably the easiest from a programming point of view, and is most appropriate if the target printer is a PCL printer. If that is the case, you may even choose to print these file to the PCL printer without any additional conversions on Infoprint Manager for AIX. Along with the usual restrictions of host print transform there are other points to consider: • If the application references printer resident fonts, a font mapping is done, which may or may not match your original document. • If the spooled file references the front or back overlay, they will not be included. One overlay per document can be added back in using the Destination Options. See H.4.4.2, “Overlays with the SCS file” on page 381. • Some users are not satisfied with the results of PAGRTT(*COR) or (*AUTO) when using host print transform, because it defaults to 15 cpi instead of 13 cpi. • If the target printer is ultimately an IPDS printer, this method means you will be translating the spooled file twice, with more chances of fidelity being lost. 374 IBM AS/400 Printing V • User exit programming may be required on Infoprint Manager for AIX to support multiple drawer selections in PCL. • Finally, the PCL data stream generated is likely to take much more bandwidth than the corresponding AFPDS or “flat ASCII” file generated using the other two methods listed above. H.2.1.4 PSF Direct *SCS spooled files can be sent to an Infoprint Manager for AIX printer using PSF Direct. The spooled files are translated to IPDS by PSF/400. H.2.2 OV/400 and Final Form Text Extensions to the SCS data stream, called Final Form Text Document Content Architecture, are used in generating the output of Office Vision/400 (OV/400). There are more controls supported, such as for font selection, line justification, and the ability to include IOCA images. These files cannot be sent “as is” to Infoprint Manager for AIX over TCP/IP. If the file is converted to “flat ASCII”, all formatting controls will be lost. Unless they were extremely predictable, the document cannot likely be recreated using page and form definitions. One option is to convert OV/400-generated spooled files to PCL. The same restrictions described under *SCS apply. OV/400 documents can be printed on Infoprint Manager for AIX using PSF Direct. Note: Support for OV/400 will end in May 2001. H.2.3 *AFPDS AFPDS can be generated in a number of ways, including: • The printer file used by a high level program or system application can be created (or changed or overridden) to use DEVTYPE(*AFPDS). • APU can be used to convert existing SCS spooled files to AFPDS. • AFPU/400 (Advanced Function Printing Utilities/400) has a component called Print Format Utility that can generate AFPDS spooled files. • ERP applications, such as J. D. Edwards OneWorld, can create AFPDS directly from the line-of-business application programs. • Third-party applications, such as Doc/1 and Custom Statement Formatter, create AFPDS directly. • PostScript and image print files can be transformed by Image Print Transform (a component of host print transform) into AFPDS. • ImagePlus and Facsimile Support/400 and other image products produce MODCA-P, which is equivalent to AFPDS. For the most part, AFPDS spooled files can be sent to Infoprint Manager for AIX over TCP/IP for printing. 
Use TRANSFORM(*NO) in the Remote Output Queue or the SNDTCPSPLF command. AFP resources will need to be moved to the server and placed in appropriate directories. There is one very important exception. Many printer files take advantage of Computer Output Reduction (COR) on the AS/400 system, either explicitly with Appendix H. AS/400 to AIX printing 375 PAGRTT(*COR) or implicitly with PAGRTT(*AUTO). This includes most system supplied printer files as well as the output generated by many user or vendor programs. The idea is to take output that was normally formatted for the large paper supported on line printers and reduce and rotate it to fit on the smaller paper used by cut-sheet laser printers. Neither *COR nor *AUTO is supported by Infoprint Manager for AIX. If you simply take your SCS printer file and change it to create AFPDS, you will not see the same results when printing through Infoprint Manager for AIX as printing on the AS/400 system. To compensate for this, you would have to explicitly specify in the printer file PAGRTT(90 or 270), FRONT & BACKMGN (.5 .5), FNTCHRSET (a 13 cpi font such as C0D0GT13 or C0620090), and LPI (8 or 9) to have similar results. You can print all *AFPDS spooled files using PSF Direct. PAGRTT(*AUTO) and (*COR) will be supported. External resources will be managed from the AS/400 system and do not need to be manually transferred to Infoprint Manager for AIX server. H.2.4 *IPDS A version of IPDS that is specific to some of the older twinax IPDS printers, such as 3812 and 4224 may be generated on the AS/400 system, and printed without using PSF/400 to some printers. It is not a full implementation of that data stream. Some of the features supported in this data stream are barcodes, printer resident fonts, and embedded IOCA images. Overlays, page segments, host fonts, and other AFP resources are not supported. This subset of IPDS data stream is not supported on Infoprint Manager for AIX. You cannot send these files to the server that is using TCP/IP. The applications would have to be changed to generate *AFPDS. You can use PSF Direct to send *IPDS spooled file to the printer via Infoprint Manager for AIX since they are converted to full IPDS by PSF/400. H.2.5 *LINE or *AFPDSLINE PSF/400 has supported *LINE and *AFPDSLINE (or Mixed) data streams for quite some time. Only recently, it could be generated by standard programming techniques using printer files. Form definitions and page definitions are used to format these types of files. Although Infoprint Manager for AIX also supports Line and Mixed data streams, you cannot send the AS/400 files to Infoprint Manager for AIX using TCP/IP, since the AS/400 system adds some control characters between records, and these are not recognized by Infoprint Manager for AIX. You can use PSF Direct to send *LINE or *AFPDSLINE to printers via Infoprint Manager for AIX as they are converted to *IPDS by PSF/400. H.2.6 *USERASCII OS/400 does not explicitly generate spooled files that contain ASCII data streams. However, user or vendor applications may generate spooled files that contain ASCII. The AS/400 system does no checking on the validity of the content of those files. Some of the third-party packages use this capability to generate PCL or PostScript. Client Access/400 allows you to generate ASCII data streams on a PC client using an ASCII driver. This output can be placed on the AS/400 Output Queue transparently. 
376 IBM AS/400 Printing V Spooled files that contain *USERASCII may be sent over TCP/IP to Infoprint Manager for AIX. Use TRANSFORM(*NO) when sending these files using TCP/IP. PSF Direct cannot be used to send *USERASCII files to Infoprint Manager for AIX. H.3 Automating the process Depending on the complexity and variety of the applications, there are a few different ways to automate the process of sending the spooled files over TCP/IP and selecting the correct transformation options and processing resources. H.3.1 Default Document If all your spooled files use a very limited number of printing characteristics such as data stream type, form and page definitions, etc., you can set up a Logical Destination and Default Document on Infoprint Manager for AIX. In the default document, you name the AFP resources, and you set up the Logical Destination to use that Default Document. On the AS/400 side, you would direct those files needing those resources to that Logical Destination. If you are using AS/400 Remote Output queues, you would need one queue for each Logical Destination. Assume most of your output from your AS/400 system consists of system generated SCS spooled files that are 132 columns by 66 lines. These are going to be converted to “flat ASCII”, and a form and page definition will be used to format the page. The chain of definitions might look something like the AS/400 Remote Output Queue shown in Figure 271. Figure 271. AS/400 Remote Outut Queue On Infoprint Manager for AIX, you would have a Logical Printer that references a Default Document: Logical-Printer-Name = STD132-l Default-Document = STD132-dd The Default Document that looks similar to the example in Figure 272 would be created to define the formatting options. CRTOUTQ OUTQ(STD132) RMTSYS(INFOPRINT) RMTPRTQ('STD132-l') CNNTYPE(*IP) DESTTYPE(*OTHER) MFRTYPMDL(*WSCST) WSCST(FLATASCII) TEXT('Remote outq for logical destination STD132-l') Appendix H. AS/400 to AIX printing 377 Figure 272. Default document example H.3.2 Destination options in the remote output queue Another approach for the one or few formatting combinations is to hard code the appropriate parameters in the Destination Options (DESTOPT) of a Remote Output queue on the AS/400 system. For the same example, you could use the parameters shown in Figure 273. Figure 273. Destination Options (DESTOPT) of a Remote Output Queue on the AS/400 system In the above two methods, you would need one Infoprint Manager for AIX Logical Destination and one AS/400 Remote Output queue for each different application being sent to each printer. For example, if you have two printers and three applications, you would have to set up six AS/400 Remote Output Queues and six Logical Destinations on Infoprint Manager for AIX. For more information, see H.4.4, “Destination Options” on page 381. H.3.3 Output queue monitor The final method is to use build an output queue monitor application that watches for files arriving on AS/400 output queues, and then builds the Destination Option string and sets other SNDTCPSPLF parameters on the fly. The parameter, DTAQ, in the create or change output queue command, allows you to name a data queue. Any time a spooled file is placed in that output queue in a RDY state, or its state changes to RDY, a record is written to the Data Queue with information about that file. A monitor program is set up to receive the data queue records, and takes appropriate action for the spooled file it references. 
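A minimal sketch of that arrangement follows. The data queue name SPLFMON, its library APUDATA, and the monitored queue STD132 are placeholders; the 128-byte entry length and the five-parameter QRCVDTAQ call are the standard ones, but verify them, and the entry layout, against the Printer Device Programming and System API Reference manuals for your release.

/* One-time setup: create the data queue and attach it to the queue */
CRTDTAQ DTAQ(APUDATA/SPLFMON) MAXLEN(128)
CHGOUTQ OUTQ(APUDATA/STD132) DTAQ(APUDATA/SPLFMON)

/* Skeleton of the monitor CL program */
PGM
DCL        VAR(&DTAQ)    TYPE(*CHAR) LEN(10) VALUE('SPLFMON')
DCL        VAR(&DTAQLIB) TYPE(*CHAR) LEN(10) VALUE('APUDATA')
DCL        VAR(&LEN)     TYPE(*DEC)  LEN(5 0)
DCL        VAR(&WAIT)    TYPE(*DEC)  LEN(5 0) VALUE(-1)  /* wait forever */
DCL        VAR(&ENTRY)   TYPE(*CHAR) LEN(128)
LOOP:
CALL       PGM(QRCVDTAQ) PARM(&DTAQ &DTAQLIB &LEN &ENTRY &WAIT)
/* Parse &ENTRY for the qualified job name and spooled file          */
/* name/number (layout documented in Printer Device Programming),    */
/* call QUSRSPLA for further attributes, then build the DESTOPT      */
/* string and issue SNDTCPSPLF as described in H.4.4.                */
GOTO       CMDLBL(LOOP)
ENDPGM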
Depending on the situation, you may need to use a combination of the following elements in your monitor application: Default-document-name = STD132-dd Document-format = line-data Resource-context-font = /usr/lpp/afpfonts Resource-context-form-definition = /usr/lpp/psf/fontlib Resource-context-overlay = /usr/lpp/psf/fontlib Resource-context-page-definition = /usr/lpp/psf/fontlib Form-definition = F1STD132 Convert-to-ebcdic = true page-definition = P1STD132 Carriage-control-type = ansi-ascii Input-exit = /usr/lpp/psf/bin/asciinpe CRTOUTQ OUTQ(STD132) RMTSYS(INFOPRINT) RMTPRTQ('STD132-l') CNNTYPE(*IP) DESTTYPE(*OTHER) MFRTYPMDL(*WSCST) WSCST(FLATASCII) DESTOPT('-of=F1STD132 -odatat=line -oparmdd=/u/afpres/parmstd132’) TEXT('Remote outq for logical destination STD132-l') 378 IBM AS/400 Printing V • Lookup tables or files. These can be used to match up the name of the original AS/400 output queue to the target Infoprint Manager for AIX logical destination name, or match up the AS/400 spooled file name or other attribute with a Destination Options string. • Calls to system API QUSRSPLA to Retrieve Spooled File Attributes. The information retrieved includes information about the spooled file, such as data stream type, page size, overlay name. For more information, see AS/400 System API Reference, SC41-3801 or SC41-5801. A combination of CL and RPG (or other language) may be needed. Along with the monitor program, a robust system may need some house keeping functions such as error checking and table maintenance. If there is a problem and the monitor needs to be ended, spooled files may have to be held and released in order to put a record back in the data queue. H.4 Special considerations The following sections cover several special considerations that you may encounter in your specific implementation of AS/400 to AIX printing. H.4.1 Processing line AS/400 SCS files as ‘flat ASCII’ There may be times when the you choose to convert the existing AS/400 SCS spooled files to “flat ASCII” and then format them with form and page definitions when they arrive at the Infoprint Manager for AIX server. “Flat ASCII” refers to a simple ASCII file that contains only text and simple line and page controls. The basic steps are: 1. Create a WSCST that converts the spooled file to “flat ASCII”. 2. Create form and page definitions on Infoprint Manager for AIX. 3. Create a “parmdd” file with parameters for the Infoprint Manager for AIX line2afp program. The line2afp program processes the line data against the form and page definition, producing a fully resolved AFPDS file. Line2afp is an alias for ACIF, AFP Conversion, and Indexing Facility. 4. Send the spooled file from the AS/400 system to Infoprint Manager for AIX using the SNDTCPSPLF command, or use a Remote Output Queue. You must specify: a. TRANSFORM(*YES) b. MFRTRPMDL(*WSCST) c. WSCST(FLATASCII) 5. Specify Destination Options using the DESTOPT parameter as required, or use a Default Document on Infoprint Manager for AIX. H.4.1.1 WSCST for ‘flat ASCII’ Here is an example of the source for a Workstation Customization Object used to convert simple SCS spooled files to ASCII. As you can see, only a few of the original SCS controls are converted. Any other controls are dropped. Contrast this to the sample WSCST for IBM4039HP shown in Chapter 6 of IBM AS/400 Printing IV, GG24-4389. Appendix H. AS/400 to AIX printing 379 :WSCST DEVCLASS=TRANSFORM. :TRNSFRMTBL. :SPACE DATA ='20'X. :FORMFEED DATA ='0C'X. :LINEFEED DATA ='0A'X. :LPI LPI = 8 DATA ='0D'X. :EWSCST. 
The tags for SPACE, FORM-FEED, and LINEFEED are fairly obvious, converting those to the required ASCII equivalents. The LPI tag was inserted to resolve a problem we had at one account that was printing at 8 LPI and had more than 66 lines on the page. HPT was inserting a new form feed after 66 lines by default. This tag eliminated that problem, and has no effect on other spooled files. To create the WSCST, type the source into a Source Physical File member. The Type field for the member should be blank or *NONE. Use the CRTWSCST command to create the object. Here is an example of the command: CRTWSCST WSCST(mylib/FLATASCII) SRCMBR(FLATASCII) TEXT('Convert SCS to Flat ASCII') SRCFILE(mylib/mysrc) This WSCST can now be used in the SNDTCPSPLF command or in a definition for a Remote Output Queue. H.4.2 Sample page and form definition for STD132 The most common of the AS/400 spooled files has a record length of 132 and a page length of 66. Figure 274 on page 380 shows a sample of a form definition and page definition source used to format these files once they arrive on Infoprint Manager for AIX. This assumes they have been converted to ASCII using the above FLATASCII Workstation Customization Object. 380 IBM AS/400 Printing V Figure 274. Sample form and page definition source The source may be compiled using PPFA on either the AS/400 system or on the Infoprint Manager for AIX system. When the AS/400 WSCST converts an SCS spooled file to ASCII, it inserts a form feed at the end of every page, including the last page. The CONDITION TEST in the page definition prevents a blank page from being generated. H.4.3 Parmdd file The parmdd file may be used to set some of the parameters used by the line2afp program, which converts the “flat ASCII” data to AFPDS. Using a parmdd file is optional. You could specify these parameters in the Destination Options. There is a limit of 132 characters for DESTOPT, so we chose to use a parmdd file. Here is an example of a parmdd file: cc=yes cctype=z fdeflib=/u/afp/resources formdef=f1std132 pdeflib=/u/afp/resources pagedef=p1std132 inpexit=/usr/lpp/psf/bin/asciinpe The Parameter Descriptions are explained in the following list: • Cc=yes or no: Specifies whether the input file has carriage-control characters. – yes: The file contains carriage-control characters. “yes” is the default. – no: The file does not contain carriage-control characters. SETUNITS 1 IN 1 IN LINESP 8.8 LPI; FORMDEF STD132 OFFSET .25 .5 REPLACE YES; PAGEDEF STD132 REPLACE YES WIDTH 10 IN HEIGHT 7.5 IN DIRECTION DOWN; FONT CR13 CS 420090 CP V10500; PRINTLINE CHANNEL 1 REPEAT 66 FONT CR13 POSITION MARGIN TOP; ENDSUBPAGE; PRINTLINE CHANNEL 1 REPEAT 1 FONT CR13 POSITION 1 MM 1 MM; PRINTLINE REPEAT 1 FONT CR13 POSITION 1 MM NEXT; CONDITION TEST START 1 LENGTH 1 WHEN GE X'00' BEFORE SUBPAGE CURRENT CURRENT; Appendix H. AS/400 to AIX printing 381 Carriage-control characters, if present, are located in the first byte (column) of each line in a document. They are used to control how the line will be formatted (single space, double space, triple space, and so forth). In addition, other carriage-controls can be used to position the line anywhere on the page. If there are no carriage-controls, single spacing is assumed. • inpexit=/usr/lpp/psf/bin/asciinpe: Converts unformatted ASCII data into a record format that contains an American National Standards Institute (ANSI) carriage control character in byte 0 of every record, and then converts the ASCII stream data to EBCDIC stream data. 
• cctype=z: The file contains ANSI carriage-control characters that are encoded in ASCII. “z” is the default.
For more information on other parameters used in the parmdd file, refer to the section on line2afp in the IBM Infoprint Manager for AIX Reference, S544-5475.
H.4.4 Destination Options
Destination Options provide a means to specify how a file being sent from the AS/400 system to a print server is to be processed. For a complete description of all available options, see the IBM Infoprint Manager for AIX Reference. The maximum length of the field is 132 characters.
H.4.4.1 Basic SCS spooled file
The DESTOPT parameter to match up an SCS printer file with the STD132 form definition and the appropriate parmdd file would look something like this example:
-of=f1std132 -odatat=line -oparmdd=/u/afpres/parmstd132
See the previous section for information on the parmdd file. The form definition name ends up being specified twice: once within the parmdd file, where it is used by the line2afp program, and again in the Destination Options for use at print time. These destination options could be hard coded into a Remote Output Queue. If you are using the Monitor program described above, you could create a lookup table that selects different Destination Options based on the spooled file name or other parameters.
H.4.4.2 Overlays with the SCS file
If you have an SCS spooled file that references an overlay, the overlay will not be sent (nor any reference to it) if the file is converted to ASCII using host print transform. An overlay reference can be added using the Destination Options. If the overlay name is always the same for given spooled files, you can use a lookup table. If not, for even greater flexibility, use the QUSRSPLA API to retrieve the spooled file attributes into a program. The overlay name is added to the Destination Options using the format:
-ooverlay=myovl
In the following example, we check to see whether &OVL contains a value and, if so, it is concatenated to the DESTOPT field, which will be used subsequently in the SNDTCPSPLF command:
IF COND(&OVL *NE '*NONE') THEN(CHGVAR VAR(&DESTOPT) VALUE(&XOPT *BCAT '-ooverlay=' *CAT &OVL))
XOPT contains the base options as in H.4.4.1, “Basic SCS spooled file” on page 381. The overlay specified will print on every page of the document. If you have a different overlay specified as a BACKOVL in your AS/400 spooled file, you may need to build a page definition to handle this. Be aware that this may not be practical for OV/400 documents.
H.4.4.3 User name
If you are using the SNDTCPSPLF command, the user name that is printed on the Infoprint Manager for AIX cover sheet will be the name of the person issuing the SNDTCPSPLF command, not the user who created the spooled file on the AS/400 system. If you are using a monitor program, it will be the person who started the monitor job. To help the users sort their output, use the -ouserid option in DESTOPT to specify the name, which will show up on the Infoprint Manager for AIX queues and on any cover sheets printed. The monitor program can obtain this value from the information it picks up from the data queue. Here is an example of adding the user name to the destination options (XOPT contains the base options as in H.4.4.1, “Basic SCS spooled file” on page 381):
CHGVAR VAR(&DESTOPT) VALUE(&XOPT *BCAT '-ouserid=' *CAT &USER)
This is not a problem if you are using remote output queues. The name of the owner of the spooled file will print on the cover sheet.
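Putting the pieces of H.4.4 together, a monitor program might assemble the Destination Options and issue the send roughly as in the following sketch. This is illustrative only: &OVL, &USER, and the spooled file identifiers (&FILE, &SPLNBR, &JOBNAME, &JOBUSER, &JOBNBR) are assumed to have been declared and filled in from the data queue entry and QUSRSPLA, and the host name, logical destination, and FLATASCII WSCST are the ones used earlier in this appendix.

DCL        VAR(&DESTOPT) TYPE(*CHAR) LEN(132)
DCL        VAR(&XOPT)    TYPE(*CHAR) LEN(60) +
             VALUE('-of=f1std132 -odatat=line -oparmdd=/u/afpres/parmstd132')
CHGVAR     VAR(&DESTOPT) VALUE(&XOPT)
IF         COND(&OVL *NE '*NONE') THEN(CHGVAR VAR(&DESTOPT) +
             VALUE(&DESTOPT *BCAT '-ooverlay=' *CAT &OVL))
CHGVAR     VAR(&DESTOPT) VALUE(&DESTOPT *BCAT '-ouserid=' *CAT &USER)
SNDTCPSPLF RMTSYS('INFOPRINT') PRTQ('STD132-l') FILE(&FILE) +
             JOB(&JOBNBR/&JOBUSER/&JOBNAME) SPLNBR(&SPLNBR) +
             DESTTYP(*OTHER) TRANSFORM(*YES) MFRTYPMDL(*WSCST) +
             WSCST(QUSRSYS/FLATASCII) DESTOPT(&DESTOPT)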
H.4.5 Output from the AS/400 query When a user generates a query report, there is an option to select the form size to print. When the file is printed on an AS/400 attached laser printer, the output could look quite different, depending on the value of width selected. • If the width is less than 85, the data is printed in portrait format at 10 characters per inch, 6 lines per inch. • If the width is greater than 85, but less than or equal to 132, the spooled file is generated at 10 cpi, but Computer Output Reduction is invoked, and the result is landscape print at 13.3 cpi, and approximately 8.5 LPI. • If the width is greater than 132, the spooled file is generated at 15 cpi. Computer Output Reduction is evoked, and the output is converted to 20 cpi. If the customer desires that the output have similar characteristics when printed via Infoprint Manager for AIX, it takes a little more work than needed for other system generated files. One cannot rely on a simple lookup by spooled file name. The attributes for CPI and WIDTH need to be retrieved using QUSRSPLA to determine the appropriate combination of form and page definitions to use. H.4.6 Transferring resources AS/400 overlays and page segments must be converted to a physical file before they can be transferred to AIX. Use the Convert Overlay to Physical File Member (CVTOVLPFM) and the Convert Page Segment to Physical File Member (CVTPAGSPFM) commands, which are included in AFPU/400 (Figure 275). Appendix H. AS/400 to AIX printing 383 Figure 275. Convert Overlay to PFM Make sure you select option 2 (Continuous) for the Format of data. The Convert Page Segment to PFM has a similar structure. Transfer the resource to the Infoprint Manager for AIX using FTP. Make sure the file is sent in binary format. Place the resource in a directory that will be found by Infoprint Manager for AIX. See Infoprint Manager for AIX Reference Manual, S544-5475, for information on the search order. H.4.7 Large spooled files In a couple of cases, customers have experienced problems sending very large spooled files from the AS/400 system to Infoprint Manager for AIX. Smaller spooled files work fine, but when they send large files, they receive error messages TCP3405 and TCP3701, Send Request Failed. No messages are issued on Infoprint Manager for AIX. It was ultimately determined to be a problem with the /var file space on the server side. Have the AIX System Administrator increase the size of this file space. H.5 Case studies The following case studies are based on actual Infoprint Manager for AIX customer situations. In some cases, the situations were simplified for emphasis of certain points. H.5.1 One printer, all AFPDS This customer had been a faithful user of AFP on the AS/400 system for some time, and most applications had already been formatted with DEVTYPE(*AFPDS). They were adding an Infoprint Manager for AIX server so they could share their large printer with other users. It was a fairly straightforward task to create one remote output queue to point to the Logical Destination used for the printer. The only destination option used was '-odatat=afpds'. Overlays and page segments had to migrate to Infoprint Manager for AIX. Convert Overlay to PFM Overlay . . . . . . . . . . : myovl Library . . . . . . . . . : mylib Type choices, press Enter. Format of data . . . . . . . 2 1=Fixed, 2=Continuous To file . . . . . . . . . . Myovlpf Name, *VM, *MVS Library . . . . . . . . . *CURLIB Name, *CURLIB To member . . . . . . . . . *OVL Name, *OVL Text 'description' . 
. . . . *OVLTXT Replace . . . . . . . . . . N Y=Yes, N=No Create file . . . . . . . . Y Y=Yes, N=No Text 'description' . . . . . Physical file for my overlay 384 IBM AS/400 Printing V H.5.2 One printer, four document types This example was not actually from an AS/400 host, but the situation can apply to the AS/400 system. This customer was migrating from a non-IBM printer to an IBM AFP Printer. They only had three distinct applications that required special formatting. All the rest was equivalent to STD132 earlier in this chapter. With the previously installed printer, all the formatting was done at the printer, so it was decided to maintain that philosophy by setting up four remote output queues on the host to point to separate Logical Destinations on Infoprint Manager for AIX, one per application and one for STD132. The rest of the resource selection was to be done based on Default Documents that were associated with the Logical Destinations. H.5.3 70 printers, 12 applications, SCS spooled files The customer had a third-party application package that generated SCS spooled files. They were migrating from impact printers using preprinted forms and had about 11 applications with specific formatting requirements for overlays and font changes, along with unformatted system printing. They could not modify the source. They chose to use form and page definitions to do their formatting. All the AFP resources were created on Infoprint Manager for AIX. Remote output queues with hard coded Destination Options to name resources would not be practical because 1400 would be required for all the valid combinations. A monitor program was written. On the AS/400 system, one (local) output queue was set up for every destination on Infoprint Manager for AIX. Each output queue pointed to one common data queue. The monitor program read the entries from the data queue and used lookup tables to match the name of the application to a Destination Option string, and the name of the AS/400 Output Queue to the name of an Infoprint Manager for AIX Logical Destination. This program also modified the user name as described above. H.5.4 Multiple printers, many data streams This customer was installing Infoprint Manager for AIX to share printing between AS/400, MVS, and LAN users on a wide variety of printers over four buildings, ranging from PCL printers to the IBM Infoprint 4000. On the AS/400 system, the applications included: • Basic system printing STD132 • Query reports of different sizes • SCS spooled files that had overlays • OV/400 documents, some of which had overlays • AFPDS spooled files. We started with the basic monitor, similar to the one used in the previous example. However, much more logic had to be applied to build the appropriate Destination Options and set the appropriate Transformation parameters in the SNDTCPSPLF command. Lookup tables were used to gather general information about each spooled file type, and to match up the target Logical Destination. The QUSRSPLA API was used to gather information such as data stream, if it was generated using Final Form Text Document Content Architecture, query size, Appendix H. AS/400 to AIX printing 385 overlay names, and user name. Some files were sent with TRANSFORM(*NO), some with MFRTYPMDL(*HP5SI), and others used the FLATASCII custom *WSCST. H.6 Sending AS/400 spooled files to OnDemand for UNIX Automating the process of moving AS/400 spooled files to an AIX platform for loading into OnDemand for UNIX can be accomplished. 
The degree of automation that you want, as well as the volumes that will be moved, will affect the effort you need to expend to get the job done. Following is an outline of the tasks required to automatically move financial statements from an AS/400 system to OnDemand for UNIX. IBM Global Services recently completed this work at a number of customer situations, with very satisfactory results. H.6.1 AS/400 side tasks You may also want to perform these tasks for the AS/400 system: • Write a program to monitor for spooled files entering an output queue (see above notes). • Using the spooled files APIs, open the spooled file object, extract the report data stream, and write it to a stream file in the IFS. • Write another program that uses the automated FTP functions to send the stream file in the IFS to the AIX server. H.6.2 AIX side tasks For AIX, you may want to perform these tasks: • Write a program which monitors the arrival of the stream file in the designated directory on the server. • Have the OnDemand load process execute to load the stream file received into the proper Application Group in OnDemand. While the high level tasks are quite straightforward, the details of the implementation are where you may become overwhelmed. For example, how do you differentiate between different report types, and how do you manage the growth of the addition of new report types over time? Or, how do you handle different data streams that may be created, SCS or AFPDS? In addition, the degree of automation required will determine how much effort you put into table definitions, error monitoring, reporting, and so on. These are all components of the implementation that add complexity and effort to the overall project. H.7 AS/400 printing to an Infoprint Manager for Windows NT or 2000 server Some of the techniques described above for printing to an Infoprint Manager for AIX server also apply to Infoprint Manager for Windows NT or 2000. The remainder of this section will refer to Windows NT, but it also applies to Infoprint Manager for Windows 2000. 386 IBM AS/400 Printing V Spooled files can be sent from the AS/400 system to the Infoprint Manager for Windows NT server via TCP/IP. This can be done using a Remote Output Queue or the SNDTCPSPLF command, as described in H.1.1.1, “Remote Output Queue” on page 367, and H.1.1.2, “SNDTCPSPLF command (LPR)” on page 369. PSF Direct is supported from the AS/400 system to Infoprint Manager for Windows NT. There is configuration documentation available on the IBM Printing Systems Division Web site at: http://www.printers.ibm.com/R5Psc.nsf/web/ntpsfd To use PSF Direct, you need the IBM SecureWay Communications Server product to communicate between the AS/400 system and Windows NT. The considerations presented in H.2, “AS/400 spooled file data streams” on page 372, regarding the different AS/400 spooled file data streams can be applied to sending those same types of files to Infoprint Manager for Windows NT. Using an Output Queue Monitor in conjunction with Infoprint Manager for Windows NT may still have a use if you want to automate the sending of different spooled file types to different Infoprint Manager for Windows NT logical destinations. H.7.1 Hypothetical case studies These scenarios have been verified to work. They are included here to illustrate the possible co-existence between AS/400 and Windows NT for printing. 
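As a point of reference for the scenarios that follow, a remote output queue that sends AS/400 spooled files to an Infoprint Manager for Windows NT logical destination might be defined along the lines of the following sketch. The queue name, server address, and logical destination name are assumptions for illustration only, and the transform settings depend on the data stream being sent, as discussed in H.2:
CRTOUTQ OUTQ(MYLIB/NTPRT01) RMTSYS(*INTNETADR) INTNETADR('10.1.1.25') RMTPRTQ('ld-afpds') CNNTYPE(*IP) DESTTYPE(*OTHER) TRANSFORM(*NO) AUTOSTRWTR(1)
The case studies below show how queues of this kind are combined for particular printers and applications.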
H.7.1.1 One channel attached printer, two hosts A customer currently has an IBM 3900-001 printer attached via parallel channel to a PSF/2 server, using Distributed Print Function (DPF). There are two AS/400 hosts sending data to the printer. The customer wants to move to a Windows NT solution. Infoprint Manager for Windows NT does not support DPF. Consequently, the customer will use PSF Direct and set up the PSF Configuration Objects on each AS/400 system so that the printer session will time out if there are no spooled files ready. H.7.1.2 Two printers, three applications A customer wants to use Infoprint Manager for Windows NT to share their two medium-speed printers with the AS/400 system and other LAN users. The AS/400 applications consist of invoices and statements that are already in AFPDS format, and other SCS printing generated using the default system printer files. They plan on using the STD132 form and page definitions as described in H.4.2, “Sample page and form definition for STD132” on page 379. This customer will set up four remote output queues on the AS/400 system. Two of these will be for the AFPDS spooled files that are to print on each of the two printers. These output queues will have TRANSFORM(*NO) specified. The target logical destinations will reference a default document that is set up for printing AFPDS by specifying: document-format=afpds Two other AS/400 output queues will be set up to handle the default system printing. They will be set up with TRANSFORM(*YES) and use the “Flat ASCII” Workstation Customization Object as described in H.4.1.1, “WSCST for ‘flat ASCII’” on page 378. They will point to two logical destinations that use a default Appendix H. AS/400 to AIX printing 387 document for printing that is similar to the one described in H.3.1, “Default Document” on page 376. H.8 Additional references For more information, please refer to the following publications: • AS/400e Printer Device Programming Version 4, SC41-5713 • Infoprint Manager for AIX Reference, S544-5475 • IBM Infoprint Manager for AIX PSF Direct: Network Configuration Guide for System/370, S544-5486 • IBM AS/400 Printing III, GG24-4028 • AS/400e System API Reference Version 4, SC41-5801 • IBM AS/400 Printing IV, GG24-4389 • Windows NT PSF Direct: AS/400 Configuration 388 IBM AS/400 Printing V © Copyright IBM Corp. 2000 389 Appendix I. Infoprint 2000 printing considerations At announcement, the IBM Infoprint 2000 Multifunctional Production System, Models RP1, NP1, and DP1, with its high speed cut sheet printing and duplicating, did not provide a robust AS/400 print solution. The data streams supported initially are PostScript 3, PDF, PCL6, and LCDS/Metacode. The AS/400 system provides direct connection only via a remote outqueue and the use of the host print transform (HPT) functions as described in Chapter 6, “Host print transform” on page 137. The use of an intermediate system, such as IBM Infoprint Manager for AIX and other third-party solutions, also allows AS/400 spool output to be printed on the Infoprint 2000. An IPDS version of the DP1 model will be offered at a later date, supporting the Advanced Function Presentation architecture’s Intelligent Printer Data Stream. Many of the installations of the Infoprint 2000 are for reprographics and network printing applications, and the amount of print from the AS/400 system represents a small percentage of the total print workload. Other installations have been for specific applications that have used customized HPT or intermediate solutions.
The following sections look at the considerations for print files and HPT and the use of an intermediate solution for application formatting. Note: The IPDS version of IBM Infoprint 2000 was announced in September 2000. I.1 Print file considerations and HPT formatting Printing directly from the AS/400 system to the Infoprint 2000 can result in many challenges and require changes in the operations procedures. The AS/400 spool output (Data Type=*SCS and *AFPDS only) must be converted into ASCII. A supplied or custom HPT will be used. The AS/400 HPT objects will create ASCII data streams for PCL, pure ASCII, or image. One or more of these HPTs may be used to provide optimum results. If an HPT is being used today for other ASCII printers that support PCL, then the results should be identical. If twinax attached printers or AFP printers are being used, differences and limitations may apply. The remote print writer (STRRMTWTR) is an automated LPR to the printer queue. Some of the limitations of the HPT and remote writers are: • No Forms mount messages (ignored) • No Page Range printing (unsupported program is available) • Copies are transmitted individually (XAIX parameter in the outqueue) • No multi-up support • SCS and AFPDS data types only supported (*USERASCII is passed through) • DDS functions, such as scale and rotate of page segments, are not supported • Draw commands that print in the ‘no print borders’ will be adjusted into the print area The primary output of AS/400 applications is business oriented, for example, invoices, packing lists, labels (with barcodes), reports, and so on. The unique functions of Infoprint 2000, like the production of booklets and other output formats produced by PC and network applications, are not usually necessary. 390 IBM AS/400 Printing V Therefore, the primary objective will be to produce business output on Infoprint 2000 at rated speed, meeting the business objectives of the organization. One requirement that has been and will remain important is maintaining the integrity of the printed page. Printing on Infoprint 2000, a local network printer, or an AFP Printer should provide similar results. The fonts should be mapped, boxes and lines should appear in the same place, graphic objects should be reproduced accurately, and so on. The differences that exist in the hardware and software technology may not map one to one. Therefore, content integrity is the baseline requirement; the objective is to minimize the differences. The print management functions that can be specified in the Print File using the CHGPRTF, OVRPRTF, and CRTPRTF commands and the function of the native print writer give the application developer control over the printing process. Many of these print file parameters are ignored by the AS/400 HPT processing of the Remote Writer. For example, jobs that have an overlay specified in the print file to merge SCS output with a form will ignore the overlay, printing the data but not the form. Overlays and Page Segments referenced in the DDS (externally described print file) of *AFPDS output will be processed and converted. Customization of the HPT table may be necessary to specify input and output options on the printer. We recommend that you use the latest printer microcode. Understanding the PCL data stream is necessary if special functions are to be implemented. Infoprint 2000 will honor the PCL drawer selections with the Release 3.0 Version 3.15 of the printer microcode.
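To show where those drawer selections originate, the input bin and duplex mode are normally set in the print file before the spooled file is created; the HPT then has to map them to the matching PCL escape sequences. The following override is a minimal sketch only; the printer file name, library, output queue name, and drawer number are assumptions that depend on how the paper is loaded on your printer:
OVRPRTF FILE(QSYSPRT) DRAWER(2) DUPLEX(*YES) OUTQ(MYLIB/IP2000Q)
A job run with this override in effect produces a spooled file whose drawer and duplex attributes the remote writer and the HPT can then act on.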
Control of SCS and IPDS printers have allowed jobs with different characteristics to be sent to the same output queue. The AS/400 writer would assist in managing the workflow with operator messages, workload balancing, and so on. With the Remote Print Writer and HPT, it may be necessary to create many output queues, one for each job characteristic that will be coded in a unique HPT. In many accounts, this will require additional operator instructions or changes to current instructions that will place job print integrity on the operator. One of the restrictions encountered in early installations was the use of simplex and duplex printing on pre-punched paper. The 3130 and 3160s used by the account had edge sensitivity and would rotate the pages for proper printing. This now requires planning of drawers for simplex and duplex, and the input bin changed in the print file. The HPT object used or customized will need to send the correct escape sequence for that drawer. Custom HPT information is provided in Chapter 6 of this manual. Additional HPT information is available on the AS/400 Web site in Rochester. The knowledge base under the category of Print has setup, customization, and PTF information. Information provided includes remote print writer considerations, the page range program, TCP/IP printing, and HPT customization. I.2 Infoprint Manager and other solutions The use of a print solution other than the native AS/400 writers has been an option for years. Many of these solutions can be applied to printing from the AS/400 system to the Infoprint 2000. IBM Infoprint Manager has been used for printing of AFPDS on the printer. All of the necessary AFP resources are loaded into the AIX system as described in Chapter 2 of AS/400 Printing IV, GG24-4389. Overlays, page segments, and fonts are then processed by Infoprint Manager and Appendix I. Infoprint 2000 printing considerations 391 delivered to the printer. Operator control of the printing is moved from the AS/400 system to the Infoprint Manager system. With Infoprint Manager, printing control is robust. Some of the other solutions for formatting spooled files that can be used are applications like Create!Print, Planet Press, and so on. These solutions either interface with the AS/400 spool and create USERASCII output on the AS/400 system or rely on the AS/400 system to provide a trigger on the front of the document that will invoke a PostScript Macro on the printer to format the data. The processing of the SCS spooled files into ASCII with a trigger that will invoke the PostScript application will require either AS/400 application modification and the use of the *WSCSTNONE transform or a custom HPT that will add the trigger to an existing application’s spooled file. If multiple trigger applications are needed, each will require a customized WSCST and its own outqueue. The system supplied host print transform for outputting ASCII is the *WSCSTNONE transform. To create a custom WSCST with a trigger requires the retrieval and modification of a HPT that outputs ASCII. The retrieval of the ASCII HPT and the modification of the HPT are the initial sequence to invoke the PostScript trigger application. The WSCST source that was retrieved (RTVWSCST) from the IBM provided ASCII HPT (*WSCSTNONE) is shown in the example in Figure 276. Figure 276. WSCT source The trigger required by this application was the insertion of a cover page containing the form name on the front of the spooled file. 
To do this, we modify the initial printer sequence sent to the printer by the host print transform in the print writer. The ASCII hex value was determined for the form name, and the hex value '0C' is the form feed or page eject. Line 5 of the WSCST source above was changed from a hex value of '00' to the ASCII value for trigger name, HRFORM1, followed by a form feed of hex '0C'. The value became DATA ='4852464F524D310C'X. This custom HPT was then saved and compiled using the CRTWSCST command and added to the remote outqueue description (WRKOUTQD). Every job that is processed through this outqueue will arrive at Columns . . . : 1 71 Browse AGROSE/QTXTSRC SEU==> ASCII FMT ** ...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 *************** Beginning of data ************************************* 0000.01 :WSCST DEVCLASS=TRANSFORM. 0000.02 0000.03 :TRNSFRMTBL. 0000.04 :INITPRT 0000.05 DATA ='00'X. 0000.06 :SPACE 0000.07 DATA ='20'X. 0000.08 :CARRTN 0000.09 DATA ='0D'X. 0000.10 :FORMFEED 0000.11 DATA ='0C'X. 0000.12 :LINEFEED 0000.13 DATA ='0A'X. 0000.14 :EWSCST. ****************** End of data **************************************** 392 IBM AS/400 Printing V the printer as pure ASCII and have a header page with the word HRFORM1 as its only content. All other printer file values specified, such as cpi, font, simplex, duplex, etc., are ignored. The revised customization object is shown (Figure 277) and is compiled using the CRTWSCST command. Figure 277. Revised customization list I.2.1 Another application solution The controller for the Infoprint 2000 Reprographics System can be modified to provide additional application flexibility. In one installation, a shareware PostScript macro was used to produce two-up on the printer. The technique is similar to the trigger application. The PostScript shareware was installed on the printer controller, and a monitor was invoked to scan the input stream for the processing options supported Since two-up could be either simplex or duplex, additional WSCST tags were used on the HPT. They processed specific print file attribute and imbedded keyword triggers in the beginning of the spooled file sent to the printer. The tags for simplex, duplex, and tumble printing were added to the *WSCSTNONE retrieved object. The AS/400 spooled file could specify simplex, duplex, or duplex tumble for the output. Printing on three hole paper requires that the holes be on different sides in the input drawers for simplex and duplex. Logic was also added to the scan program to submit the print job to use the correct drawer. The following lines were added after line 13 to the unmodified WSCST above: 0000.14 :SMPXPRT 0000.15 DATA ='53494D504C45580A'X. 0000.16 :DUPXPRT 0000.17 DATA ='4455504C45580A'X. 0000.18 :DUPXPRT 0000.19 DATA ='54554D424C450A'X. The result was an ASCII file arriving in Infoprint 2000 with three additional lines of data added to the beginning of the ASCII spooled file, with the value in the third line being the trigger for the two-up application with logic to choose drawers Columns . . . : 1 71 Browse AGROSE/QTXTSRC SEU==> ASCII FMT ** ...+... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 *************** Beginning of data ************************************* 0000.01 :WSCST DEVCLASS=TRANSFORM. 0000.02 0000.03 :TRNSFRMTBL. 0000.04 :INITPRT 0000.05 DATA ='4852464F524D310C'X. 0000.06 :SPACE 0000.07 DATA ='20'X. 0000.08 :CARRTN 0000.09 DATA ='0D'X. 0000.10 :FORMFEED 0000.11 DATA ='0C'X. 0000.12 :LINEFEED 0000.13 DATA ='0A'X. 
0000.14 :EWSCST. ****************** End of data **************************************** Appendix I. Infoprint 2000 printing considerations 393 based on the physical paper orientation needed for printing simplex and duplex two-up. Note that pure ASCII and PCL, should not be mixed within a file. The PCL escape sequences will be treated as data once the printer decides that the data stream is not PCL. Each of these solutions requires a knowledge of the data stream provided by the AS/400 system, the impact of the HPT on document integrity, and the application capability of any intermediate system. Placing shareware into the printer controller also requires UNIX knowledge. Each of these solutions has strengths and weaknesses and may be a custom installation. I.2.2 Operator considerations The current procedures used for operations control may need to be modified, often significantly, to insure print integrity. Print jobs may need to be modified to direct them to the new queue or operator procedures may need to be updated if the operators moved jobs from a job queue to the printer device queue. Any job where print file attributes are ignored, need to be handled as an exception and may require a unique output queue. If the job is moved into the incorrect queue, there is no checking of the data, and it is possible to experience a higher number of print errors and reprints. Additional reasons for multiple PCL queues may be the need to modify rotation to force portrait for applications where there is no source or the print file cannot be modified. Special line spacing is often desired for Computer Output Reduction (COR) to better fit the page or to offset for prepunched paper, functions that could be handled automatically by the PSF/400 writer. Each exception needs to be documented, and training needs to be provided for increased operator training. Page restart integrity now is an operator task with the remote outqueue. For example, if it was necessary to restart a print job on an AS/400 system attached 3130 Printer, the PSF/400 writer would determine the exact page to restart based on the position on the page. A two-up duplex job actually contains four logical pages. If a jam occurred while printing pages 17 through 20, the AS/400 system would resend beginning at page 17. The page restart for the remote outqueue does not consider page position because the multi-up is done by the printer, allowing resending at any page number. Since the AS/400 system assumes at once LPR is complete, the job has printed, the disposition of print jobs may need to be set to SAVE=*YES. This allows the resending of jobs. 394 IBM AS/400 Printing V © Copyright IBM Corp. 2000 395 Appendix J. Printing enhancements in recent OS/400 releases This appendix summarizes the enhancements in OS/400, Print Services Facility/400 (PSF/400), and related printing software in the last four releases—V4R2 through V4R5. J.1 Version 4 Release 5 Version 4 Release 5 includes the following enhancements: • SNMP ASCII Printer Driver OS/400 • SNMP ASCII Printer Driver for IBM Infoprint 21 • Expanded printer speed ranges for PSF/400 • Type Transformer for Windows Another enhancement in the V4R5 time frame, but not part of the release, is AFPDS/IPDS support for OneWorld, an ERP e-business solution from J. D. Edwards. J.1.1 SNMP ASCII printer driver The Simple Network Management Protocol (SNMP) ASCII Printer Driver is a new printer driver for TCP/IP attached printers. 
This printer driver provides the function found in the PJL printer driver but does not require the printer to support PJL commands. With the SNMP printer driver, there are now three ASCII printer drivers: • LPR, or remote output queue • PJL printer driver • SNMP printer driver See 11.2.3, “Configuring LAN-attached ASCII printers using SNMP drivers” on page 246, for more information. J.1.2 SNMP driver for Infoprint 21 A special version of the SNMP printer driver is provided for IBM Infoprint 21. IBM Infoprint 21 is a new generation network printer for the AS/400 system. It is the first IBM AS/400 printer to use the IBM Homerun controller. The Homerun controller is designed with the capabilities of the IBM Advanced Function Common Controller Unit (AFCCU), the standard controller for high-speed printers) but geared for lower-speed network printers. The spooling design incorporated within the Homerun controller greatly enhances print performance in a network environment. However, there can be incompatibilities when using the PJL printer driver. The SNMP ASCII printer driver is the recommended printer driver for Infoprint 21. Support for the SNMP printer driver for Infoprint 21 is also available for V4R3 and V4R4 via a PTF. The SNMP printer driver is only needed when Infoprint 21 is used as a PCL printer. When used as an Intelligence Printer Data Stream (IPDS) printer, then PSF/400 is the printer driver. 396 IBM AS/400 Printing V J.1.3 PSF/400 printer ranges PSF/400 is licensed by printer speed ranges. Each licensed speed range enables an unlimited number of printers (for the licensed AS/400 system) within that range. With V4R5, the printer speed ranges have been expanded as follows: • 1 to 28 pages per minute • 1 to 45 pages per minute • Any speed (1 to unlimited pages per minute) J.1.4 AFP Font Collection bundled with PSF/400 AFP Font Collection, the comprehensive set of AFP fonts for the AS/400 system (and other servers), is now bundled with new orders of PSF/400 (beginning with V4R5). J.1.5 Type Transformer for Windows Type Transformer for Windows, a font conversion and editing platform, became available in June 2000. Type Transform for Windows, a feature of AFP Font Collection (5648-B45), enables conversion of Adobe and TruType fonts to AFP fonts for use with AS/400 print and presentation applications. Type Transformer also includes utilities for editing individual font characters and for editing code pages. For details on Type Transformer for Windows, see 4.11, “Creating AFP fonts with Type Transformer” on page 110. J.1.6 AFP/IPDS support for OneWorld OneWorld, a leading ERP software solution from J. D. Edwards, now has integrated support for AFPDS/IPDS. This support shipped in October 2000 with OneWorld Xe. With this support, any OneWorld application output can be created in either AFP or PDF format. 
J.2 Version 4 Release 4 The following AS/400 program products have been enhanced with this new release: • OS/400 5769-SS1 • Print Services Facility/400 (a feature of OS/400) 5769-SS1 • Advanced Function Printing Utilities for AS/400 5769-AF1 • AFP Font Collection 5648-B45 • Content Manager OnDemand for AS/400 5769-RD1 The new DDS keywords support these features: • Switch between simplex and duplex printing within a spooled file • Force printing on a new sheet of paper anywhere in a spooled file • Direct selected pages of a spooled file to a specific output bin • Tabbed insert pages from a finisher anywhere within an output file • Specify z-fold options for any page within an output file • Include an overlay and specify the orientation (rotation) at which the overlay should be printed Appendix J. Printing enhancements in recent OS/400 releases 397 New printer file functions include: • Print overlays on the back side of pages without any variable data • Specify that output should be corner-stapled, edge-stitched, or saddle-stitched as a printer file option. J.2.1 Simplex/duplex mode switching DDS This DDS keyword allows you to switch back and forth between simplex and duplex mode when printing. This is useful when parts of a job should be simplex and other parts should be duplex. Setting the proper mode can improve job throughput. J.2.2 Force new sheet DDS When printing in duplex mode, the Force new sheet DDS keyword provides control of the sheet in addition to the side. Execution of this keyword forces a new sheet to be selected regardless of whether you are currently on the front side or back side of the sheet in process. J.2.3 Output bin DDS This keyword enables DDS-level (for example, page level) control of output bin. Prior to this support, all pages in a spooled file went to the output bin defined in the printer file. J.2.4 Insert DDS As part of the finishing options added during the V4 releases, the insert DDS keyword enables insertion of a sheet from the inserter (for example, as found on the Infoprint 60 finisher) within the current print job. This provides for inclusion of such booklet inserts as cover sheets, back pages, and tab sections. J.2.5 Z-fold DDS Certain output finishers (for example, the Infoprint 60 finisher) support the z-fold operation. This operation takes an 11 by 17 inch page (for example, spreadsheet) and “z-folds” it down to 8 ½ by 11 inch size. This is handy to include large format pages in a standard size booklet. J.2.6 Overlay rotation DDS This DDS parameter for the OVERLAY keyword provides the capability to change the orientation of overlays on the page. This avoids the need to have the same overlay stored multiple times in different orientations. J.2.7 Constant back overlay in the printer file A new printer file keyword provides the capability to print an overlay on the back side (duplex side) of a sheet without application data. This capability is useful in any application where application data is to be printed on the front side and static data on the back side. An example would be a customer invoice where the back side of the sheet has static terms and condition information that is put there as an overlay. 398 IBM AS/400 Printing V J.2.8 Print finishing Support for stapling options, initially supported in V4R2, is now added directly to the printer file. CORNERSTPL, EDGESTITCH, and SADLSTITCH are the keywords. For example, CORNERSTITCH(*TOPLEFT) causes the print job to staple in the top left corner of the page. 
The function selected must be supported on the specified printer (for example, Infoprint 32, Infoprint 40, Infoprint 60). J.2.9 AS/400 font management AS/400 applications can use both AS/400-resident and printer-resident fonts. The mapping table that manages font selection and substitution is now user-modifiable using the PSF configuration object. This enables you to control font fidelity for your applications across a variety of different printers with greater flexibility and precision. J.2.10 Advanced Function Printing Utilities (AFPU) enhancements AFPU provides a set of supporting functions for advanced output applications, including electronic form design and management of forms and images. Enhancements with V4R4 include new barcode symbol, color support, and improved image handling. J.2.11 Content Manager OnDemand for AS/400 Content Manager OnDemand is a comprehensive archival system for the AS/400 system. It supports the organization, indexing, storage, retrieval, viewing, faxing, printing, and network presentation of AS/400 documents, reports, and other objects. V4R4 implements the OnDemand user interface into Operations Navigator. In addition, OnDemand is now integrated with EDMSuite ContentConnect, which provides Web access across multiple document repositories. Web access is provided by NetConnect. J.3 OS/400 Version 4 Release 3 In OS/400 Version 4 Release 3, Print Services Facility/400 and associated native OS/400 print support (Printer File and DDS) have been enhanced. They provide new application capabilities and take advantage of new printers and printer attachments. These enhancements include: • Integration of AFP Workbench into Client Access/400 • DDS indexing keyword to support for archiving and viewing applications • Support for line data formatting enhanced • Automatic resolution enhancement • Font performance improvement reduces CPU utilization • Sizing and rotation for page segments • Enhanced PostScript transform • IPDS Pass-through • Enhanced AFP Font Collection with support for euro, expanded languages • Availability of new versions of Advanced Print Utility, Page Printer Formatting Aid, AFP Toolbox, and SAP R/3 AFP Print (members of the AFP PrintSuite family) These are explained in greater detail in the following sections. Appendix J. Printing enhancements in recent OS/400 releases 399 J.3.1 Integration of AFP Workbench into Client Access/400 Functions in the AFP Workbench that had previously been an optional, priced feature of Client Access/400 are now integrated as part of the product. Client Access/400 (CA/400) users can now view any document on their PC or in a CA/400 shared folder that is in AFP, ASCII, TIFF, PCX, DCX, or DIB data format. They can also use AFP Workbench Viewer to create page segments (images) or overlays (electronic forms) from any PC application program and upload them to OS/400 for printing with OS/400 applications. This AFP Printer Driver for Windows can also be downloaded from the Web site at: http://www.ibm.com/printers/as400 For additional details, see Chapter 5, “The IBM AFP Printer Driver” on page 117. J.3.2 Indexing keyword in DDS The AFP presentation architecture includes support for indexing fields in a print record to be used for navigation by an archival/retrieval program, or by a document viewing or browsing program (such as the AFP Viewer in CA/400). In Version 4 Release 3, Data Description Specifications (DDS) has been enhanced to enable specification of fields in a record as AFP index fields. 
Output from these applications can now be used with archival/retrieval programs for fast retrieval of individual pages or groups or pages in the archive. Archival programs that use AFP index fields include IBM Content Manager/OnDemand for AS/400, OnDemand for AIX, and OnDemand for NT. OnDemand for AS/400 was formerly named RDARS/400. The AFP Viewer in CA/400 also supports using index information in documents to quickly locate any group of pages within a spooled file, PC file, or shared folder. J.3.3 Support for line data enhanced Support for generating line data from AS/400 applications and using page and form definitions to format output external to the application program was introduced in Version 3 Releases 2 and 7. However, many applications from third-party vendors, as well as customer applications, could not take advantage of this powerful new formatting capability because they were already formatted using DDS (although the DDS specifications were simple). In Version 4 Release 3, new OS/400 system function has been provided to automatically convert output from programs that use DDS into line data so that they can be formatted using all the capabilities of AFP page definitions and form definitions objects. Page and form definitions are created using Page Printer Formatting Aid (PPFA/400) or similar products. PPFA/400 is a component of the AFP PrintSuite for AS/400. See Chapter 3, “Enhancing your output” on page 67, for more information on formatting application output with PPFA/400. J.3.4 Automatic resolution enhancement Many current IBM AS/400 IPDS printers print at a resolution of 600 dpi. However, applications may have been developed to use raster fonts in 240 dpi or 300 dpi resolutions. New multiple resolution font support in OS/400 Version 4 Release 3 provides the capability for these applications to take advantage of the increased print quality of new printers without application or resource changes. PSF/400 will coordinate with the printer to download the best resolution to enable the printer to render a requested font at 600 dpi. 400 IBM AS/400 Printing V Note that resolution enhancement applies to fonts. For images, page segments that are in Image Object Content Architecture (IOCA) are resolution-independent and will be rendered at the resolution of the target printer. For older page segments that use the IM1 format, those are converted to IOCA when possible. When such a conversion is not possible, the image is rendered “as is”. This will result in a change in the size of the image if the IM1 resolution and the printer resolution are different. J.3.5 Font performance improvement For applications that use AFP fonts downloaded to a printer that supports both raster and outline fonts, a performance enhancement in OS/400 V4R3 can result in a reduction in CPU utilization of 50 to 70%. This represents a significant improvement in system performance for customers who print on IBM high-speed AFP printers, or for customers running mid- to high-speed printers on a CPU-constrained system. J.3.6 Sizing and rotating page segments Support for page segments (image objects) has been enhanced with new DDS options for dynamic sizing and rotation. This allows you to create one page segment (a company logo, for example) and dynamically size or rotate it based on the needs of each different print application. Previously, a separate page segment object was required for every size and rotation required across all of your printing applications. 
With this support, only one version of an image is required. The rotation and scaling of page segment images is done in the printer. Therefore, only certain printers are supported (those with printer controllers). J.3.7 Enhanced PostScript transform The transformation of PostScript files, part of Image Print Transform services included in V4R2, is enhanced to provide Double-Byte Character Set (DBCS) support. Using this support enables PostScript files to be transformed to AFPDS or PCL and routed to either AFP or PCL printers. This PostScript transform handles all PostScript L1 functions and some of PostScript L2 functions. See Chapter 7, “Image print transform” on page 161, for information on Image Print Transform. J.3.8 IPDS pass through IPDS pass through is now a standard printer file parameter. With IPDS pass-through, you can significantly increase overall printing performance to IPDS printers for print files not requiring advanced IPDS services. See Appendix A, “PSF/400 performance factors” on page 279, for more information on IPDS pass-through. J.3.9 AFP Font Collection with Euro, expanded languages AFP Font Collection (program 5648-B45), the one-stop resource for AFP fonts, has been repackaged. It now includes support for additional languages and support for the euro currency symbol. AFP Font Collection is a comprehensive set of AFP fonts with over 1,000 fonts from the most popular type families. Such family examples include Times New Roman, Helvetica, and Courier. These fonts come in a full range of sizes, resolutions (240, 300, and outlines), and languages Appendix J. Printing enhancements in recent OS/400 releases 401 (over 48). See Chapter 4, “Fonts” on page 89, for additional information on AFP Font Collection. J.3.10 AFP PrintSuite for AS/400 AFP PrintSuite for AS/400 is a family of products for formatting application output into advanced electronic documents. This family includes: • Advanced Print Utility (APU) • Page Printer Formatting Aid/400 (PPFA/400) • AFP Toolbox for AS/400 • SAP R/3 AFP Print Each of these electronic document products was enhanced with new versions in May 1998 (V3R7M1). For additional details on the changes to APU, see Chapter 2, “Advanced Function Presentation” on page 35. J.4 OS/400 Version 4 Release 2 Changes in V4R2 include: • OS/400 V4R2, including Image Print Transform services • Print Services Facility/400 V4R2, including PostScript support, outline fonts, font capture, cut sheet emulation, and finishing • AFP Utilities V4R2, with enhancements to electronic form creation on the AS/400 system • New and revised guides to AS/400 printing include this redbook and AS/400 Guide to Advanced Function Presentation and Print Services Facility S544-5319. J.4.1 OS/400 Image Print Transform Services OS/400 adds a new subsystem to support documents and files in PostScript print format as well as the TIFF, GIF, and BMP image file formats. These are common formats found in network applications. This new subsystem, called Image Print Transform, will transform those input formats into AFP, PCL, or PostScript format. These transforms are invoked automatically as part of the normal print process or invoked through an API as a standalone process. Let's look at the automatic process first. An application, such as one running on a CA/400 client or the IBM Network Station, generates a PostScript file to an AS/400 output queue. If a writer is started to that queue going to an IPDS printer, then PSF/400 will take control. 
When it starts to process the PostScript file, it passes control to the Image Print Transform subsystem to convert the PostScript to AFP. The Image Print Transform, in turn, uses a new object—the Image Configuration Object—to provide additional information on how to do the conversion. The converted AFP is passed back to PSF/400, which sends it out (as an Intelligent Printer Data Stream) to the printer. The automatic process could also be routed to a PCL printer. In this case, host print transform would receive the PCL data stream from Image Print Transform services and send it on to a PCL printer. 402 IBM AS/400 Printing V The Image Print Transform process can also be run via an API. Here, the input files might reside on the IFS file system. The API can be run to convert the file formats to memory, to another file, or to an output queue. For example, you may want to “preprocess” a PostScript file to AFP prior to putting it on the output queue to speed up the printing process. In addition, there are also non-print applications that may require the kind of transform services provided by this new subsystem. These new transform services add to the transform facilities already provided by the AS/400 system: • AFP to PCL • AFP to TIFF • SCS to ASCII • SCS to TIFF See Chapter 7, “Image print transform” on page 161, for more information about Image Print Transform. J.4.2 Support for outline fonts Outline fonts are now supported on the AS/400 system. Outline fonts are familiar to PC users because they are standard with TruType and Adobe Type 1 fonts. To date, the AS/400 system has only supported raster (also known as bitmapped) fonts. With raster fonts, each character in each font, in each point size, is an image. All of these images are stored on the AS/400 system (as entries in a font character set object). They are downloaded to the printer when they are referenced in an application. In contrast, outline fonts are vector representations of a font. There is only one small object required for all point sizes. Any point size can be selected, as compared to a limited number of sizes with raster fonts. Both AS/400-resident outline fonts (available through AFP Font Collection, for example) and printer-resident outline fonts are supported. Outline fonts improve printing performance, require less printer memory, and provide unlimited size selections. J.4.3 Font capture Font capture is a font performance enhancement that retains (“captures”) a font on the printer hard drive. Font capture has a significant impact when the connection to the printer is slow or when large Double Byte Character Set (DBCS) fonts are used. DBCS fonts are graphic-type fonts used in indeographic languages such as Japanese and Chinese. Font capture also has an impact with Single Byte Character Set (SBCS) fonts, but it may be less. J.4.4 Cut-sheet emulation Cut-sheet emulation ensures that duplex applications, created for cut-sheet printers, print correctly on two-up duplex production printers (such as the Infoprint 4000 production printer family). The output from continuous-form production printers (after the forms have been sliced in two and interleaved by postprocessing) is identical to the output from a cut-sheet duplex printer. This capability allows you to easily take advantage of the increased speed and reliability of continuous form printers without changing your operating procedures or programs. Appendix J. 
Printing enhancements in recent OS/400 releases 403 J.4.5 Finishing support Initial finishing support was added in V4R2. This support included: • Edge stapling • Corner stapling • Saddle stitch • Insert • Z-fold Edge, corner, and saddle stitching were added to the printer file using the USRDFNDTA keyword. The syntax for the USRDFNDATA keyword is: 'CORNERSTPL(*TOPLEFT)') Insertion and z-fold (as well as the stapling and stitching options) were added to the AS/400 page and form definitions. PPFA/400 or other similar tools for building AS/400 page and form definitions could be used to enable these functions. Certain finishing operations require a combination of DDS and the form definition (at V4R2). These functions were integrated in DDS and the printer file at V4R4. See J.2, “Version 4 Release 4” on page 396, for the current support for finishing. J.4.6 TCP/IP configuration enhancements Several changes were made to the session management of TCP-IP connected IPDS printers. With the Automatic Session Recovery keyword (AUTOSSNRCY), you can specify if you want PSF/400 to automatically reconnect on a TCP/IP network error. With the acknowledgment frequency keyword (ACTFRQ), you can set how often to query the printer for the updated page counter. A high frequency setting minimizes the number of reprinted pages on a network error, but may reduce performance slightly. The remote location name, port, and activation timer were removed from the PSF configuration object (PSFCFGOBJ). These keywords were moved to the printer device description. J.4.7 Font substition messages A new parameter in the PSF Configuration Object provides control over whether font substitution messages are logged to the message queue. See Appendix 11.1.2, “Configuring LAN-attached IPDS printers on V3R7 and later” on page 230, for more information. J.4.8 AFP Utilities for V4R2 AFP Utilities is a set of three supporting utilities for AFP applications on the AS/400 system. The Overlay Utility allows you to create electronic forms from any AS/400 terminal. The Print Format Utility is an AFP version of Query/400. It creates AFP applications directly from AS/400 database files. Resource Management Utility enables you to manage overlay and image resources on the AS/400 system. For details on V4R2 enhancements, see 2.3, “AFP Utilities/400 V4R2 enhancements” on page 45. 404 IBM AS/400 Printing V © Copyright IBM Corp. 2000 405 Appendix K. Using the additional material This redbook also contains additional Web material. See the appropriate section below for instructions on using or downloading this material. K.1 Locating the additional material on the Internet The CD-ROM, diskette, or Web material associated with this redbook is also available in softcopy on the Internet from the IBM Redbooks Web server. Point your Web browser to: ftp://www.redbooks.ibm.com/redbooks/SG242160 Alternatively, you can go to the IBM Redbooks Web site at: ibm.com/redbooks Select the Additional materials and open the directory that corresponds with the redbook form number. K.2 Using the Web material The additional Web material that accompanies this redbook includes the following: File name Description hpttoflr.c HPTTOFLR Transform spooled file (uses QwpzHostPrintTransform) hpttoflr.cmd HPTTOFLR transform spooled file and write to folder sg242160.pdf First edition of the Printing V redbook ws_ftp.log FTP log file K.2.1 How to use the Web material Create a subdirectory (folder) on your workstation and copy the contents of the Web material into this folder. 
406 IBM AS/400 Printing V © Copyright IBM Corp. 2000 407 Appendix L. Special notices This publication is intended to help customers, business partners, and IBM system engineers who need to understand the fundamentals of printing on the AS/400 system to help them develop, or advise others about the design and development of AS/400 printing applications. The information in this publication is not intended as the specification of any programming interfaces that are provided by Print Services Facility/400, PrintSuite/400, AFP Utilities/400, and IBM Font Collection. See the PUBLICATIONS section of the IBM Programming Announcement for Print Services Facility/400, PrintSuite/400, AFP Utilities/400, and IBM Font Collection for more information about what publications are considered to be product documentation. References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service. Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels. IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, 500 Columbus Avenue, Thornwood, NY 10594 USA. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The information about non-IBM ("vendor") products in this manual has been supplied by the vendor and IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment. 408 IBM AS/400 Printing V The following document contains examples of data and reports used in daily business operations. 
To illustrate them as completely as possible, the examples contain the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries: The following terms are trademarks of other companies: Tivoli, Manage. Anything. Anywhere.,The Power To Manage., Anything. Anywhere.,TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United States, other countries, or both. In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer - Tivoli A/S. C-bus is a trademark of Corollary, Inc. in the United States and/or other countries. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries. PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license. ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel IBM  Redbooks Redbooks Logo Advanced 36 Advanced Function Printing AFCCU AFP AIX APL2 APPN AS/400 AS/400e AT BookMaster ContentConnect CT Current EDMSuite GDDM ImagePlus InfoWindow Intelligent Printer Data Stream IPDS Manage. Anything. Anywhere. Netfinity Network Station OfficeVision OfficeVision/400 Operating System/400 OS/2 OS/400 Print Services Facility RMF RS/6000 S/390 SecureWay SOMobjects SP System/370 System/390 WIN-OS/2 Wizard XT 400 Lotus Freelance Graphics Word Pro Notes Tivoli TME NetView Cross-Site Tivoli Ready Tivoli Certified Planet Tivoli Appendix L. Special notices 409 Corporation in the United States and/or other countries. UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others. 410 IBM AS/400 Printing V © Copyright IBM Corp. 2000 411 Appendix M. Related publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook. M.1 IBM Redbooks For information on ordering these publications see “How to get IBM Redbooks” on page 415. • Inside AS/400 Client Access for Windows 95/NT, Version 3 Release 1 Modification 2, SG24-4748 • Managing AS/400 V4R4 with Operations Navigator, SG24-5646 The following publications are available online in softcopy format only at the redbooks home page at: http://www.redbooks.ibm.com At the site, click Redbooks Online and enter the book number in the search field that appears. Click Submit Search. When the search results appear, click the appropriate book title. • AS/400 Printing III, GG24-4028 • AS/400 Printing IV, GG24-4389 M.2 IBM Redbooks collections Redbooks are also available on the following CD-ROMs. Click the CD-ROMs button at ibm.com/redbooks for information about all the CD-ROMs offered, updates and formats. 
IBM AS/400 Printing V (SG24-2160-01)

A primer on AS/400 printing in today's networked environment. Configuration, performance, problem determination, and enhancements. In-depth education on AFP and ASCII printing.

This IBM Redbook describes how to use printing functions on the AS/400 system. It supplements the standard reference documents on AS/400 printing by providing more specific "how to" information, such as diagrams, programming samples, and working examples. It addresses the printing function found in OS/400, Print Services Facility/400 (PSF/400), Advanced Print Utility, Page Printer Formatting Aid, AFP Font Collection, and other print-enabling software. The original edition applied to Version 3 Release 2 for CISC systems and Version 4 Release 2 for RISC systems. This second edition includes information about the new functions that are available in releases up to and including Version 4 Release 5. This document is intended for customers, business partners, and IBM systems specialists who need to understand the fundamentals of printing on the AS/400 system.
It is designed to help you develop or advise others concerning the design and development of AS/400 printing applications. This document is not intended to replace existing AS/400 printing publications, but rather to expand on them by providing detailed information and examples.

© Copyright IBM Corp. 2003. All rights reserved. ibm.com/redbooks

Redbooks Paper

Bringing PHP to Your IBM iSeries Server

Hypertext Preprocessor (PHP) is a powerful server-side scripting language for the Apache Web server. PHP is popular for its ability to process database information and create dynamic Web pages. Server-side refers to the fact that PHP language statements, which are included directly in your HTML, are processed by the Web server. Scripting language means that PHP is not compiled. Since the result of processing PHP language statements is standard HTML, PHP-generated Web pages are quick to display and are compatible with almost all Web browsers and platforms. PHP is to the open source Apache community what Net.Data® is to the IBM® community.

To "run" PHP scripts with your HTTP Server (powered by Apache), a PHP engine is required on your IBM eServer™ iSeries™ server. The PHP engine is an open source product, so this IBM Redpaper demonstrates the process to download, compile, install, and then configure PHP on your iSeries. It explains how to install version 4.3.0 and the older version 4.2.2 of PHP.

The PHP engine is available both as an Apache module and as a CGI-BIN. Support for PHP as a module is not yet available for OS/400®. The step-by-step implementation discussed in this Redpaper involves the CGI version of PHP running in OS/400 Portable Application Solutions Environment (PASE). This allows you to run AIX® binaries directly on an iSeries. It includes the necessary patches for the minor modifications needed to the PHP source code.

Note: If you want to know why this is so great, see the article "Programming with PHP on the iSeries" for iSeries Network by David Larson and Tim Massaro. You can find this article on the Web at:
http://www.iseriesnetwork.com/resources/artarchive/index.cfm?fuseaction=viewarticle&CO_ContentID=15746&channel=art&PageView=Search
With permission from iSeries Network, we include the article in this Redpaper. To skip the article, go to "Prerequisites" on page 11.

David Larson
Bryan Logan
Tim Massaro
Heiko Mueller
Brian R. Smith

Programming with PHP on the iSeries server

Hypertext Preprocessor Language is a powerful, server-side scripting language for Web page creation. Scripting language means PHP requires no compilation, very much like Perl or Rexx. Because PHP is a server-side language, you can include it directly in HTML, and it is recognized and processed by a Web server. The first "P" in PHP is a remnant from the original acronym for Personalized Home Page. This was the term that PHP creator Rasmus Lerdorf used when he first used a set of Perl scripts to monitor access to his online resume. Since then, however, PHP has become the most popular optional module configured on Web servers. See the following Web sites:
http://www.netcraft.com/survey
http://www.securityspace.com/s_survey/data/man.200204/apachemods.html

Here, we introduce the PHP language and walk you step-by-step through configuring PHP to access DB2® Universal Database™ (UDB) from your Apache Web server. Then, we provide examples to show you how iSeries shops can use PHP to create dynamic Web pages based on new or existing iSeries DB2 UDB databases.

What is PHP?

PHP code can easily access database files and output HTML, resulting in non-static, up-to-date Web pages. It's a technique similar to JavaServer Pages (JSPs) or Common Gateway Interface (CGI) binary (often called CGI-BIN) programming. Also, PHP is an open-source project.
Open-source code can be useful if you want to tweak the behavior of PHP, but it's even more valuable because there are many open-source PHP applications and code samples available on the Web. This means you can get a new PHP Web project up and running quickly with little investment.

Restriction: PHP is not supported on the iSeries server by IBM. We have provided these instructions for you to download a public domain open-source copy of a PHP engine to allow you to run PHP scripts on the iSeries server. Specifically, the following items are not supported by IBM:
• The open-source CGI-BIN based PHP engine
• Any of the PHP scripts that you would write against this PHP engine
• The other open source tools described in this Redpaper used for building the PHP engine
These items are fully supported by IBM:
• 5722-SS1 Option 33 OS/400 PASE and all utilities supplied with it
• The VisualAge® C++ compilers
• The HTTP Server (powered by Apache) support for OS/400 PASE CGIs

Hundreds of ready-made applications written in PHP are available as shareware, and many commercial products employ it. Until recently, PHP has enjoyed a reputation for reliability and security. See "Beware of PHP bugs" on page 11.

Figure 1 shows the difference between standard static Web pages and dynamic Web pages using server-side PHP processing. In the first scenario on the left, a standard URL request arrives at the Web server asking for the Web page http://www.somepage.html. The Web server sees this request and returns the HTML that is in the file somepage.html. In the second scenario on the right, the index.php file contains the special <?php and ?> tags; the Web server processes the PHP statements between these tags and returns the resulting HTML to the browser.

The following simple example shows PHP statements embedded in a standard HTML page:

<html>
<head>
<title>Standard HTML Page with PHP HelloWorld</title>
</head>
<body>
<?php echo "<h1>Generated with PHP</h1>"; ?>
<?php echo "<h2>Web server: " . $_SERVER['HTTP_HOST'] . "</h2>"; phpinfo(); ?>
</body>
</html>

This is as simple as it gets. The file starts as a normal HTML file. We simply insert PHP statements following the <?php tag and end them with the ?> tag. The result of our PHP program would be similar to what is shown in Figure 2. This is a dynamic Web page that contains the name of our Web server and a table built by PHP with details about how PHP is configured on our Web server. This is accomplished by using one of several predefined PHP variables (for example HTTP_HOST) and the PHP function phpinfo.

Figure 2 Dynamic Web page generated by the PHP 'Hello World' script

PHP on iSeries

An iSeries user has two options to set up PHP. You can use PHP with OS/400's Portable Application Solutions Environment (OS/400 PASE) and the HTTP Server (powered by Apache). Or you can install a Linux logical partition (LPAR) and run Apache and PHP in that partition. Table 1 shows factors to consider before you make this decision.

Table 1 Which is for you? PHP as a CGI in PASE versus PHP in a Linux LPAR

• OS/400 requirements
  - PHP in PASE and Apache: You should have V5R1 or newer. This should work for V4R5, but we have not tried it ourselves.
  - PHP in Linux LPAR: You must have V5R1 or newer with specific hardware to run Linux.
• Cost
  - PHP in PASE and Apache: A cost is associated with PASE (becomes free in V5R2). Note: In V5R1 and prior releases, PASE was a nominal fee of around 100 US dollars.
  - PHP in Linux LPAR: A cost is associated with the Linux distribution.
• Setup required
  - PHP in PASE and Apache: No setup is required to use PASE.
  - PHP in Linux LPAR: Some setup is associated with the creation of a Linux partition, user IDs, and so on, and the extra LPAR requires some dedicated processor resource.
• Availability and compatibility
  - PHP in PASE and Apache: You must obtain PHP from these instructions and have AIX skills to compile PHP as new versions come out.
  - PHP in Linux LPAR: Linux will be most compatible with new versions of PHP as they come out.
• mySQL
  - PHP in PASE and Apache: mySQL is unavailable in PASE by default. You must download and compile it if desired.
  - PHP in Linux LPAR: mySQL is available as an alternative database (it's fairly common to use mySQL with PHP applications).
• Web server module
  - PHP in PASE and Apache: PHP cannot be a Web server module. It must be a CGI-BIN process only. This matters only in extremely performance-critical Web sites.
  - PHP in Linux LPAR: PHP can be installed as an Apache module.
• Database
  - PHP in PASE and Apache: An ODBC driver is not necessary.
  - PHP in Linux LPAR: To use iSeries DB2 UDB, you must download, install, and configure the iSeries ODBC Driver for Linux. This UNIX-based ODBC driver is free from IBM. It uses sockets to communicate between the Linux LPAR and the iSeries LPAR.

If you plan to install PHP on an iSeries server, you need to be at V5R1 or later (as we mentioned in Table 1, this could work for V4R5, but we have not tried it ourselves) and have PASE installed. PASE is the AIX runtime support for iSeries. See "Prerequisites" on page 11 to see if you have the "right stuff" for running PASE on your AS/400® or iSeries hardware. If you plan to install PHP on a Linux LPAR, PHP is most likely included with your Linux distribution. If not, the installation instructions are virtually identical to those found in the PHP distribution itself and in the PHP site FAQs at:
http://cvs.php.net/co.php/php4/INSTALL

Regardless of where you install PHP, the configuration is the same. To get the Apache Web server to recognize PHP files, you must change the Web server configuration file to include some script aliases for PHP:

ScriptAlias /php-bin/ /usr/local/php/bin
AddType application/x-httpd-php .php
Action application/x-httpd-php /php-bin/php

The directory where PHP is installed may differ.

PHP as a CGI-BIN program

The next example shows a traditional HTML form that uses the Action tag to invoke a CGI-BIN program when a user clicks the Submit button. In this example, the CGI-BIN program is actually a PHP program that processes the fields in the HTML form and uses that information to query a DB2 database. The database we use is called SAMPLE. SAMPLE is actually shipped with V5R1. To create it, follow the instructions in "Creating a sample database" on page 16 in this Redpaper. Figure 3 shows the basic HTML form that we use to perform a database query. Our system name is LPAR3NVM.

Figure 3 Basic HTML form used to perform a database query

Figure 4 shows the results of our query. Each record returned has been placed in a table row.

Figure 4 Results of the query

Here is the dbqueryphp.php script where the actual work is done:
<html>
<head>
<title>PHP DB Query Tester</title>
</head>
<body>
<?php
// With register_globals off (the default in PHP 4.2 and later),
// pick the form fields up from $_REQUEST
$host     = isset($_REQUEST['host'])     ? $_REQUEST['host']     : "";
$database = isset($_REQUEST['database']) ? $_REQUEST['database'] : "";
$query    = isset($_REQUEST['query'])    ? $_REQUEST['query']    : "";

if ($host && $database && $query) {
   // "Open" the database and set the default library for the query
   $link = odbc_connect($host, "", "");
   odbc_setoption($link, 1, SQL_ATTR_DBC_DEFAULT_LIB, $database);
   echo("Query performed: <b>$query</b><br>");
   echo("Results:<br>");
   $result = odbc_exec($link, $query);
   if (!$result):
      echo("<b>Error ".odbc_error().": ".odbc_errormsg()."</b>");
   elseif (odbc_num_rows($result) == 0):
      echo("Query ran successfully");
   else:
?>
<table border=1>
<tr>
<?php
   // One header cell for each column in the result set
   for ($i = 0; $i < odbc_num_fields($result); $i++) {
      echo("<th>".odbc_field_name($result, $i + 1)."</th>");
   }
?>
</tr>
<?php
   // One table row for each record returned by the query
   while (odbc_fetch_into($result, $row_array)) {
      echo("<tr>");
      for ($j = 0; $j < odbc_num_fields($result); $j++) {
         echo("<td>".$row_array[$j]."</td>");
      }
      echo("</tr>");
   }
   echo("</table>");
   endif;
} elseif ($host || $database || $query) {
   echo("All three fields must be filled in for a query<br>");
} else {
   echo("Use PHP to run an SQL Query on an iSeries database:<br>");
}
?>
<form method="post" action="dbqueryphp.php">
iSeries Host:<br>
<input type="text" name="host"><br>
Library of the database to query:<br>
<input type="text" name="database"><br>
Please enter the SQL query to be run:<br>
<input type="text" name="query" size="60"><br>
<input type="submit" value="Submit Query">
</form>
View PHP Source for this Query
</body>
</html>
The highlights include:
• odbc_connect: This is the "open" of the database. The link variable it returns is used by the other odbc functions later in the script.
• odbc_exec: The variable filled in on the HTML form contains the string that we will run as an SQL statement. odbc_exec runs the SQL statement and returns the results in the $result variable.
• odbc_num_fields: This function determines how many columns are returned for each record. We use this value to put HTML tags around each cell.

Another PHP script

For one additional PHP example, let us include a script that will work only in the PASE version of PHP. This example takes advantage of the fact that the PASE "system" command writes any spooled output produced by a command to standard output. That is, you can run any command with an OUTPUT(*PRINT) parameter in the PASE shell and have the results sent to STDOUT. For example, if you're on the PASE command line QP2TERM, you can type the command system wrkactjob (Work with Active Jobs (WRKACTJOB)) and see the results as they scroll across the screen. Our example, phpactjob, simply formats this output into an HTML table. Figure 5 shows the output of this script.

Figure 5 PHP formats the result of Work with Active Jobs (WRKACTJOB) as an HTML table

Here is the phpactjob source code. Note that we use back quotation marks (``) to run the command Work with Active Jobs (WRKACTJOB) and capture the output. This output is then broken into lines by searching for the new line character "\n" using the strtok function of PHP.

<html>
<head>
<title>PHPACTJOB Test PHP with WRKACTJOB Output</title>
</head>
<body>
<h1>PHP Running WRKACTJOB in PASE</h1>
<?php
// Run WRKACTJOB through the PASE "system" command and capture its spooled output
$wrkactjob = `system wrkactjob`;
// The first line of output becomes the table heading
$line = strtok($wrkactjob, "\n");
print "<table border=1>";
print "<tr><th><pre>";
print $line;
print "</pre></th></tr>";
// Each remaining line of the WRKACTJOB output becomes one table row
while ($line = strtok("\n")) {
   print "<tr><td><pre>";
   print $line;
   print "</pre></td></tr>";
}
print "</table>";
print "thend";
?>
</body>
</html>
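The same backtick-and-strtok technique works for other CL commands that send their output to *PRINT when run in batch. As a small illustration only (a sketch; DSPSYSSTS is simply another example command and is not used elsewhere in this Redpaper), a script could capture Display System Status the same way:

<?php
// Capture the spooled output of DSPSYSSTS from the PASE "system" command
$dspsyssts = `system dspsyssts`;
// Show it unformatted; htmlspecialchars keeps any special characters readable
print "<pre>" . htmlspecialchars($dspsyssts) . "</pre>";
?>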
"; print ""; } print ""; print "thend"; ?> Bringing PHP to Your IBM iSeries Server 11 A starting point You have been introduced to PHP and how to use it on the iSeries server. Use this article as a starting point to find other examples and documentation that can have you running PHP in no time.  Probably the best place to find help, tutorials, and examples about PHP is the PHP project Web site at: http://www.php.net  You can also check out the PASE on iSeries PartnerWorld® Web site at: http://www-919.ibm.com®/developer/factory/pase/overview.html  Also see the iSeries Linux home page at: http://www.iseries.ibm.com/linux  You can find a neat demo application showing PHP using ODBC Driver for Linux at the following Web site: http://www-1.ibm.com/servers/eserver/iseries/linux/odbc/guide/demoindex.html This demo includes PHP using binary large objects (BLOBs) that contain employee photos in the sample iSeries EMPLOYEE database. Beware of PHP bugs Recently, a few security holes were discovered in PHP. The most recently discovered one involves the code for handling file uploads. This flaw lets hackers easily crash the PHP server and possibly take it over remotely. The flaw affects PHP versions 4.2.0 and 4.2.1. CERT rates the problem as critical. The PHP Group announced a fix release, version 4.2.2, that all PHP users employing PHP's file-upload facility should install immediately. The fix is available on the Web at: http://www.php.net/release_4_2_2.php Prerequisites This IBM Redpaper assumes that you have the following hardware and software on your iSeries server:  5722-SS1: OS/400 (5722-SS1) at V5R2 (though the same basic steps should work on an iSeries server at V5R1)  5722-SS1 Option 13: OS/400 - System Openness Includes  5722-SS1 Option 33: OS/400 PASE Note: We assume for this document that you are running at V5R2 of OS/400. If you have OS/400 V5R2 then all you must do is to make sure that 5722-SS1 Option 33 OS/400 PASE has been installed. Since OS/400 V5R1 supports some levels of AS/400 hardware that is not supported by PASE (PASE requires a certain version (level) of PowerPC® processor) you must first determine if your AS/400 hardware supports PASE. You can find a detailed list of processors on which PASE can run on the Web at: http://www.as400.ibm.com/developer/factory/pase/ehardware.html 12 Bringing PHP to Your IBM ^ iSeries Server  5722-DG1: IBM HTTP Server for iSeries. This LPP contains the HTTP Server (powered by Apache), which is the only HTTP server for which PHP will work. Also, install the latest Apache group PTF package. For V5R2, the group PTF package number is SF99098.  The command make A make command can be found in OS/400’s PASE for V5R2. If you are using V5R1 of OS/400 then you will have to download the make command. We suggest using the GNU make command that can download from http://www.gnu.org/directory/gnu/make.html  5799-PTL: (Optional) PRPQ iSeries Tools for Developers. This tool kit is optional for this work but you may find it useful for some other similar projects. For details see http://www.iseries.ibm.com/developer/factory/tools This Redpaper also assumes that you have the following hardware and software on your build machine. The build machine could be either a separate pSeries™ running AIX or an iSeries running OS/400 with the following software:  The command patch If you do not have a patch program on your system, try the GNU patch. The GNU patch program is usually not on AIX or OS/400 machines. 
You can download version 2.5 (not 2.5.4) from: ftp://ftp.gnu.org/pub/gnu/patch
To compile the source, follow these steps:
a. Untar the source using the tar command.
b. Type cd to go to the directory.
c. Perform a ./configure.
d. Run the make command.
e. Run make install.
• The GNU command gzip, which compresses and decompresses files. You can download it from: http://www.gnu.org/directory/GNU/gzip.html
• The VisualAge C++ compiler for AIX. You can find information about this compiler at: http://www.ibm.com/software/ad/vacpp/
If your build machine will be AIX (not OS/400), you must match the AIX version to the target OS/400 PASE version. That is, the application binary created on AIX needs to be compatible with the version of OS/400 PASE that you want the application to run in. To help you plan this issue, see: http://publib.boulder.ibm.com/iseries/v5r2/ic2924/info/rzalf/rzalfplanning.htm
We have tested these instructions on AIX 4.3 and newer. Alternatively, V5R2 of OS/400 PASE now supports installation of either the IBM VisualAge C++ Professional for AIX Version 6.0 or the IBM C for AIX Version 6.0 software products. This means you can compile OS/400 PASE applications within OS/400 PASE. A separate AIX system is not required. IBM VisualAge C++ Professional for AIX Version 6.0 (5765-F56) and IBM C for AIX (5765-F57) are separately available program products from IBM. Note that the VisualAge C++ Professional for AIX compiler product also includes the C for AIX compiler product.
Note: At the time of this writing, an offer for a free 60-day trial version of VisualAge C++ Professional for AIX, V6.0 is available for download.
See: http://www14.software.ibm.com/webapp/download/search.jsp?go=y&rs=vacpp

Installation instructions

Follow these steps to download and prepare the PHP source files for compile.

Downloading PHP

Download the version of PHP you need for your iSeries.
Note: We include the patch files for both the 4.3.0 and the older 4.2.2 versions of PHP. These instructions, however, are written for version 4.3.0.
1. Download the tar file php-4.3.0.tar.gz for PHP 4.3.0 from the following Web site: http://www.php.net
2. Using FTP, send this file to the machine on which you will build PHP. This may be the AIX machine or the iSeries machine with the VisualAge compiler. We will call this your build machine.
3. Untar the file by using the following commands:
gunzip php-4.3.0.tar.gz
tar -xvf php-4.3.0.tar

Patching the source code file

A patch is required to run PHP on the iSeries. We include patch files for both the 4.3.0 and the older 4.2.2 versions of PHP. The patch changes the default PHP DB2 support from AIX DB2 to OS/400 DB2.
1. Download and save the patch file to the build machine. You can find the file on the ITSO home page at: http://www.redbooks.ibm.com/
Click Additional materials to access the directory listing of additional materials to download. Open the directory REDP3639, in which you will find the files php422pase.patch and php430pase.patch. Download the patch file into the same directory from which you ran the tar command.
2. Change directory (cd) to that directory and run the following patch command:
patch -p0 < php430pase.patch

Locating iSeries specific files

You must locate and bring to your build machine the following iSeries files:
• The sqlcli.h and libdb400.exp files, which contain DB2 UDB for AS/400 support.
• The as400_libc.exp file, which is an iSeries-specific extension to the AIX file libc.a. This file is part of 5722-SS1 Option 13, OS/400 - System Openness Includes.
Follow these instructions to obtain the files from your iSeries:
1. Enter the following command:
CPY OBJ('/QIBM/include/sqlcli.h') TODIR('/home/yourid') TOCCSID(*STDASCII) DTAFMT(*TEXT)
2. Using FTP, send the /home/yourid/sqlcli.h file from your iSeries to the build machine.
3. Send, using FTP, the libdb400.exp and as400_libc.exp files from the iSeries directory /QOpenSys/QIBM/ProdData/OS400/PASE/lib to the AIX build machine.

Preparing for the PHP compile

Follow these steps to prepare the files and directories needed for the successful compile of PHP on your build machine. These steps assume ksh.
1. Set the CFLAGS, CC, and CPPFLAGS environment variables as follows:
export CFLAGS='-ma -DPASE -I /home/yourid -bI:/home/yourid/libdb400.exp -bI:/home/yourid/path/as400_libc.exp'
export CC=xlc
export CPPFLAGS=-qflag=e:e
Note: The flags for -I and -bI are the uppercase format of the letter "i".
2. Change to the php-4.3.0 directory using the cd command.
3. Run the following command:
./configure --with-ibm-db2 \
  --with-config-file-path=/QOpenSys/php/etc \
  --prefix=/QOpenSys/php/ \
  --enable-force-cgi-redirect \
  --without-mysql
4. If you are compiling directly in PASE on the iSeries, add the following configure flags (be sure to add a "\" to the end of the previous line):
  --build=powerpc-ibm-aix4.3.3.0 \
  --host=powerpc-ibm-aix4.3.3.0
The configure should take some time to run. After it finishes, you need to make final adjustments to the files listed in the following steps.
5. Edit the Makefile:
remove -ldb2 from ODBC_LIB
remove -ldb2 from EXTRA_LIBS
Note: The Makefile is generated with lines greater than 2048 characters. Some editors, such as vi, cannot handle the long lines, so you need to use a different editor. FTP the Makefile to a different machine and back if necessary.
6. Edit the config_vars.mk file:
remove -ldb2 from ODBC_LIBS
remove -ldb2 -lbind from EXTRA_LIBS
Note: PHP version 4.3.0 does not have a config_vars.mk. This step is for PHP version 4.2.2 only.
7. Edit the main/build-defs.h file:
remove -ldb2 from PHP_ODBC_LIBS
8. Edit the main/php_config.h file:
Delete #define HAVE_MMAP 1
Delete #define HAVE_SETITIMER 1
If you will run this on a V5R1 OS/400 server, also delete the following lines:
Delete #define HAVE_STATVFS 1
Delete #define HAVE_PREAD 1
Delete #define HAVE_PWRITE 1

Compiling (make)

You have two choices depending on whether you are compiling in AIX on the pSeries or in PASE on the iSeries.

If you are compiling in PASE on the iSeries

Follow these steps if you are compiling the PHP source code on your iSeries:
make
make install
mkdir /QOpenSys/php/etc
cp php.ini-dist /QOpenSys/php/etc/php.ini
This installs and puts all the files in the correct directory. You need write access to the /QOpenSys directory. At this point, you may skip to "Testing PHP" on page 16.

If you are compiling in AIX on the pSeries

Follow these steps if you are compiling the PHP source code on your pSeries:
1. Edit the Makefile (see the note in step 5 on page 14 about the long lines of the Makefile): for the line "install_targets =", remove "install-pear".
2. Enter the following command:
mkdir /tmp/QOpenSys
3. At the AIX prompt, run the following commands:
make
make install INSTALL_ROOT=/tmp/
This installs PHP into /tmp/QOpenSys/php.
4. Enter the following commands in the order shown:
mkdir /tmp/QOpenSys/php/etc
cp php.ini-dist /tmp/QOpenSys/php/etc/php.ini
5. Edit the Makefile (see the note in step 5 on page 14 about the long lines of the Makefile): for the line "install_targets =", add "install-pear". If the location of your home directory on your AIX box is different than the location of your home directory in PASE (for example, on AIX your home directory is /usr/home/usr4/jdoe and on PASE it is /home/john), replace all occurrences of "/usr/home/usr4/jdoe/" with "/home/john/" in the Makefile. Make sure that you include the first and last "/" so you don't lose your directory separator.
6. Enter the following commands in the order shown:
cd /tmp
tar -cvf ~/php430pasebin.tar QOpenSys
cd ~
tar -cvf php430pasesrc.tar php-4.3.0
7. Using FTP, send both php430pasebin.tar and php430pasesrc.tar to your home directory on the iSeries server.
Note: The following steps are all done in PASE on the iSeries and not in AIX.
8. Enter the following commands in the order shown:
cd /
tar -xvf ~/php430pasebin.tar
cd ~
tar -xvf php430pasesrc.tar
cd php-4.3.0
make install-pear

Testing PHP

From the PASE shell in OS/400, run the command:
/QOpenSys/php/bin/php -v
This should tell you the version of PHP you have.
Note: If you try running the PHP binary in PASE and it dies with an illegal instruction, check for the existence of a job log. Several things can cause an illegal instruction signal and kill a PASE application. If the illegal instruction was caused by an unsupported system call, the name of the system call will be specified in the job log. The job log should tell you the name of the illegal instruction. Next, find the corresponding HAVE_ line in php_config.h and delete it. Then recompile. This should only happen if you're compiling on a version of AIX that supports certain syscalls that PASE does not support (in addition to the five noted earlier).

Configuring HTTP Server (powered by Apache) to use PHP

Edit the file httpd.conf using the Apache GUI interface. The key statements needed are:
ScriptAlias /php-bin/ /QOpenSys/php/bin/
AddType application/x-httpd-php .php
Action application/x-httpd-php /php-bin/php
<Directory /QOpenSys/php/bin/>
   Options +ExecCGI
   order allow,deny
   allow from all
</Directory>
Stop and start your HTTP Server (powered by Apache) Web server.

Creating a sample database

Starting in V5R1, a sample database is shipped with the system. This is explained on the Web at: http://www.ibm.com/servers/eserver/iseries/db2/sqldata.htm
To unpack and create the sample database, invoke the procedure from any SQL interface as follows:
CALL QSYS.CREATE_SQL_SAMPLE('SAMPLE')
Here SAMPLE is the name of the schema that you want to create. However, the sample PHP scripts currently require some updates. For example, PASE PHP runs as a CGI-BIN and cannot use the $_SERVER['PHP_AUTH_USER'] and $_SERVER['PHP_AUTH_PW'] values.
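Because the CGI version of PHP cannot see the PHP_AUTH values, scripts that rely on HTTP authentication need a different sign-on mechanism. The "Limitations" section that follows suggests collecting the user name and password with your own form and setting a cookie. The following is only a minimal sketch of that idea under stated assumptions: the login.php file name, the check_password() helper, and the phpuser cookie are illustrations we made up for this sketch, not part of the PHP distribution or of this Redpaper's sample scripts.

<?php
// login.php - minimal sketch of a form-and-cookie sign-on for CGI PHP.
// check_password() is a placeholder you would implement yourself,
// for example by validating against your own user table.
// Storing a plain user name in a cookie is not secure; this only shows the flow.
function check_password($user, $password) {
   return ($user == "WEBUSER" && $password == "secret");   // placeholder only
}

if (isset($_POST['user']) && isset($_POST['password'])
      && check_password($_POST['user'], $_POST['password'])) {
   // Remember the signed-on user in a cookie instead of $_SERVER['PHP_AUTH_USER']
   setcookie("phpuser", $_POST['user']);
   echo "Welcome, " . $_POST['user'];
} elseif (isset($_COOKIE['phpuser'])) {
   echo "Already signed on as " . $_COOKIE['phpuser'];
} else {
   // No cookie yet: show the sign-on form
   echo '<form method="post" action="login.php">';
   echo 'User: <input type="text" name="user"><br>';
   echo 'Password: <input type="password" name="password"><br>';
   echo '<input type="submit" value="Sign on">';
   echo '</form>';
}
?>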
Also, when connecting to a database, you may normally use something like this example:
$dsn = "DRIVER=iSeries Access ODBC Driver;SYSTEM=$isdb_system;DBQ=$isdb_database";
$db = odbc_connect($dsn, $user, $password);
But in PASE, you use something like this example:
$db = odbc_connect($isdb_system, "", "");
odbc_setoption($db, 1, SQL_ATTR_DBC_DEFAULT_LIB, $isdb_database);
Limitations
Since PHP runs as a CGI application and not as an Apache module, some things will not work in this implementation on the iSeries:
• HTTP authentication will not work, so any script that tries to use the variables $_SERVER['PHP_AUTH_USER'] and $_SERVER['PHP_AUTH_PW'] will not work. You need to use user accounts and make a form that gets the user name and password and sets a cookie instead.
• PHP_SELF does not work. There is a bug in the CGI version of PHP 4.3.0 that corrupts the $_SERVER['PHP_SELF'] variable. For more details on this bug, see PHP's bug page at: http://bugs.php.net/bug.php?id=21261 By the time you read this, that page may have a patch that will fix the issue. If it does, then apply the patch. If it doesn't, use the fix suggested by mailto:tapken@engter.de in the bug report: create a file called "self_fix.php" in /QOpenSys/php/lib/php/ containing the script given in that bug report. Then, in /QOpenSys/php/etc/php.ini, look for the line that says:
auto_prepend_file =
Change this line to:
auto_prepend_file = self_fix.php
This should fix the $_SERVER['PHP_SELF'] bug.
Note: It does not matter which user ID and password are used when you connect to the ODBC database. The connection uses the authority of the user profile that is running the Web server process. Use the ServerUserID directive in the Apache configuration to change this. It is actually somewhat of a security hole if you allow others to make Web pages and do not configure the Apache Web server to run under a different user.
PHP 4.2.2 errata
The biggest change from 4.2.2 to 4.3.0 was the configuration process. For this document to apply to 4.2.2, make the following changes to the steps listed previously as noted:
• For "Locating iSeries specific files" on page 13:
– Step 5 on page 14: Ignore the note about the long Makefile lines; the problem does not exist in 4.2.2.
– Step 6 on page 14: PHP version 4.3.0 does not have config_vars.mk. This step is for PHP version 4.2.2 only.
• For "If you are compiling in AIX on the pSeries" on page 15, follow these steps instead, because 4.2.2 does not use PHP itself to try to install PEAR:
cd php-4.2.2
make
cd ..
tar -cvf php422pasesrc.tar php-4.2.2
FTP the tar file to PASE. The following steps are all done in OS/400 PASE on the iSeries and not in AIX:
cd ~
tar -xvf php422pasesrc.tar
cd php-4.2.2
make install
The team that wrote this Redpaper
The following team created this IBM Redpaper:
David Larson is a staff software engineer for IBM in Rochester, Minnesota. His career began at IBM enhancing the functionality of OS/400 PASE and assisting groups inside and outside of IBM in using OS/400 PASE. His recent projects include OS/400 PASE integration with the OS/400 JVM, virtual device drivers for OS/400 and Linux, and hypervisor development.
Bryan Logan is a software engineer for IBM Rochester. He is currently on the OS/400 PASE development team. You can contact him by e-mail at: mailto:bryanlog@us.ibm.com
Tim Massaro is an advisory programmer for IBM in Rochester, Minnesota. Tim's career includes stints on several S/38 and AS/400 teams, including Work Management, Message Handler, and Operational Assistant.
Tim is currently on the DEV/2000 Tools Team. You can reach him by e-mail at: mailto:tmassaro@us.ibm.com Heiko Mueller is a member of the Communications and the OS team for IBM Austria. Prior to joining this team, he worked for a German company delivering support for the iSeries platform. He has passed the certification exams for WebSphere® Solution Designer and iSeries Professional System Operator and spent much of his spare time improving his skills in Java programming, Linux, and AIX. You can reach Heiko by e-mail at: mailto:heiko_mueller@at.ibm.com Brian R. Smith is a Sr. Consulting I/T Specialist in the IBM International Technical Support Organization (ITSO) Rochester Center. The first third of his career was spent in the Rochester Lab with the design, coding, and testing of the System/38™ and AS/400 in the area of communications. He then jumped the wall into technical marketing support in 1990 to pursue the life of teaching and writing. Brian is the team leader of the iSeries e-business team at the ITSO Rochester Center. You can reach Brian via e-mail at: mailto:brsmith@us.ibm.com © Copyright IBM Corp. 2003. All rights reserved. 19 Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. 
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces. 20 Bring PHP to Your IBM eServer iSeries Server This document created or updated on April 30, 2003. Send us your comments in one of the following ways:  Use the online Contact us review redbook form found at: ibm.com/redbooks  Send your comments in an Internet note to: redbook@us.ibm.com  Mail your comments to: IBM Corporation, International Technical Support Organization Dept. JLU Building 107-2 3605 Highway 52N Rochester, Minnesota 55901-7829 U.S.A. Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: ^™ ™ Redbooks(logo) ™ ibm.com® iSeries™ pSeries™ AIX® AS/400® DB2 Universal Database™ DB2® IBM® Net.Data® OS/400® PartnerWorld® PowerPC® System/38™ VisualAge® WebSphere® The following terms are trademarks of other companies: ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others. ®

Redpaper ibm.com/redbooks High Availability On the AS/400 System A System Manager’s Guide Susan Powers Nick Harris Ellen Dreyer Andersen Sue Baker David Mee Tools and solutions to improve your highly available AS/400 system Components for a successful high availability system Hardware options for high availability Front cover International Technical Support Organization High Availability On the AS/400 System: A System Manager’s Guide June 2001 © Copyright International Business Machines Corporation 2001. All rights reserved. Note to U.S Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp. First Edition (June 2001) This edition applies to Version 4, Release Number 5 of OS/400 product number 5769-SS1. Comments may be addressed to: IBM Corporation, International Technical Support Organization Dept. JLU Building 107-2 3605 Highway 52N Rochester, Minnesota 55901-7829 When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. Before using this information and the product it supports, be sure to read the general information in Appendix H, “Special notices” on page 183. Take Note! iii Contents Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix The team that wrote this Redpaper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Part 1. What is high availability? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Chapter 1. Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 1.1 When to consider a high availability solution . . . . . . . . . . . . . . . . . . . . . . . .3 1.1.1 What a high availability solution is. . . . . . . . . . . . . . . . . . . . . . . . . . . .3 1.2 What high availability is. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 1.2.1 Levels of availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 1.3 Determining your availability requirements . . . . . . . . . . . . . . . . . . . . . . . . .7 1.4 Determining how high you need to go . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 1.5 Estimating the value of availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 1.6 iSeries factors for maximum availability . . . . . . . . . . . . . . . . . . . . . . . . . . .9 1.7 Scheduled versus unscheduled outage . . . . . . . . . . . . . . . . . . . . . . . . . . .10 1.7.1 Scheduled outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11 1.7.2 Unscheduled outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11 1.8 Comparison to planned preventive maintenance (PPM) . . . . . . . . . . . . . .11 1.9 Other availability definition considerations . . . . . . . . . . . . . . . . . . . . . . . .12 Chapter 2. Developing an availability plan . . . . . . . . . . . . . . . . . . . . . . . . .15 2.1 The business plan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15 2.1.1 Project scope and goal definition. . . . . . . . . . . . . . . . . . . . . . . . . . . 
.16 2.2 Human resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16 2.2.1 Project organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17 2.3 Communication and sponsorship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17 2.4 Service level agreements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18 2.5 Third party contracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18 2.5.1 Application providers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18 2.5.2 Operating system provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19 2.5.3 Hardware providers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19 2.5.4 Peripheral equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19 2.5.5 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20 2.6 Verifying the implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22 2.6.1 Documenting the results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22 2.7 Rollout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22 Chapter 3. High availability example solutions . . . . . . . . . . . . . . . . . . . . . .25 3.1 A high availability customer: Scenario 1 . . . . . . . . . . . . . . . . . . . . . . . . . .25 3.2 A large financial institution: Scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . . .26 3.2.1 Benefits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28 3.3 A large retail company: Scenario 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28 3.4 A small manufacturing company: Scenario 4. . . . . . . . . . . . . . . . . . . . . . .29 3.5 A distribution company: Scenario 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30 Part 2. AS/400 high availability functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33 Chapter 4. Hardware support for single system high availability . . . . . . .35 4.1 Protecting your data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35 iv High Availability on the AS/400 System: A System Manager’s Guide 4.2 Disk protection tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 4.3 Disk mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 4.3.1 Standard mirrored protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 4.3.2 Mirrored protection: Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 4.3.3 Mirrored protection: Costs and limitations . . . . . . . . . . . . . . . . . . . . 39 4.3.4 Determining the level of mirrored protection. . . . . . . . . . . . . . . . . . . 40 4.3.5 Determining the hardware required for mirroring . . . . . . . . . . . . . . . 44 4.3.6 Mirroring and performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 4.3.7 Determining the extra hardware required for performance . . . . . . . . 46 4.4 Remote DASD mirroring support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 4.4.1 Remote load source mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
47 4.4.2 Enabling remote load source mirroring. . . . . . . . . . . . . . . . . . . . . . . 47 4.4.3 Using remote load source mirroring with local DASD . . . . . . . . . . . . 47 4.5 Planning your mirroring installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.5.1 Comparing DASD management with standard and remote mirroring 49 4.6 Device parity protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 4.6.1 How device parity protection affects performance . . . . . . . . . . . . . . 51 4.6.2 Using both device parity protection and mirrored protection . . . . . . . 52 4.7 Comparing the disk protection options . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 4.8 Concurrent maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 4.9 Redundancy and hot spare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 4.10 OptiConnect: Extending a single system . . . . . . . . . . . . . . . . . . . . . . . . 55 4.11 Cluster support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.12 LPAR hardware perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.12.1 Clustering with LPAR support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 4.13 UPS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 4.14 Battery backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 4.15 Continuously powered main storage . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 4.16 Tape devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 4.16.1 Alternate installation device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 Chapter 5. Auxiliary storage pools (ASPs). . . . . . . . . . . . . . . . . . . . . . . . . 63 5.1 Deciding which ASPs to protect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 5.1.1 Determining the disk units needed . . . . . . . . . . . . . . . . . . . . . . . . . . 64 5.2 Assigning disk units to ASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 5.3 Using ASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 5.3.1 Using ASPs for availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 5.3.2 Using ASPs to dedicate resources or improve performance. . . . . . . 66 5.3.3 Using ASPs with document library objects . . . . . . . . . . . . . . . . . . . . 67 5.3.4 Using ASPs with extensive journaling . . . . . . . . . . . . . . . . . . . . . . . 68 5.3.5 Using ASPs with access path journaling . . . . . . . . . . . . . . . . . . . . . 68 5.3.6 Creating a new ASP on an active system. . . . . . . . . . . . . . . . . . . . . 68 5.3.7 Making sure that your system has enough working space . . . . . . . . 69 5.3.8 Auxiliary storage pools: Example uses. . . . . . . . . . . . . . . . . . . . . . . 69 5.3.9 Auxiliary storage pools: Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 5.3.10 Auxiliary storage pools: Costs and limitations . . . . . . . . . . . . . . . . 70 5.4 System ASP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 5.4.1 Capacity of the system ASP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 5.4.2 Protecting your system ASP . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . 71 5.5 User ASPs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 5.5.1 Library user ASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 v Chapter 6. Networking and high availability . . . . . . . . . . . . . . . . . . . . . . . .75 6.1 Network management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .75 6.2 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76 6.3 Network components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76 6.4 Testing and single point of failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78 6.5 Hardware switchover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .80 6.6 Network capacity and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81 6.7 HSA management considerations with networking . . . . . . . . . . . . . . . . . .81 6.7.1 Network support and considerations with a HAV application . . . . . . .82 6.8 Bus level interconnection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82 6.8.1 Bus level interconnection and a high availability solution. . . . . . . . . .84 6.8.2 TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .84 Chapter 7. OS/400: Built-in availability functions . . . . . . . . . . . . . . . . . . . .87 7.1 Basic OS/400 functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87 7.1.1 Journaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87 7.1.2 Journal receivers with a high availability business partner solution . .88 7.2 Commitment control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90 7.2.1 Save-while-active with commitment control . . . . . . . . . . . . . . . . . . . .90 7.3 System Managed Access Path Protection (SMAPP) . . . . . . . . . . . . . . . . .90 7.4 Journal management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91 7.4.1 Journal management: Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92 7.4.2 Journal management: Costs and limitations . . . . . . . . . . . . . . . . . . .92 7.5 Logical Partition (LPAR) support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93 7.6 Cluster support and OS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94 Chapter 8. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95 8.1 Foundations for good performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95 8.1.1 Symmetric multiprocessing (SMP). . . . . . . . . . . . . . . . . . . . . . . . . . .95 8.1.2 Interactive jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96 8.1.3 Batch jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96 8.1.4 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96 8.2 Journaling: Adaptive bundling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .97 8.2.1 Setting up the optimal hardware environment for journaling . . . . . . .98 8.2.2 Setting up your journals and journal receivers . . . . . . . . . . . . . . . . . .98 8.2.3 Application considerations and techniques of journaling . 
. . . . . . . . .99 8.3 Estimating the impact of journaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100 8.3.1 Additional disk activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100 8.3.2 Additional CPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100 8.3.3 Size of your journal auxiliary storage pool (ASP). . . . . . . . . . . . . . .100 8.4 Switchover and failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101 Part 3. AS/400 high availability solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103 Chapter 9. High availability solutions from IBM . . . . . . . . . . . . . . . . . . . .105 9.1 IBM DataPropogator/400. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105 9.1.1 DataPropagator/400 description . . . . . . . . . . . . . . . . . . . . . . . . . . .106 9.1.2 DataPropagator/400 configuration. . . . . . . . . . . . . . . . . . . . . . . . . .107 9.1.3 Data replication process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107 9.1.4 OptiConnect and DataPropagator/400. . . . . . . . . . . . . . . . . . . . . . .108 9.1.5 Remote journals and DataPropagator/400. . . . . . . . . . . . . . . . . . . .109 9.1.6 DataPropagator/400 implementation . . . . . . . . . . . . . . . . . . . . . . . .109 9.1.7 More information about DataPropagator/400 . . . . . . . . . . . . . . . . . .109 vi High Availability on the AS/400 System: A System Manager’s Guide Chapter 10. High availability business partner solutions . . . . . . . . . . . . 111 10.1 DataMirror Corporation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 10.1.1 DataMirror HA Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 10.1.2 ObjectMirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 10.1.3 SwitchOver System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 10.1.4 OptiConnect and DataMirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 10.1.5 Remote journals and DataMirror . . . . . . . . . . . . . . . . . . . . . . . . . 114 10.1.6 More information about DataMirror. . . . . . . . . . . . . . . . . . . . . . . . 114 10.2 Lakeview Technology solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 10.2.1 MIMIX/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 10.2.2 MIMIX/Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 10.2.3 MIMIX/Switch. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 10.2.4 MIMIX/Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 10.2.5 MIMIX/Promoter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 10.2.6 OptiConnect and MIMIX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 10.2.7 More Information About Lakeview Technology . . . . . . . . . . . . . . . 119 10.3 Vision Solutions: About the company. . . . . . . . . . . . . . . . . . . . . . . . . . 119 10.3.1 Vision Solutions HAV solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 119 10.3.2 Vision Suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 10.3.3 OMS/400: Object Mirroring System . . . . . . . . . . . . . . . . . . . . . . . 
123 10.3.4 ODS/400: Object Distribution System . . . . . . . . . . . . . . . . . . . . . 124 10.3.5 SAM/400: System Availability Monitor . . . . . . . . . . . . . . . . . . . . . 124 10.3.6 High Availability Services/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 10.3.7 More information about Vision Solutions, Inc. . . . . . . . . . . . . . . . 126 Chapter 11. Application design and considerations . . . . . . . . . . . . . . . . 127 11.1 Application coding for commitment control. . . . . . . . . . . . . . . . . . . . . . 127 11.2 Application checkpointing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 11.3 Application checkpoint techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 11.3.1 Historical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 11.4 Application scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 11.4.1 Single application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 11.4.2 CL program example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Chapter 12. Basic CL program model . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 12.1 Determining a job step. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 12.1.1 Summary of the basic program architecture . . . . . . . . . . . . . . . . . 133 12.2 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 12.2.1 Distributed relational database. . . . . . . . . . . . . . . . . . . . . . . . . . . 133 12.2.2 Distributed database and DDM . . . . . . . . . . . . . . . . . . . . . . . . . . 134 12.3 Interactive jobs and user recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 12.4 Batch jobs and user recovery and special considerations . . . . . . . . . . 135 12.5 Server jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 12.6 Client Server jobs and user recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 137 12.7 Print job recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Part 4. High availability checkpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Appendix A. How your system manages auxiliary storage . . . . . . . . . . . .141 A.1 How disks are configured. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141 A.2 Full protection: Single ASP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142 A.3 Full protection: Multiple ASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143 A.4 Partial protection: Multiple ASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .144 vii Appendix B. Planning for device parity protection. . . . . . . . . . . . . . . . . . . 147 B.1 Mirrored protection and device parity protection to protect the system ASP. 147 B.2 Mirrored protection in the system ASP and device parity protection in the user ASPs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 B.2.1 Mirrored protection and device parity protection in all ASPs. . . . . . . . . 149 B.2.2 Disk controller and the write-assist device . . . . . . . . . . . . . . . . . . . . . . 150 B.2.3 Mirrored protection: How it works . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 151 Appendix C. Batch Journal Caching for AS/400 boosts performance. . . 153 C.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 C.2 Benefits of the Batch Journal Caching PRPQ. . . . . . . . . . . . . . . . . . . . . . . . 153 C.2.1 Optimal journal performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 C.3 Installation considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 C.3.1 Prerequisites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 C.3.2 Limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 C.3.3 For more information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 Appendix D. Sample program to calculate journal size requirement . . . 155 D.1 ESTJRNSIZ CL program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 D.2 NJPFILS RPGLE program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 D.3 Externally described printer file: PFILRPT . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Appendix E. Comparing availability options . . . . . . . . . . . . . . . . . . . . . . . . 167 E.1 Journaling, mirroring, and device parity protection . . . . . . . . . . . . . . . . . . . . 167 E.2 Availability options by time to recover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Appendix F. Cost components of a business case. . . . . . . . . . . . . . . . . . . 169 F.1 Costs of availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 F.1.1 Hardware costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 F.1.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 F.2 Value of availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 F.2.1 Lost business . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 F.3 Image and publicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 F.4 Fines and penalties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 F.5 Staff costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 F.6 Impact on business decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 F.7 Source of information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 F.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 Appendix G. End-to-end checklist. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 G.1 Business plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 G.1.1 Business operating hours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 G.2 High availability project planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 G.3 Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 G.4 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 176 G.4.1 Power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 G.4.2 Machine rooms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 G.4.3 Office building. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 G.4.4 Multiple sites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 G.5 Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 G.6 Systems in current use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 G.6.1 Hardware inventory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 G.6.2 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 G.6.3 LPAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 viii High Availability on the AS/400 System: A System Manager’s Guide G.6.4 Backup strategy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180 G.6.5 Operating systems version by system in use . . . . . . . . . . . . . . . . . . . .180 G.6.6 Operating system maintenance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180 G.6.7 Printers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180 G.7 Applications in current use. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .181 G.7.1 Application operational hours current . . . . . . . . . . . . . . . . . . . . . . . . . .181 Appendix H. Special notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .183 © Copyright IBM Corp. 2001 ix Preface Availability and disaster recovery represents a billion dollar industry in the United States alone. Professional associations and institutes, such as the Association of Contingency Planners, Business Resumption Planning Association, Contingency Planning and Recovery Institute, and their associated journals and magazines are devoted to keeping an information system (and, therefore, the business) available to both internal and external business users. The growth of e-business further emphasizes the need to maintain system availability. Implementing a high availability solution is a complex task that requires diligent effort and a clear view of the objectives to be accomplished. The key to the process is planning and project management. This includes planning for an event, such as an outage, that may never occur, and project management with the discipline to dogmatically prepare, test, and perform for business resumption. Planning is paramount to the health of a highly available business. This Redpaper is intended to help organize the tasks and simplify the decisions involved in planning and implementing a high availability solution. While some of the most relevent items are covered, this Redpaper cannot cover all cases because every situation is unique. To assist IT managers with understanding the most important facts when planning to implement a high availability solution, detailed information is provided. This information can help business partners and IBMers to discuss high availability considerations with customers. 
In addition, this Redpaper provides examples of highly available solutions, the hardware involved in AS/400 availability solutions, and OS/400 operating system options that add to the reliability of the system in an availability environment. Application software and how it affects an availability solution are also discussed. Significant players in the solution are the business partners who provide the high availability middleware. In addition to discussing their products, a checklist is provided to help to establish a planning foundation.
Note: A service offering is available from IBM for examining and recommending availability improvements. Contact your IBM marketing representative for further information.
The team that wrote this Redpaper
This Redpaper was produced by contributions from a team of specialists from around the world working at the International Technical Support Organization, Rochester Center.
Susan Powers is a Senior I/T Specialist at the International Technical Support Organization, Rochester Center. Prior to joining the ITSO in 1997, she was an AS/400 Technical Advocate in the IBM Support Center specializing in a variety of communications, performance, and work management assignments. Her IBM career began as a Program Support Representative and Systems Engineer in Des Moines, Iowa. She holds a degree in mathematics, with an emphasis in education, from St. Mary's College of Notre Dame. She served as the project leader for this redbook.
Nick Harris is a Senior Systems Specialist for the AS/400 system in the International Technical Support Organization, Rochester Center. He specializes in server consolidation and the Integrated Netfinity Server. He writes and teaches IBM classes worldwide in areas of AS/400 system design, business intelligence, and database technology. He spent 11 years as a System Specialist in the United Kingdom AS/400 Business and has experience in S/36, S/38, and the AS/400 system. Nick served to outline the requirements and set much of the direction of this Redpaper.
Ellen Dreyer Andersen is a Certified IT Specialist in Denmark. She has 21 years of experience working with the AS/400 and System/3x platforms. Since 1994, Ellen has specialized in AS/400e Systems Management with a special emphasis on performance, ADSM/400, and high availability solutions.
Sue Baker is a Certified Consulting I/T Specialist working on the Advanced Technical Support team with IBM in Rochester. She has worked over 15 years with IBM mid-range system customers, in the industries of manufacturing, transportation, distribution, education, and telecommunications. She currently focuses on developing and implementing the performance, capacity planning, and operations management techniques needed in the more complex multiple system and high availability customer environment.
David Mee is a Strategic Accounts Project Manager in the Global Strategic Accounts Group of Vision Solutions. He specializes in application and database design, as well as integration and implementation of high availability solutions worldwide. He has over 15 years of experience in IBM Midrange systems, and holds a computer science degree with additional certificates from UCI and UCLA in RPG, Cobol, Pascal, C and Visual Basic programming languages. He writes and teaches classes on high availability, mirroring, and application resiliency for Vision Solutions.
Thanks to the following people for their invaluable contributions to this project:
Steve Finnes, Project Sponsor, Segment Manager, AS/400 Brand
Bob Gintowt, RAS Architecture, Availability/Recovery and Limits to Growth, IBM Rochester laboratory
Fred L. Grunewald, Vision Solutions, Inc.
Glenn Van Benschoten, Director Product Marketing, Lakeview Technology
Michael Warkentin, Senior Product Specialist, DataMirror
Comments welcome
Your comments are important to us! We want our Redpapers to be as helpful as possible. Please send us your comments about this or other Redbooks in one of the following ways:
• Use the online evaluation form found at http://www.redbooks.ibm.com/
• Send your comments in an Internet note to redbook@us.ibm.com
Part 1. What is high availability?
Part 1 of this Redpaper discusses what high availability is. Levels of availability are discussed, as well as outage types, factors comprising an availability plan, and examples of high availability solutions.
Chapter 1. Background
Early systems were considered available when they were up and running. As the demands of business, communications, and customer service grew, systems had to be up and running through the normal working day (usually 8 to 10 hours). Failures during this working period were not acceptable. In availability terms, this was a 5 x 8 service (five days at 8 hours each day). If a system was unavailable during this period, rapid recovery was necessary. Backups would be restored and the system and the database were inspected for integrity. This process could take days for larger databases. These occurrences eventually led to the definition of availability. In general, availability means the amount of service disruption that is acceptable to the end user.
This Redpaper provides insight into the challenges and choices a system manager may encounter when embarking on a project to make a business more highly available. This Redpaper does not provide a detailed technical setup of OS/400 or application products. This information is covered in other technical publications, for example the AS/400 Software Installation guide, SC41-5120.
1.1 When to consider a high availability solution
When considering if a high availability solution is right for you, ask yourself these questions:
• Will we benefit from using synchronized distributed databases?
• Do our users need access to the AS/400 system 24 hours a day, 365 days a year?
• Do our users operate in different time zones?
• Is there enough time for nightly backups, scheduled maintenance, or installing new releases?
• If our telephone sales application is not always up and running, will we lose our customers to the competition?
• Is there a single point of failure for any data center?
• Can we avoid the loss of data or access to the system in the event of a disaster or sabotage?
• When the production machine is overloaded, can we move some users to a different machine for read-only jobs?
A high availability solution can benefit any of these situations.
1.1.1 What a high availability solution is
High (or continuous) availability systems usually include an alternate system or CPU that mirrors some of the activity of the production system, and a fast communications link.
These systems also include replication or mirroring software and enough DASD to handle the volume of data for a reasonable recovery time as shown in Figure 1 on page 4.
Figure 1. The basics of a high availability solution (a production machine mirroring to one or more backup AS/400 machines)
Table 1 outlines some of the requirements of an HA solution. Tally these features to help simplify your investigation of high availability solutions. Columns are provided for Solution A, Solution B, and Solution C; the Data Propagator column is filled in as an example.
Table 1. Requirements of a high availability solution (feature: Data Propagator)
24 x 7 availability: No
Eliminate downtime for backup and maintenance: No
Replication of database: Yes
Replication of other objects: No
Data replication to non-AS/400 systems: Yes
Handle unplanned outages: No
Automatically switch users to a target system: No
Workload distribution: Yes
Error recovery: Yes
Distribution to multiple AS/400 systems: Yes
Commitment control support:
Sync checks: Yes
Filtering of mirrored objects: Yes (DB only)
Execute remote commands: No
OptiConnect support: Yes
Utilize Remote Journals: Yes
Figure 2 illustrates a business operating with high availability.
Figure 2. A highly available business (customers and suppliers reach distribution, manufacturing, and shop floor data collection sites over WAN and LAN networks; each machine room contains application servers, database servers, routers, and AC/UPS power)
Note the redundancy of communications links, servers, the distributed work environment, and the use of backup power.
1.2 What high availability is
High availability is a much maligned phrase because it can have different meanings and is dependent on discussion variables. This Redpaper takes a holistic approach by discussing high availability as it applies to an organization, rather than an individual product or feature. This broadens the scope tremendously and prevents tunnel vision.
High availability in this context states that an organization, or part of an organization, can respond to a third-party request using the organization's normal business model. Normal is defined as including set levels of availability. Requests on the business can be anything, such as a sales call, an inventory inquiry, a credit check, or an invoice run.
High availability is achieved by having an alternative system that replicates the availability of the production system. These systems are connected by high-speed communications. High availability software is used to achieve the replication. Chapter 10, "High availability business partner solutions" on page 111, discusses these solutions for the AS/400e system.
1.2.1 Levels of availability
Information systems experience both planned and unplanned outages. Systems can be classified according to the degree to which they cope with different types of outages. Your system can be as available as it is planned or designed to be. Orient your implementation choices toward your desired level of availability. These levels include:
• High availability: High availability relates to keeping an application available during planned business (service) hours. Systems provide high availability by delivering an acceptable or agreed upon level of service to the business during scheduled periods of operation.
The system is protected in this high availability type of environment to recover from failures of major hardware components, such as a CPU, disks, and power supplies when an unplanned outage occurs. This involves redundancy of components to ensure that there is always an alternative available if something breaks. It also involves conducting thorough testing to ensure that any potential problems are detected before they can affect the production environment. • Continuous operations: Continuous operations means that a system can provide service to its users at all times without outages (planned or otherwise). A system that has implemented continuous operations is capable of operating 24 hours a day, 365 days a year with no scheduled outage. This does not imply that the system is highly available. An application can run 24 hours a day, 7 days a week and still be available only 95% of the time because of unscheduled outages. When unscheduled outages occur, they are typically short in duration and recovery actions are unnecessary or minimal. The prerequisite for continuous operations is that few or no changes can be made to the system. In a normal production environment, this is a very unlikely scenario. • Continuous availability: This type of availability is similar to continuous operations. Continuous availability is a combination of high availability and continuous operations. This means that the applications remain available across planned and unplanned system outages and must be implemented on system, application, and business levels. Continuous availability systems deliver an acceptable or agreed upon service 7 days a week, 24 hours a day. They add to availability provided by fault-tolerant systems by tolerating both planned and unplanned outages. With Background 7 continuous availability, you can avoid losing transactions. End users do not have to be aware that a failure or outage has occurred in the total environment. In reality, when people say they need continuous availability, they usually mean that they want the application to be available at all times during the agreed service hours, regardless of problems with, or changes to, the underlying hardware or software. What makes this more stringent than high availability is that the service hours get longer and longer, to the point that there is no time left for making changes to any of the system components. Note: The total environment consists of the computer, the network, workstations, applications, telephony, site facilities, and human resources. The levels of protection that a total environment offers depends on how many of these functions are wrapped into the integrated solution. 1.3 Determining your availability requirements When most people are first asked how much availability they require, they often reply that they want continuous availability. However, the high cost of continuous availability often makes such requirements unrealistic. The question usually comes down to how much availability someone can afford. There are not many applications that can justify the cost of 100% availability. The cost of availability increases dramatically as you get closer to 100%. Moving from 90% to 97%, for example, probably costs nothing more than better processes and practices, and very little in terms of additional hardware or software. Moving from 97% to 99.9% requires investing in the latest hardware technology, implementing very good processes and practices, and committing to staying current on software levels and maintenance. 
At the highest extreme, 99.9% availability equates to 8.76 hours of downtime a year. 99.998% equates to just 10 minutes of unplanned downtime a year. Removing that last ten minutes of downtime is likely to be more costly than moving from 99.9% to 99.998%. What may be more beneficial, and less expensive, is to address planned outages. In an IBM study, planned outages accounted for over 90% of all outages. Of the planned ones, about 40% are for hardware, software, network, or application changes. Appendix F, "Cost components of a business case" on page 169, helps you decide if the value of availability to the application justifies the expense.
1.4 Determining how high you need to go
It was previously mentioned that the former version of availability translated into a customer's access to the business. A 9 a.m. to 5 p.m. time frame is a valid form of availability requirement. However, today the terms "24 x 7" or "24 x 365" are more commonly used. Even high availability is often substituted by the term "continuous availability". The term "the 9s" is also popular and means a 99% availability. On its face, this seems like a high requirement. However, if you analyze what this value means in business terms, it says that your process is available 361.35 days per year. In other words, for 3.65 days, the process is not available. This equates to 87.6 hours, which is a huge and unacceptable amount of time for some businesses. In recent press articles, up to $13,000 a minute has been quoted as a potential loss. This is a $68,328,000 a year lost potential. It is not difficult to justify a high availability solution when confronted by these sorts of numbers. However, this is both a lesson to learn and an obstacle to overcome. The business can potentially lose this amount of money. When planning for a high availability solution, convincing the business to commit to even a fraction of this amount is difficult. However, use any recent unplanned or planned outages to estimate the cost. It is recommended that you start with a departmental analysis. For example, how much would it cost the business if the salespeople could not accept orders for two hours due to a system failure?
To continue using the 9s, add .9% to your figure of 99%. This now equals 99.9% availability. Although this is obviously very close to 100%, it still equates to 8.76 hours of yearly downtime. The AS/400e has been quoted as offering 99.94% availability, or 5.1 hours. This is according to a recent Gartner Group report, "AS/400e has the highest availability of any stand-alone, general business server: 99.94 percent" ("Platform Availability Data: Can You Spare a Minute?" Gartner Group, 10/98). This applies to the hardware and operating system and unplanned outages. It does not apply to application or scheduled outages.
1.5 Estimating the value of availability
The higher the level of availability, the higher the investment required. It is important to have a good understanding of the dollar value that IT systems provide to the business, as well as the costs to the business if these systems are not available. This exercise can be time consuming and difficult when you consider the number of variables that exist within the company. Some companies delay the analysis.
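The arithmetic behind the downtime and loss figures quoted in 1.4 is simple enough to script when you want to test other availability levels or loss rates. The fragment below is only an illustration (PHP is used merely because it appears elsewhere in this document; a spreadsheet works just as well), and the $13,000-per-minute figure is the press estimate quoted above, not a measured value:
<?php
// Yearly downtime and potential loss for a given availability percentage.
$availability  = 99.0;                 // percent available
$hoursPerYear  = 365 * 24;             // 8760 hours
$downtimeHours = $hoursPerYear * (1 - $availability / 100);   // 87.6 hours
$lossPerMinute = 13000;                // dollars per minute, press estimate
$potentialLoss = $downtimeHours * 60 * $lossPerMinute;         // $68,328,000
printf("%.2f%% available = %.1f hours down = $%s potential loss\n",
       $availability, $downtimeHours, number_format($potentialLoss));
?>
Re-running the same calculation with 99.9 shows how quickly the exposure shrinks as each additional 9 is added.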
Once the value of the availability of your IT services is determined, you have an invaluable reference tool for establishing availability requirements, justifying appropriate investments in availability management solutions, and measuring returns on that investment. The estimation process should: • Analyze by major application or by services provided: The major cost of an outage is the cumulative total of not having the applications available to continue business. • Determine the value of system availability: It is not easy to determine the cost of outages. The inaccessibility of each application or program has varying effects on the productivity of its users. Start with a reasonable estimation of what each critical application is worth to the business. Some applications are critical throughout major portions of the day, while others can be run any time or on demand. • Look at direct versus indirect costs: Direct costs are the time and revenue lost directly because a system is down. Indirect costs are those incurred by another department or function as a result of an outage. For example, a Background 9 marketing department may absorb the cost of a manufacturing line being shut down because the system is unavailable. This is an indirect cost of the outage, but it is nonetheless a real cost to the company. • Consider tangible versus intangible costs: Tangible costs are direct and indirect costs that can be measured in dollars and cents. Intangible costs are those for which cash never changes hands, such as lost opportunity, good will, market share, and so on. • Analyze fixed versus variable costs: Fixed costs are direct, indirect, tangible, or intangible costs that result from a failure, regardless of the outage length. Variable costs are those that vary with the duration of the down time, but that are not necessarily directly proportional. For more detailed calculations and methodology, refer to So You Want to Estimate the Value of Availability, GG22-9318. 1.6 iSeries factors for maximum availability The IBM ~ iSeries and AS/400e systems are renown for their availability due to a number of factors: • Design for availability: A single iSeries server delivers an average of 99.9+% availability. According to data collected by IBM over the last two years, AS/400 and iSeries owners have experienced an average of less than nine hours of unplanned down time per year. Figure 3 indicates two out of three factors of unplanned outages that can be affected by proper design. IBM delivers a very reliable server because the IBM Development team designs, creates, builds, tests, and services the iSeries and AS/400 systems as a single entity. • Effective system management process: As noted in Figure 4, 90 percent available translates to 36 days. Lack of attention to system management disciplines and processes affect the availability achieved. Availability solutions, such as clusters, are undermined when system management processes are lacking or nonexistent. An effective system management strategy ties heavily into automation, such as an automatic archival of data, continuous system auditing, responding to security exposures, and monitoring error logs, backup and restores, and so on. A quote by Gartner Group puts this in perspective: By the year 2003, “100% availability will remain elusive as user-controlled disciplines have an 40.0% 40.0% 20.0% Hardware, disasters, power Operator Error Application Failure Figure 3. 
An investment in system management disciplines, automation, and application recovery is necessary. Just a few additional hours of yearly downtime reduce availability from 99.99% to 99.9%.
• Increase automation: Increased automation means a reduction in the possibility of errors and recovery delays, and an increase in consistency. Human errors can create more down time than hardware failures or power outages. More effective automation through the use of automation software and tools can help offset an overburdened staff and allow them to attend to more unique and critical decisions and tasks. As availability requirements increase, investments in automation must also increase.
• Exploit availability techniques and applications designed for availability: Decrease unplanned outages and their effects by utilizing server availability options (for example, disk protection) and OS/400 functions, such as access path protection, file journaling, and user auxiliary storage pools (ASPs). Target a phased approach at increasing application resiliency (recoverability) and availability. As a general rule, an application on a non-clustered system is difficult to recover, and a cluster solution cannot overcome a poor application design. Use applications that incorporate commitment control or other transaction methods, and use application recovery to decrease application recovery times and database integrity issues (incomplete transactions). At a minimum, you must use journaling and application recovery techniques to achieve high availability. More sophisticated and highly available applications also use commitment control. Each technique is a building block for any highly available environment.
• Implement special solutions to meet your availability goals: To reach your availability goals, special solutions, such as iSeries or AS/400 clusters with monitoring, automatic switchover, and recovery automation, are implemented to control both planned and unplanned outages. If you sidestep the issues described above, even sophisticated options like clusters may not provide the highest possible availability levels. Small outages, such as recovering or reentering transactions, add up.
1.7 Scheduled versus unscheduled outage
Many Information Technology departments have a good understanding of disaster recovery protection, which covers unscheduled outages. Such protection can involve anything from dual-site installations to third-party vendors offering disaster recovery suites. Most of these installations provide protection from fires, floods, a tornado, or an airplane crash on the site. Instances of these dangers affecting the site are very rare but, if they do happen, they are catastrophic.
Scheduled outages can also impact a business’ financial well-being, and strong focus is warranted on planning for and minimizing the time involved in scheduled outages. To date there has been little focus in this area, except by hardware and software vendors.
1.7.1 Scheduled outages
A scheduled outage is a form of planned business unavailability.
Scheduled outages include a production line shutdown for maintenance, the installation of a new PABX, resurfacing the car park, or the entire sales team leaving town for a convention. All of these can influence the smooth operation of the business and, therefore, the customers. In I/T terms, these outages may be due to a hardware upgrade, operating system maintenance or upgrade, an upgrade of an application system, network maintenance or improvements, a workstation maintenance or upgrade, or even a nightly backup. The focus of outage discussion is shifting from unplanned outages (disasters, breakdowns, floods, and the like), to planned outages (primarily nightly backups, but also upgrades, OS/400 updates, application updates, and so forth). A growing solution is to consolidate individual systems onto large systems, rather than install distributed systems in departments or subsidiaries. This suggests that you may have users in different time zones, which can further minimize or eliminate the time available for routine operations, such as a backup. With a high availability solution and mirrored systems, customers can perform their daily backup on the backup system and let the users and work be done on the production system at the same time. 1.7.2 Unscheduled outages Unscheduled outages include obscure occurrences. As mentioned earlier, these can include such things as fires, floods, storms, civil unrest, sabotage, and other assorted happenings. These outages are fairly well recognized and many organizations have disaster recovery plans in place to account for these types of occurrences. Testing these disaster plans should not be overlooked. We identify this in Appendix G, “End-to-end checklist” on page 175. 1.8 Comparison to planned preventive maintenance (PPM) When researching this Redpaper, the authors found that there are only a few publications that relate to high availability for Information Infrastructure within organizations. They then looked laterally and found some good sources regarding practices and processes in the engineering aspect. For many years, most manufacturing companies ran their businesses based on sound planned preventive maintenance programs. These programs cover the plant systems and services that support the manufactured product. Companies have long understood terms such as resiliency, availability, mean time to failure, and cost of failure. To define the planned preventive maintenance schedule for a production line is a very complex operation. Some simple examples of the high level tasks that need to be performed include: • Documenting a business plan: This includes business forecasts, current product forecasts, new product forecasts, and planned business outages (vacation, public holidays). 12 High Availability on the AS/400 System: A System Manager’s Guide • Documenting environmental issues: This includes the frequency of power failures, fires, storms, floods. This also includes civil issues, such as unrest, strikes, layoffs, and morale. • Documenting the processes used: This includes throughput, hours of operation, longevity of the product, and planned product changes. • Taking inventory of all parts involved in the process: This includes purchase costs, purchase dates, history, quality, meantime to failure, replacement cycles, and part availability. 
• Documenting the maintenance and replacement process
• Estimating the cost for running the process and comparing it to the value of the product
• Estimating the affordability of a planned preventive maintenance program
• Documenting the resources required to manage the process: This includes job specifications, critical skills, and external skills.
These tasks can be compared to the following tasks in the Information Management area. Imagine these refer to an application that supports one part of the business:
• Documenting the business plan: This includes business forecasts, current business forecasts that the application supports, business growth in this application, and planned business outages (vacation, public holidays).
• Documenting environmental issues: This includes the frequency of power failures, fires, storms, and floods. This also includes civil issues, such as unrest, strikes, layoffs, and morale.
• Documenting the processes used: This includes throughput, hours of operation, longevity of the application, and planned application changes.
• Taking inventory of all parts involved in the application: This includes purchase costs, purchase dates, history, quality, application failures, replacement cycles, hardware required to run the application, software maintenance schedules, software fix times, time required to implement ad hoc fixes, and developer availability.
• Documenting the maintenance and replacement process
• Estimating the cost for running the application and comparing it to the value of the business area
• Estimating the affordability of making the application highly available
• Documenting the resources needed to manage the application: This includes application specifications, job specifications, critical skills, and external skills.
This information can be broken down into smaller pieces that can be applied to any business.
1.9 Other availability definition considerations
Designing a solution for high availability requires a practical knowledge of the business. The solution should fit the design of the business operations and structure. Does the organization have an accurate and approved business process model? If it does, a major portion of the solution design is easy to plan. If there is no business process model, do not make assumptions at this stage. It is critical to get accurate information since this is the foundation of your plan. Identify the systems and end users of each part of the business process model. Once they are identified, you have the names of the people who can relate, in practical terms, just how high “high availability” needs to be.
Chapter 2. Developing an availability plan
For maximum availability, it is very important that planning is performed for the operating system, hardware, and database, as well as for applications and operational processes. As indicated in Figure 5, the level of availability is affected by products and the use of technology. An unreliable system, poor systems management, security exposures, lack of automation, and applications that do not provide transaction recovery and restart capability weaken availability solutions. Achieving the highest possible level of availability is accomplished using clustering software, hardware solutions, and the planning and management of high availability products. An availability plan also needs to consider other factors that influence the end result, such as organizational and political issues.
It is important to understand the challenges involved in each implementation phase to find the most appropriate tools and techniques for the customer environment. The need for higher availability can be relaxed by determining what a business actually needs and what is possible using the available technology. In reality, most applications can withstand some planned outage, either for batch work, backups, and reorganization of files, or to effect application changes. In most cases, the application is expected to be available at the host, or perhaps on the network that is owned and controlled by the organization. However, continuous availability in the environment outside the control of the organization (that is, on the Internet) is not normally included in the business case.
This chapter explores the basic recommendations for planning for system availability. Start by looking at the needs of the business and the information necessary to support it. These actions are separated into the following areas:
• Reviewing the business plan
• Understanding the human resources issues
• Dealing with third party contracts
2.1 The business plan
This section gives the reader an insight into the information that is gathered from the business. Some of this is found in the business plan and some from investigating individual departments. Why do you need all this information? To make the business and application highly available, you must take a wider view than just maintaining access to the information (system). It is not enough to have the most highly available system if the telephone system fails and no one can contact your company.
Figure 5. Essentials for maximum server availability: (1) products designed for reliability, (2) effective systems management, (3) automation, (4) application design and techniques, (5) iSeries and AS/400 clusters. Both the level of availability and the cost rise as these layers are added.
When you gather this information or interview staff within the company, you may be questioned as to why you need it. Some co-workers may deny access to the information itself, while others may roadblock your access to the people with the information. In cases such as these, your executive sponsor plays a very important role. The roadblocks you find can be easily avoided with the sponsor’s help.
2.1.1 Project scope and goal definition
The business case, the driving reason for the high availability project, can be used as the starting point for defining the goals and understanding the scope of the project. Requirements and goals should be gathered from any department or party affected by system management. The project team must prioritize the requirements according to their urgency, importance, and dependencies so that it can define the short-term and long-term goals. Success criteria and completion criteria should be defined appropriately. Quantifiable results are essential. Even if a benefit is intangible, such as customer satisfaction, you need to provide concrete methods to measure it.
To verify the requirements and goals, several starting points can be used:
• Look for established solutions for special requirements.
• Find out whether it is possible to develop a solution in a reasonable amount of time and with a reasonable amount of resources.
• Visit reference sites.
• Discuss the requirements with experienced consulting teams.
After this work is done, the first draft of the project definition is ready.
Show it to the executive steering committee to get feedback as to whether it reflects their vision. The results from this meeting are the central theme for elaborating on the detailed project definition. Input for this step includes: • Requirements • All information • Type of politics • Vision The result is a clear and unambiguous definition of the scope and the goal definition. 2.2 Human resources Producing an availability plan is not a single-threaded process, and it can not be built on one person’s view. It is critical to build a team. The team should represent the needs of those they represent. Members must understand the department problems, be a voice for their challenges, be able to communicate their needs, and support an action plan. A good project group is critical to this type of planning project. The activities of the business are diverse, represented by members bringing a diverse set of needs and views regarding both the problems and solutions, Developing an availability plan 17 There will be some very tough times, long hours, and unease as the project progresses. Determination, empathy, and leadership are valued skills for the team makeup. Take a look at existing resources with an emphasis on your existing staff. In many cases, the skills required can be found within your staff. In other cases, it makes more sense to look outside the staff for the required talents. The final makeup of the team may represent both personnel inside and outside the company (or the department, if the study is of a more-focused nature). 2.2.1 Project organization The organization of the project team can be categorized in two parts: the members of the team, and the other organization parties involved. Many different skills are required on the team. Criteria for the suitability of each team member includes their skills as well as their availability and personal characteristics, such as how they work on a team, negotiation skills, and knowledge of the technical solution, as well as the workings of the company. A suitable project manager should be selected from within or outside the company. The project manager is the “face” of the project and is responsible for the total health of the project, ensuring that the objectives and goals are met in the highest quality fashion within the specified time schedule and budget. When the team is built, a lot of communication is necessary. Regular status meetings should be held to communicate and assess the actual status of the project, checkpoint the progress, make short-term decisions, and assess remaining activities. The sponsoring executives should be informed of the status in regular meetings. Consider providing an office workspace for the project team, and supply it with applicable reference material for the subject area and company. Sometimes management chooses a project manager from outside the company. Sometimes an organization expert joins the project team to provide the project manager with knowledge about the internal structures of the company. 2.3 Communication and sponsorship The support of the company’s management to the project is critical to its success. The ultimate responsibility for the project should lay with an executive steering committee, or an executive serving as a contact between the team and executives. The steering committee represents the management involvement and sponsorship. 
It can be used to solve complications with other parts of the organization, communicate expectations to affected company parties, and provide decision making power. The vision of the project scope and objectives should be made clear to everyone involved. This vision must be broadly communicated through the entire organization so that everyone can be aware of the consequences the project has for them. Reactions, especially negative, should be taken seriously and should be used to reach a clearer project vision. Pay special attention to communication because this can prompt people to bring their ideas into the project. 18 High Availability on the AS/400 System: A System Manager’s Guide Non-technical aspects should be emphasized in the project plan so that non-technical people involved can better adopt the project as their own. If it is appropriate, consider having a customer or supplier representative on the team. 2.4 Service level agreements Service level agreements are contracts with business departments within your company and, in many cases, businesses with whom you have an external contractual relationship. If your network resources are not up and running, you can not keep these commitments. Within your company, service requirements probably differ by department. For example, your accounting department may be able to tolerate down time of the accounting applications for up to one hour before it starts negatively impacting department operation. Likewise, marketing may agree to a calendar three-hour down time, but it can allow only an hour of down time on a customer-reference database. You make service level agreements to track your IT organization’s performance against business requirements. Based on these agreements, set priorities and allocate limited IT resources. By linking your IT department to your service desk, you can manage complex relationships among user problems, corporate assets, IT changes, and network events. System management can provide a centralized view of your current asset inventory. This enables your analysts to correctly analyze and resolve problems. Help desk analysts work with administrators to plan and manage the effects of such IT changes as deploying and upgrading applications. System management involves tracking, logging, and escalating user interactions and requests. 2.5 Third party contracts Service level agreements outside your company can mean a loss of business to a competitor if you can’t meet your commitments. This can have serious consequences for your company. Most organizations have contracts with external suppliers. However, the third parties are not all typically under the control of one department. As a Systems Manager implementing a high availability solution, you interface with many different aspects of your organization to gather the required information and possibly change the contracts to meet your new needs. Contractors or resources utilized from outside the company represents programmers, technicians, operators, or a variety of consultants. 2.5.1 Application providers The AS/400 system gets its name from Application System. The majority of AS/400 customers have one or many applications installed on their AS/400 system and the attached workstations. When reviewing the application, establish how the provider supports your business and determine the required level of support. This can range from a 9 a.m. to 5 p.m. 
telephone support provider contract, to employing developers Developing an availability plan 19 skilled with the particular application. The more critical the application, the higher level of skills that are required to perform problem determination that enables the fix to be resolved in the shortest possible time. The application provider should be able to define the rough release schedule for the application. This allows you to plan for application system updates. They should also be able to provide varying levels of fix support if the application fails in some way. Develop an escalation process for critical failures. Think through and document what steps to follow to recover from a critical failure. Note: This information applies to applications written outside the company. The same considerations apply for applications developed in-house. 2.5.2 Operating system provider Operating system suppliers are similar to application providers. However, the opportunity to enhance the operating system code typically occurs more frequently than application providers code. Enhancements to the OS/400 operating system typically occur with an annual frequency. Updates occur more frequently on application software. 2.5.3 Hardware providers Hardware provided by non-IBM distribution channels can include: • CPUs • Towers • Tapes • DASD • IOPs • Workstations • Printers Contracting with hardware providers is a relatively simple method to utilize resources outside the company. The contractor’s reliability is normally well known and high. Maintenance organizations are highly skilled and can provide high levels of service, depending on the cost. At minimum, the hours of coverage for support should match your planned availability requirements. Many large customers have arrangements for key supplies to be “warehoused” at the customer site. In other words, the supplies are owned by the maintainer but are stored at the customer site. Be careful when ordering new hardware. Order only with a goal that ensures availability of spare parts early in the product life. Demand for a product can be so great that there are no spare parts available. 2.5.4 Peripheral equipment The hardware components of a computing system go beyond the central processing unit and controllers. To reach end users, a computing system includes peripheral equipment, such as workstations, printers, routers, and modems. 20 High Availability on the AS/400 System: A System Manager’s Guide Maintaining peripheral equipment can be difficult. You must judge the benefits of maintaining parts and components on-site for fast replacement, compared to a repair contract for the main equipment or a combination of both. Consider the longevity of the peripherals. For example, when printers, displays, modems and such are replaced, it is not uncommon to update the technology. Sometimes, a replacement is made as an upgrade of functionality, either because the former model is withdrawn from sales and support, or the needs of the users require more function than the broken unit. It is not uncommon (perhaps even typical) for the replacement unit to provide equivalent or more function for less cost to the business. At first glance, replacing technology may not seem to be a big problem. For example, consider the case of a printer connected to a personal computer that is used by others within the network. Assume this departmental printer fails and is non-repairable. One solution is to simply replace the printer. 
However, after the new printer arrives and is connected, it needs a new driver loaded on the server operating system. The load of the new driver can raise a number of significant issues: • Is an IPL required to be recognized by the system? • Is a configuration required? • Do the applications work with the new driver? • What if the load causes the server to fail? This circumstance can result in driver loads across the whole network that reduce the hours of productive business. Therefore, even the smallest components of the total environment can have a major impact. It is advised that you plan for some redundancy, including when and how you carry out bulk replacements. 2.5.5 Facilities Facilities include machine rooms, power supplies, physical security, heating and air-conditioning, office space, ergonomics, and fire and smoke prevention systems to name but a few. The facilities that complete the total environment are as complex as the applications and computer hardware. 2.5.5.1 Site services It is important to work alongside the site facilities personnel and to understand contracts and service levels. For example, turning off the air conditioning for maintenance can potentially have a disastrous effect on the systems in the machine room. The contract with facilities should include spare air-conditioners or the placement of temporary mobile conditioners. 2.5.5.2 Machine rooms A good system manager has a well documented understanding of the machine rooms and the equipment within. The same redundancy requirements should be placed on critical services just as there are on computer systems. Developing an availability plan 21 2.5.5.3 UPS The use of Uninterruptible Power Supplies (UPS) has grown tremendously over the past ten years. The cost of a small UPS has reduced and they are now very affordable. When considering a UPS, keep in mind these three broad areas: • Machine Room: The machine room is a prime candidate for a UPS because it often requires continuous power. The ultimate solution is to generate your own power. Ideally, the national power supply serves only as a standby, with limited battery backup. Some customers do have this arrangement, but it is an expensive solution. There are simpler solutions that provide nearly the same level of service. Consider switching to a generated power environment. When the power fails, the standby generator starts. Unfortunately, there is a time lag before the system comes on line. Therefore, an interim battery backup is needed to support the systems while the generated power comes online. To register that the power has failed and then switch between battery, generator, and back to normal power requires a complicated and expensive switch. It may also require links to the systems to warn operators of the power failure. Another area that is often overlooked is the provision over power supplies for other equipment in the machine room, for example, consoles, printers, and air-conditioning. It is not enough to have a UPS for the system if there is no access to the console. In an extreme condition, you may be unable to shut the system down before the battery power fails if there is no UPS support for the console. • The site: When considering the site, you must look at all areas, for example: – If you have a disaster recovery service, is there space to park the recovery vehicle in the car park close to the building to attach network cabling? 
– In some buildings, access to the machine room is a problem and systems must be moved through windows via a crane because the lifts are too small or cannot take the weight.
– Are there high availability facilities in the general office space, such as an emergency telephone with direct lines in case the PABX fails? This allows the business to continue even if the desk phones fail.
When developing the availability plan, document these restrictions and plan for circumventions.
• The workstation: Key users may need backup power to their workstation if there is no full standby generation. Looking at workstations from an ergonomic view, you may find potential issues that can result in long-term unavailability of the human resources. An example is repetitive strain injury. This can severely impact your critical human resources in a company and cost a significant amount in litigation. It is worth investing time and money to solve these problems when planning for availability.
2.6 Verifying the implementation
The importance of verifying (by testing) the proposed high availability solution before putting it into production cannot be over-emphasized. Several areas of verification, or testing, are involved; some of the activities are explained in the following list:
• Build a prototype: A prototype is a simulation of the live production environment. Develop a prototype to test the quality of the high availability solution. Make sure it can be easily reproduced. This lowers the risk of disturbing production during installation rollout.
• Regression tests: Ensure that the replicated solution works in the same way as the prototype by performing a number of regression tests in the new environment. These same tests should be used in the production environment to ensure the quality of the final high availability implementation.
• Disaster recovery test: Develop routines to ensure that a disaster recovery is smooth and as fast as possible. This requires a real crash test with detailed documentation of the steps required to get the system up and running, including how long it takes and where to find backup media and other required material.
• Volume test: Few companies have the resources to build a test environment large enough to do realistic tests with production-like volumes. Some recovery centers and business partners offer the use of their environments for performing tests in larger volumes. This is an important step to help ensure that the system behaves as expected during actual production.
2.6.1 Documenting the results
To retest the systems after a problem has been fixed, it is important to have every test situation well documented. The documentation should include (a minimal sketch of such a record appears below):
• Hardware requirements
• Software requirements
• HAV requirements
• A test case category
• Tools required to perform the test
• A list of steps to execute the test case
• Results of the test (pass or fail)
• The name of the individual executing the test case
• The date on which the test case was executed
• Notes taken while the case is executed
• Anticipated results for each step of the test case
• Actual results
• Comments or general notes for each test case
• A list of any problem records opened in the event that the test was not successful
2.7 Rollout
The testing sequence ends with the confirmed rollout of the solution. The rollout is another milestone in the overall project. For the first time, the production environment is actually affected.
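To make the documentation checklist in 2.6.1 concrete, here is a minimal sketch of a test-case record that captures the fields listed there. It is illustrative only; the structure, field names, and sample values are assumptions, not part of any IBM-defined tool or format.

# Hypothetical test-case record covering the documentation fields listed in 2.6.1.
# The layout and field names are illustrative assumptions, not an IBM-defined format.
test_case = {
    "category": "Disaster recovery test",
    "hardware_requirements": ["Production AS/400", "Backup AS/400", "Tape unit"],
    "software_requirements": ["OS/400", "Replication software"],
    "hav_requirements": ["Mirrored target system available"],
    "tools": ["Switchover scripts", "Timing log"],
    "steps": [
        "Stop the replication apply process on the target system",
        "Switch users to the backup system",
        "Verify that critical applications start and data is current",
    ],
    "anticipated_results": ["Users active on the backup system within 30 minutes"],
    "actual_results": [],          # filled in during execution
    "result": None,                # "pass" or "fail"
    "executed_by": "",
    "executed_on": "",             # date of execution
    "notes": [],                   # taken while the case is executed
    "comments": "",
    "problem_records": [],         # opened if the test was not successful
}

Keeping every executed case in this form makes it straightforward to rerun the same test after a fix and compare the anticipated and actual results.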
A well designed rollout strategy is crucial. Many things can impact the success of the rollout, so carefully check all prerequisites. In a typical rollout, a great number of people are affected. Therefore, communication and training are important. For the rollout into production volumes, sufficient support is required; a minor incident can endanger the production rollout.
The basic infrastructure of the company influences the rollout process, and the situation differs depending on the industry area. The risk of problems increases with the number of external factors and the complexity of the system. The ideal case is a homogeneous environment with all equipment owned by the company, including a high-performance network. All success factors can then be planned and controlled within individual departments.
The main issue is finding the appropriate rollout strategy. The project can begin in phases or all at once. A test scenario can validate the proper rollout. As the final proof of concept, a pilot rollout should be considered. Consider business hours and critical applications. Minimize the risk by taking controllable steps. The rollout can include down time for the production system. Therefore, timing must be negotiated with all concerned people. The rollout can take place inside or outside of business hours. Remember that some of the required prerequisites may not be available at these times. A project planning tool is helpful. Lists of resources, availability of resources, time restrictions, dependencies, and cost factors should be documented. Provide a calendar of activities, black-out dates, and milestones.
Chapter 3. High availability example solutions
As companies increase their dependency on technology to deliver services and to process information, the risk of adverse consequences to earnings or capital from operational failures increases. If technology-related problems prevent a company from accessing its data, it may not be able to make payments on schedule, which would severely affect its business. Costly financial penalties may be incurred, the company’s reputation may suffer, and customers may not be able to deliver services or process information of their own. The company may even go out of business altogether. Technology-related problems can increase:
• Transaction risk: This arises from problems with service or product delivery
• Strategic risk: This results from adverse business decisions or improper implementation of those decisions
• Reputation risk: This has its source in negative public opinion
• Compliance risk: This arises from violations of laws, rules, regulations, prescribed practices, or ethical standards
Every customer has their own unique characteristics. The implementation of a high availability solution is customized to customer needs and budgets, while keeping in mind the risks of encountering and recovering from problems. This chapter describes examples of customer requirements and the high availability solutions they chose.
3.1 A high availability customer: Scenario 1
A Danish customer with 3,000 users on a large AS/400 system had difficulty finding time to complete tasks that required a dedicated system for maintenance. These tasks included such operations as performing nightly backups, installing new releases, and updating the hardware.
One reason this challenge occurred is that the customer’s AS/400 system was serving all of their retail shops across Scandinavia. As these shops extended their hours, there was less and less time for planned system outages. They solved their problem by installing mirroring software on two AS/400 systems. To increase availability, the customer bought a second AS/400 system and connected the two machines with OptiConnect/400. Next, they installed mirroring software and mirrored everything on the production machine to the backup machine. In the event of a planned or an unplanned system outage, the system users could switch to the backup machine in minutes. This solution made it easy for the shops to expand their hours and improve sales without losing system availability or sacrificing system maintenance.
3.2 A large financial institution: Scenario 2
Financial services typically require the most highly available solutions. Risks affect every aspect of banking, from the interest rates the bank charges to the computers that process bank data. All banks want to avoid risk, but the risk of equipment failure and human error is present in all systems. This risk may result from sources both within and beyond the bank’s control.
SAAA Bank of Madison, Wis., is no exception. SAAA specializes in commercial and institutional banking, and it provides comprehensive financial services to other banks, governments, corporations, and organizations. It has received AAA, Aaa, and AAA ratings from Standard and Poor’s, Moody’s, and IBCA (Europe’s leading international credit rating agency), respectively. It has offices around the world, with over 5,200 employees working in commercial centers worldwide.
Complete details and a general outline of SAAA Bank’s new hot backup configuration are provided in the following list:
• An original production AS/400e Model 740 in Madison
• A Model 730 backup AS/400e at IBM’s Chicago Business Recovery Services Center
• Both machines running the same level of OS/400 (V4R5)
• High availability software applications transferring data and objects from the customer applications on the production machine to those on the backup machine
• TCP/IP communications protocol used for communications with the BRS Center
• A T1 line from the Madison branch to IBM’s POP server in Chicago, where IBM is responsible for communications from the POP server to the BRS center
Prior to 1997, the bank took the following steps to ensure business continuity: • Running regular backups, even making midday backups • Adding a redundant token-ring card to prevent system failure • Eliminating cabling problems by using an intelligent hub • Providing dual air conditioning systems to address environmental variables High availability example solutions 27 • Installing universal power supplies (UPSs) to permit system operations during electrical failure • Providing RAID5 to maintain parity information across multiple disks • Installing a second RAID controller to manage the array when the original controller fails • Leasing twenty seats at the IBM Business Recovery Services (BRS) Center in Sterling Forest, NY, so that the essential work of the bank can be carried out in the event of a disaster After recognizing the bank’s exposure to possible data loss, the bank chose MIMIX Availability Management Software. Running high availability software over a WAN to the BRS Center protects from all risks of data loss. The high availability software solution at SAAA Bank was modified to meet the demands of the bank’s daily schedule. This adjustment was necessary because the applications required journaling to be turned off at the end of the day to facilitate close-of-business processing. This is a characteristic that makes them somewhat difficult to replicate. The bank accomplished this through an elaborate process. During the day, the bank turned on AS/400e journaling capabilities, but at the end of the business day, the bank shut down MIMIX and the journal files for both systems. At about 6:30 p.m., the bank ran close-of-business processing on both machines simultaneously. This way their databases remained synchronized. About two hours later, when the processing was complete, the bank restarted journaling on both machines and MIMIX was brought back up. The next morning, high availability software started replicating the day’s transactions. The entire process was automated, but an operator was always available in case of an emergency. The flexibility of MIMIX enabled the bank to keep the databases in synch at all times. In 1998, a potentially serious incident occurred at the bank. An operator working on the test system meant to restore some data to the test libraries. Unfortunately, they were copied and loaded into the production system. This overwrote all payments that had to be sent out later that afternoon, leaving the bank in a critical position. After detecting the error, the operator immediately switched over to the mirrored system on the backup machine located at the BRS center. That system held a mirrored, real time copy of all the bank’s transactions for the day. By clicking a button, the operator initiated a full restore from the backup system to the production system, ensuring synchronization of the two databases. This process required only about 25 minutes. Without high availability application software (MIMIX) the bank would have had to restore from a mid-day backup (which requires about 10 hours worth of work). In addition, about 20 bank staff members would have had to return to the bank and re-enter all the lost transactions at overtime rates. The estimated costs of these activities was around $28K. Even greater costs would have resulted from the one day interest penalties for late or nonexistent outgoing payments. The fastest and most comprehensive way to ensure maximum uptime and high availability is through added redundancy and fault-tolerant models. 
With the 28 High Availability on the AS/400 System: A System Manager’s Guide implementation of high availability software (MIMIX) at the BRS Center, the bank achieved the highest level of disaster prevention and continuous operations in over 20 years. The strategy provided the availability management, flexibility, and reliability needed to maintain business continuity in the event of a disaster at the location. 3.2.1 Benefits Looking back, SAAA Bank found that using high availability management software at the BRS center provided these benefits: • Maintained uninterrupted business processing, despite operator errors and environmental disasters, such as floods, fires, powers outages, and terrorist acts • Dramatically reduced the cost of disasters, especially disasters that arise at the user level • Sustained the bank’s reputation for reliability: Because the bank acts as a clearing house for several other banks, SAAA Bank had to ensure that the system was operational and that payments were sent out in a timely manner • Simplified the recovery process: High availability software freed the IS department from lengthy restore projects and eliminated the need to spend time and money re-entering lost data • Enabled the IS department to switch to the backup database at the BRS center when required 3.3 A large retail company: Scenario 3 Retailing is a new area for very high availability. As retail companies enter the area of e-commerce, the business demands are considerable. Companies operating in a global marketplace must have their systems online 24 hours a day, every day, all year long. This section describes an example of a large retail company, referred to as EDA Retail for the purposes of the discussion. EDA Retail is a Danish lumber wholesale business. They also own and run a chain of hardware retail stores across Denmark, Norway, and Sweden, supplying everything for the do-it-yourself person. These hardware stores run a Point-of-Sale (POS) solution from IBM, which connects to a large AS/400 system. At the time this information was gathered, the system was a 12-way model 740 with 1 TB of DASD. All inventory and customer data is stored on this one AS/400 system. The connection to the AS/400 system is the lifeline to all of the stores. If the connection to the AS/400 system is down, or if the AS/400 system is not up and running at all times, EDA Retail employees cannot take returns, check inventory, check customer data, or perform any related functions while selling goods to their customers. The unavailability of information developed into a unacceptable problem for EDA Retail. They had grown from a big to a very large enterprise during the four years from when they installed the AS/400 system. Management determined that a breakdown of the AS/400 system would cost the enterprise about 40,000 U.S. High availability example solutions 29 dollars per hour. In addition, planned shutdowns, such as that necessary for the daily backup, system maintenance, upgrades, and application maintenance were becoming a problem. The problem multiplied when a chain of stores in Sweden was taken over and expected to run on the EDA Retail’s system. These acquired stores had longer opening hours than the corresponding stores in Denmark, which increased the necessity for long opening hours of the AS/400 system at EDA Retail. The IBM team did a thorough analysis of EDA Retail’s installation in order to develop an environment with no single point of failure. 
At this time, the IBM team responsible for the customer proposed an extra system for backup, using dedicated communication lines between the two systems. The proposed solution involved a backup AS/400 system in a new, separate machine room of its own, and an OptiConnect line between the two AS/400 systems. A key feature in this environment was a High Availability solution from Vision Solutions. This solution mirrors all data on the system to a target (backup) system on a real time basis. In addition to the necessary hardware, this allows EDA Retail to become immune to disasters, planned shutdowns (for example, system maintenance), and unplanned shutdowns (such as system failures). In the case of a shutdown, either planned or unplanned, EDA Retail can switch users to the backup system. Within thirty minutes of the breakdown, operations continue. The primary test of the project was a Role Swap, in which roles are switched between the two systems. The system normally serving as the source system becomes the target system, and vice versa. When the roles are switched and mirroring is started in reverse mode, the user’s connection to the new source system is tested to determine if they can run their applications as they normally would. A success is illustrated when the correct data, authorizations, and other objects or information are successfully mirrored, with the transactions also mirrored to the new target system. The successful completion of a second test of this nature marked the end of the implementation project and the beginning of normal maintenance activities. The users had access to the information they required even while the backup was taken. Their information remained available at all times except during the role swap. 3.4 A small manufacturing company: Scenario 4 This solution applies to manufacturing and to any small business with limited information technology resources. These firms do not typically operate seven days a week, but they may operate 24 hours per day, Monday through Friday. The business requires that the systems be available when they are needed with a minimum of intervention. If the system fails, they need a backup system on which to run. They can then contact their external I/T services supplier for support. This support can take a day or so to arrive. In the meantime, they run their business with unprotected information. 30 High Availability on the AS/400 System: A System Manager’s Guide It is important to note that they are still running in this situation. The third party services provider then arrives, fixes the problem, and re-synchronizes the systems. DMT Industries is a manufacturer of medical supplies which has become highly globalized over the last several years. Their globalization raises the demand on the IT division to provide a 24 x 7 operation. DMT Industries has various platforms installed. One of the strategic business applications, which must be available at all times, is running on the AS/400 platform. A total of 1,500 users from all over the world have access to the system, which is located in Denmark. Until early 1998, DMT Industries could offer only 20 hours of system availability per day. The remaining four hours were used for the nightly backup. As it became necessary to offer 24 hour availability, they decided to implement a high availability solution. The installation then consisted of two identical AS/400 systems, located in separate machine rooms, a dedicated 100 Mbit Ethernet connection between them, and software from Vision Solutions. 
This enabled DMT Medical to perform their nightly backup on the backup system to which all production data was mirrored. They simply stopped the mirrored data from the production machine from being applied on the backup system while the backup ran. The users could continue operations on the production system as required. 3.5 A distribution company: Scenario 5 This example represents a three-tiered SAP solution (reportedly the largest site worldwide). TVBCo is a distribution company utilizing J.D. Edwards (JDE) applications. The company focuses on expanding knowledge about human protein molecules and transforming them by means of biotechnological methods and clinical tests. TVBCo had developed into a completely integrated worldwide pharmaceutical business. Its European Distribution Center in Holland houses a central IBM AS/400e production system, accessed by a large number of TVBCos branches in Europe. A second AS/400e machine is used for testing, development, and backup. The production AS/400e system plays a critical role in TVBCos European operations, so the company wanted to reduce its downtime to a minimum. In cooperation with IBM, TVBCo started developing an availability management plan and purchased two AS/400e systems. MIMIX availability management software solution was installed. High availability software immediately replicated all production system transactions on a defined backup system and performed synchronization checking to verify data integrity. Critical data and other items were always present and available on the backup system so that users could switch to that system and continue working if an unplanned outage occured. Aside from the immediate safekeeping of all data, high availability software maintained an accurate, up-to-date copy of the system setup on the back-up machine. For that purpose, all user profiles, subsystem descriptions, and other High availability example solutions 31 objects were replicated. High availability software also controlled the production system and the actual switchover in case of a failure. The backup machine monitored whether the production machine was still on. The moment it detected that something was wrong, it warned the system operator, who could decide to switch over. When that decision was made, all necessary actions had already been set out in a script so that the operator only had to intervene in exceptional cases. The TVBCo customized procedure included sending warnings to all users, releasing the backup database, activating subsystems and backup users, and switching interactive users to the backup system. TVBCo soon decided to start using high availability software as its worldwide standard for processing and invoicing. The data changed continuously, and a traditional backup could not be produced because too few stable points existed in the day. With the chosen high availability solution, the backup was made while the file was active. Therefore, taking the application offline was not necessary. In addition, the high availability solution backup ran, on average, only a couple of seconds (sometimes even less) behind production. Contrast this if a tape were made every hour (the backup could have run up to one hour behind in real time). The second application, an electronic data interchange (EDI) package, automates the purchase of supplies. This package retrieves batch data from the computer networks managed by the company’s suppliers. This process is not continuous so a more traditional backup method may be suitable. 
However, after estimating the considerable effort required to adapt the batch protocols and DDM files, it was decided to use high availability software. Because their chosen high availability software solution kept the backup data current (almost to the second), the company did not need to request that the batches be sent again when a problem occured with the production machine. Also, high availability software allowed the backup to be made in batches (for example, to relieve the network during peak hours). The third critical application replicated by high availability software is a DSI logistics program. Since this application not only generates packing lists and is used for updating the contents of the warehouse in the J.D. Edwards database, it must have information about the most recent setup of the warehouse. This information must, of course, remain available when the production equipment breaks down. Otherwise, the orders may be processed but the forklift operators would not know where to unload the merchandise. Like the J.D. Edwards application, the input and output of this application (and, therefore, the replicating) is a real time process. The data from the J.D. Edwards application and the IBM application are copied to the backup while the file is active. The batch processes from a variety of platforms, are copied to the AS/400e production system at scheduled intervals. With high availability software, it is not necessary to take the application offline because the backup is made while the file is active. High availability software also controls the production system and actual switchover in case of a failure. TVBCos data center is supported by a staff of 70. The hardware consists of two AS/400e Model 500 systems connected with TCP on ethernet adapters. 32 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 33 Part 2. AS/400 high availability functions Many areas of a system implementation contribute to the availability rating. Each option provides differing degrees of availability. Part II discusses the hardware, storage options, operating system features, and network components that contribute to a high availability solution. Appendix F, “Cost components of a business case” on page 169, provides a reference to compare the availability options of journaling, mirrored protection, device parity protection, and others. 34 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 35 Chapter 4. Hardware support for single system high availability The selection of hardware components provides a basis for system availability options. This chapter discusses some of the considerations for protecting your data and available features and tools from a hardware perspective on a single system and options using multiple systems. Hardware selections greatly contribute to a system’s high availability characteristics. This chapter discusses: • Data protection options • Concurrent maintenance • Hot spares • OptiConnect • Clusters • LPAR • Power options • Tape devices 4.1 Protecting your data Your first defense against data loss is a good backup and recovery strategy. You need a plan for regularly saving the information on your system. In addition to having a working backup and recovery strategy, you should also employ some form of data protection on your system. 
When you think about protecting your system from data loss, consider the following:
• Recovery: Can you retrieve the information that you lost, either by restoring it from backup media or by creating it again?
• Availability: Can you reduce or eliminate the amount of time that your system is unavailable after a problem occurs?
• Serviceability: Can you service it without affecting the data user?
Disk protection can help prevent data loss and keep your system from stopping if you experience a disk failure. The topics that follow provide information on the different types of disk protection, as well as using the types with one another.
4.2 Disk protection tools
Several disk availability tools are available for reducing or eliminating system downtime. They also help with data recovery after a disk failure.
Remember: Although disk protection can reduce downtime or make recovery faster, it is not a replacement for regular backups. Disk protection cannot help you recover from a complete system loss, a processor failure, or a program failure.
The tools include:
When you start mirrored protection or add disk units to an ASP that has mirrored protection, the system creates mirrored pairs using disk units that have identical capacities. Data is protected because the system keeps two copies of data on two separate disk units. When a disk-related component fails, the system can continue to operate without interruption by using the mirrored copy of the data until the failed component is repaired. The overall goal is to protect as many disk-related components as possible. To provide maximum hardware redundancy and protection, the system attempts to pair disk units that are attached to different controllers, input/output processors, and buses. If you have a multi-bus system or a large single-bus system, consider using mirrored protection. The greater the number of disk units attached to a system, the more frequently disk-related hardware failures occur. This is because there are more individual pieces of hardware that can fail. Therefore, the possibility of data loss or loss of availability as a result of a disk or other hardware failure is more likely. Also, as the amount of disk storage on a system increases, the recovery time after a disk storage subsystem hardware failure increases significantly. Downtime becomes more frequent, more lengthy, and more costly. Mirroring can be used on any AS/400 system model. The system remains available during a failure if a failing component and the hardware components that are attached to it have been duplicated. See 4.8, “Concurrent maintenance” on page 54, to better understand the maintenance aspect. Remote mirroring support allows you to have one mirrored unit within a mirrored pair at the local site, and the second mirrored unit at a remote site. For some systems, standard DASD mirroring remains the best option. Evaluate the uses and needs of your system, consider the advantages and disadvantages of each type of mirroring support, and decide which is best for you. Refer to B.2.3, “Mirrored protection: How it works” on page 151, for a more detailed description. 4.3.1 Standard mirrored protection Standard DASD mirroring support requires that both disk units of the load source mirrored pair (unit 1) are attached to the multi-function I/O processor (MFIOP). This option allows the system to initial program load (IPL) from either load source 38 High Availability on the AS/400 System: A System Manager’s Guide in the mirrored pair. The system can dump main storage to either load source if the system terminates abnormally. However, since both load source units must be attached to the same I/O processor (IOP), controller level protection is the best mirroring protection possible for the load source mirrored pair. How the AS/400 system addresses storage Disk units are assigned to an auxiliary storage pool (ASP) on a unit basis. The system treats each storage unit within a disk unit as a separate unit of auxiliary storage. When a new disk unit is attached to the system, the system initially treats each storage unit within as non-configured. Through Dedicated Service Tools (DST) options, you can add these nonconfigured storage units to either the system ASP or a user ASP of your choice. When adding non-configured storage units, use the serial number information that is assigned by the manufacturer to ensure that you are selecting the correct physical storage unit. 
Additionally, the individual storage units within the disk unit can be identified through the address information that can be obtained from the DST Display Disk Configuration display. When you add a nonconfigured storage unit to an ASP, the system assigns a unit number to the storage unit. The unit number can be used instead of the serial number and address. The same unit number is used for a specific storage unit even if you connect the disk unit to the system in a different way. When a unit has mirrored protection, the two storage units of the mirrored pair are assigned the same unit number. The serial number and the address distinguish between the two storage units in a mirrored pair. To determine which physical disk unit is being identified with each unit number, note the unit number assignment to ensure correct identification. If a printer is available, print the DST or SST display of your disk configuration. If you need to verify the unit number assignment, use the DST or SST Display Configuration Status display to show the serial numbers and addresses of each unit. The storage unit that is addressed by the system as Unit 1 is always used by the system to store licensed internal code and data areas. The amount of storage that is used on Unit 1 is quite large and varies depending on your system configuration. Unit 1 contains a limited amount of user data. Because Unit 1 contains the initial programs and data that is used during an IPL of the system, it is also known as the load source unit. The system reserves a fixed amount of storage on units other than Unit 1. The size of this reserved area is 1.08 MB per unit. This reduces the space available on each unit by that amount. 4.3.2 Mirrored protection: Benefits With the best possible mirrored protection configuration, the system continues to run after a single disk-related hardware failure. On some system units, the failed hardware can sometimes be repaired or replaced without having to power down the system. If the failing component is one that cannot be repaired while the system is running, such as a bus or an I/O processor, the system usually continues to run after the failure. Maintenance can be deferred and the system can be shut down normally. This helps to avoid a longer recovery time. Even if your system is not a large one, mirrored protection can provide valuable protection. A disk or disk-related hardware failure on an unprotected system leaves your system unusable for several hours. The actual time depends on the Hardware support for single system high availability 39 kind of failure, the amount of disk storage, your backup strategy, the speed of your tape unit, and the type and amount of processing the system performs. 4.3.3 Mirrored protection: Costs and limitations The main cost of using mirrored protection lies in additional hardware. To achieve high availability, and prevent data loss when a disk unit fails, you need mirrored protection for all the auxiliary storage pools (ASPs). This usually requires twice as many disk units. If you want continuous operation and data loss prevention when a disk unit, controller, or I/O processor fails, you need duplicate disk controllers as well as I/O processors. A model upgrade can be done to achieve nearly continuous operation and to prevent data loss when any of these failures occur. This includes a bus failure. If Bus 1 fails, the system cannot continue operation. 
Because bus failures are rare, and bus-level protection is not significantly greater than I/O processor-level protection, you may not find a model upgrade to be cost-effective for your protection needs. Mirrored protection has a minimal reduction in system performance. If the buses, I/O processors, and controllers are no more heavily loaded on a system with mirrored protection than they would be on an equivalent system without mirrored protection, the performance of the two systems should be approximately the same. In deciding whether to use mirrored protection on your system, evaluate and compare the cost of potential downtime against the cost of additional hardware over the life of the system. The additional cost in performance or system complexity is usually negligible. Also consider other availability and recovery alternatives, such as device parity protection. Mirrored protection normally requires twice as many storage units. For concurrent maintenance and higher availability on systems with mirrored protection, other disk-related hardware may be required. Limitations Although mirrored protection can keep the system available after disk-related hardware failures occur, it is not a replacement for save procedures. There can be multiple types of disk-related hardware failures, or disasters (such as flood or sabotage) where recovery requires backup media. Mirrored protection cannot keep your system available if the remaining storage unit in the mirrored pair fails before the first failing storage unit is repaired and mirrored protection is resumed. If two failed storage units are in different mirrored pairs, the system is still available. Normal mirrored protection recovery is done because the mirrored pairs are not dependent on each other for recovery. If a second storage unit of the same mirrored pair fails, the failure may not result in a data loss. If the failure is limited to the disk electronics, or if the service representative can successfully use the Save Disk Unit Data function to recover all of the data (a function referred to as “pump”), no data is lost. 40 High Availability on the AS/400 System: A System Manager’s Guide If both storage units in a mirrored pair fail causing data loss, the entire ASP is lost and all units in the ASP are cleared. Be prepared to restore your ASP from the backup media and apply any journal changes to bring the data up to date. 4.3.4 Determining the level of mirrored protection The level of mirrored protection determines whether the system continues running when different levels of hardware fails. The level of protection is the amount of duplicate disk-related hardware that you have. The more mirrored pairs that have higher levels of protection, the more often your system is usable when disk-related hardware fails. You may decide that a lower level of protection is more cost-effective for your system than a higher level. The four levels of protection, from lowest to highest, are as follows: • Disk unit-level protection • Controller-level protection • Input/output processor-level protection • Bus-level protection When determining what level of protection is adequate, consider the relative advantages of each level of protection with respect to the following considerations: • The ability to keep the system operational during a disk-related hardware failure. • The ability to perform maintenance concurrently with system operations. 
To minimize the time that a mirrored pair is unprotected after a failure, you may want to repair failed hardware while the system is operating. During the start mirrored protection operation, the system pairs the disk units to provide the maximum level of protection for the system. When disk units are added to a mirrored ASP, the system pairs only those disk units that are added without rearranging the existing pairs. The hardware configuration includes the hardware and how the hardware is connected. The level of mirrored protection determines whether the system continues running when different levels of hardware fail. Mirrored protection always provides disk unit-level protection that keeps the system available for a single disk unit failure. To keep the system available for failures of other disk-related hardware requires higher levels of protection. For example, to keep the system available when an I/O processor (IOP) fails, all of the disk units attached to the failing IOP must have mirrored units attached to different IOPs. The level of mirrored protection also determines whether concurrent maintenance can be done for different types of failures. Certain types of failures require concurrent maintenance to diagnose hardware levels above the failing hardware component. An example would be diagnosing a power failure in a disk unit that requires resetting the I/O processor to which the failed disk unit is attached. In this case, IOP-level protection is required. The level of protection you get depends on the hardware you duplicate. If you duplicate disk units, you require disk unit-level protection. If you also duplicate disk unit controllers, you require controller-level protection. If you duplicate Hardware support for single system high availability 41 input/output processors, you require IOP-level protection. If you duplicate buses, you require bus-level protection. Mirrored units always have, at least, disk unit-level protection. Because most internal disk units have the controller packaged along with the disk unit, they have at least controller-level protection. 4.3.4.1 Disk unit-level protection All mirrored storage units have a minimum of disk unit-level protection if they meet the requirements for starting mirrored protection (storage units are duplicated). If your main concern is protecting data and not high availability, disk unit-level protection may be adequate. The disk unit is the most likely hardware component to fail, and disk unit-level protection keeps your system available after a disk unit failure. Concurrent maintenance is often possible for certain types of disk unit failures with disk unit-level protection. Figure 7 shows disk unit-level protection. The two storage units make a mirrored pair. With disk unit-level protection, the system continues to operate during a disk unit failure. If the controller or I/O processor fails, the system cannot access data on either of the storage units of the mirrored pair, and the system is unusable. Figure 7. Disk unit level protection 4.3.4.2 Controller-level protection If the planned disk units do not require a separate controller, you already have controller-level protection for as many units as possible and do not need to do anything else. If your planned disk units do require a separate controller, add as many controllers as possible while keeping within the defined system limits. Then balance the disk units among them according to the standard system configuration rules. 
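The level of protection for a mirrored pair follows directly from which hardware the two units share, so it can be expressed as a small decision rule. The sketch below is purely illustrative: the bus, IOP, and controller identifiers are hypothetical placeholders, not values reported by DST or SST.

from dataclasses import dataclass

@dataclass
class DiskUnit:
    bus: int
    iop: int
    controller: int

def protection_level(a: DiskUnit, b: DiskUnit) -> str:
    """Classify a mirrored pair by the highest hardware level that the
    two units do NOT share: different buses give bus-level protection,
    different IOPs on the same bus give IOP-level protection, and so on."""
    if a.bus != b.bus:
        return "bus-level protection"
    if a.iop != b.iop:
        return "IOP-level protection"
    if a.controller != b.controller:
        return "controller-level protection"
    return "disk unit-level protection"

# Example pairs (identifiers are made up for illustration only).
print(protection_level(DiskUnit(1, 1, 1), DiskUnit(1, 1, 1)))  # disk unit-level
print(protection_level(DiskUnit(1, 1, 1), DiskUnit(1, 1, 2)))  # controller-level
print(protection_level(DiskUnit(1, 1, 1), DiskUnit(1, 2, 1)))  # IOP-level
print(protection_level(DiskUnit(1, 1, 1), DiskUnit(2, 1, 1)))  # bus-level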
Bus Input/Output Programmer Disk Unit Disk Unit Controller 42 High Availability on the AS/400 System: A System Manager’s Guide To keep your system available when a controller fails, consider using concurrent maintenance. The controller must be dedicated to the repair action in this process. If any disk units attached to the controller do not have controller-level protection, concurrent maintenance is not possible. To achieve controller-level protection, all disk units must have a mirrored unit attached to a different controller. Most internal disk units have their controller packaged as part of the disk unit, so internal disk units generally have at least controller-level protection. Use problem recovery procedures in preparation for isolating a failing item or to verify a repair action. Figure 8 illustrates controller-level protection. Figure 8. Input/output controller-level protection The two storage units make a mirrored pair. With controller-level protection, the system can continue to operate after a disk controller failure. If the I/O processor fails, the system cannot access data on either of the disk units, and the system is unusable. 4.3.4.3 IOP-level protection If you want IOP-level protection and you do not already have the maximum number of IOPs on your system, add as many IOPs as possible while keeping within the defined system limits. Then, balance the disk units among them according to the standard system configuration rules. You may need to add additional buses to attach more IOPs. To achieve I/O processor-level protection, all disk units that are attached to an I/O processor must have a mirrored unit attached to a different I/O processor. On many systems, I/O processor-level protection is not possible for the mirrored pair for Unit 1. Bus Input/Output Programmer Disk Unit Disk Unit Controller Controller Hardware support for single system high availability 43 IOP-level protection is useful to: • Keep your system available when an I/O processor fails • Keep your system available when the cable attached to the I/O processor fails • Concurrently repair certain types of disk unit failures or cable failures. For these failures, concurrent maintenance needs to reset the IOP. If any disk units that are attached to the IOP do not have IOP-level protection, concurrent maintenance is not possible. Figure 9 illustrates IOP-level protection. Figure 9. IOP-level protection The two storage units make a mirrored pair. With IOP-level protection, the system continues to operate after an I/O processor failure. The system becomes unusable only if the bus fails. 4.3.4.4 Bus-level protection If you want bus-level protection, and you already have a multiple-bus system, nothing must be done. If your system is configured according to standard configuration rules, the mirrored pairing function pairs up storage units to provide bus-level protection for as many mirrored pairs as possible. If you have a single-bus system, you can add additional buses as a feature option on systems supporting multiple buses. Bus-level protection can allow the system to run when a bus fails. However, bus-level protection is often not cost-effective because of the following problems: • If Bus 1 fails, the system is not usable. • If a bus fails, disk I/O operations may continue, but so much other hardware is lost (such as work stations, printers, and communication lines) that, from a practical standpoint, the system is not usable. 
Bus Input/Output Programmer Disk Unit Controller Disk Unit Controller Input/Output Programmer 44 High Availability on the AS/400 System: A System Manager’s Guide • Bus failures are rare compared with other disk-related hardware failures. • Concurrent maintenance is not possible for bus failures. To achieve bus-level protection, all disk units that are attached to a bus must have a mirrored unit attached to a different bus. Bus-level protection is not possible for Unit 1. Figure 10 illustrates bus-level protection. Figure 10. Bus-level protection The two storage units make a mirrored pair. With bus-level protection, the system continues to operate after a bus failure. However, the system cannot continue to operate if Bus 1 fails. 4.3.5 Determining the hardware required for mirroring To communicate with the rest of the system, disk units are attached to controllers, which are attached to I/O processors, which are attached to buses. The number of each of these types of disk-related hardware that are available on the system directly affects the level of possible protection. Bus 1 Input/Output Programmer Disk Unit Controller Bus 2 Input/Output Programmer Disk Unit Controller Hardware support for single system high availability 45 To provide the best protection and performance, each level of hardware should be balanced under the next level of hardware. That is, the disk units of each device type and model should be evenly distributed under the associated controllers. The same number of controllers should be under each I/O processor for that disk type. Balance the I/O processors among the available buses. To plan what disk-related hardware is needed for your mirrored system, plan the total number and type of disk units (old and new) that are needed on the system, as well as the level of protection for the system. It is not always possible to plan for and configure a system so that all mirrored pairs meet the planned level of protection. However, it is possible to plan a configuration in which a very large percentage of the disk units on the system achieve the desired level of protection. When planning for additional disk-related hardware, perform the following steps: 1. Determine the minimum hardware that is needed for the planned disk units to function. Plan for one disk unit size (capacity) at a time. 2. Plan the additional hardware needed to provide the desired level of protection for each disk unit type Planning the minimum hardware needed to function Various rules and limits exist on how storage hardware can be attached. The limits may be determined by hardware design, architecture restrictions, performance considerations, or support concerns. For each disk unit type, first plan for the controllers that are needed and then for the I/O processors that are needed. After planning the number of I/O processors needed for all disk unit types, use the total number of I/O processors to plan for the number of buses that are needed. 4.3.6 Mirroring and performance When mirrored protection is started, most systems show little difference in performance. In some cases, mirrored protection can improve performance. Generally, functions that mostly perform read operations experience equal or better performance with mirrored protection. This is because read operations have a choice of two storage units to read from. The unit with the faster expected response time is selected. 
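The read-performance benefit comes from the fact that either unit of a mirrored pair can satisfy a read, while every write must reach both units. The toy model below only illustrates that idea; the queue length used to stand in for the "expected response time" the system actually estimates is an assumption for the example.

class MirroredPair:
    """Toy model: reads go to the less busy unit, writes go to both."""

    def __init__(self):
        self.pending = [0, 0]  # outstanding operations per unit (illustrative)

    def read(self) -> int:
        # Choose the unit expected to respond faster; here the shorter
        # queue stands in for that estimate.
        unit = 0 if self.pending[0] <= self.pending[1] else 1
        self.pending[unit] += 1
        return unit

    def write(self) -> None:
        # Every change must be written to both units of the pair.
        self.pending[0] += 1
        self.pending[1] += 1

pair = MirroredPair()
pair.write()
print("read served by unit", pair.read())
print("outstanding operations per unit:", pair.pending)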
Operations that mostly perform write operations (such as updating database records) may see slightly reduced performance on a system that has mirrored protection because all changes must be written to both storage units of the mirrored pair. Therefore, restore operations are slower. In some cases, if the system ends abnormally, the system cannot determine whether the last updates were written to both storage units of each mirrored pair. If the system is not sure that the last changes were written to both storage units of the mirrored pair, the system synchronizes the mirrored pair by copying the data in question from one storage unit of each mirrored pair to the other storage unit. The synchronization occurs during the IPL that follows the abnormal system end. If the system can save a copy of main storage before it ends, the synchronization process takes just a few minutes. If not, the synchronization process can take much longer. The extreme case could be close to a complete synchronization. 46 High Availability on the AS/400 System: A System Manager’s Guide If you have frequent power outages, consider adding an uninterruptible power supply (UPS) to your system. If main power is lost, the UPS allows the system to continue. A basic UPS supply provides the system with enough time to save a copy of main storage before ending. This prevents a long recovery. Both storage units of the load source mirrored pair must be powered by the basic UPS. 4.3.7 Determining the extra hardware required for performance Mirrored protection normally requires additional disk units and input/output processors. However, in some cases, you may need additional hardware to achieve the level of desired performance. Use these points to decide how much extra hardware you may need: • Mirrored protection causes a minor increase in central processing unit usage (approximately 1% to 2%). • Mirrored protection requires storage in the machine pool for general purposes and for each mirrored pair. If you have mirrored protection, increase the size of your machine pool by approximately 12 KB for each 1 GB of mirrored disk storage (12 KB for 1 GB DASD, 24 KB for 2 GB DASD, etc.). • During synchronization, mirrored protection uses an additional 512 KB of memory for each mirrored pair being synchronized. The system uses the pool with the most storage. • To maintain equivalent performance after starting mirrored protection, your system should have the same ratio of disk units to I/O processors as it did before. To add I/O processors, you may need to upgrade your system for additional buses. Note: Because of the limit on buses and I/O processors, you may not be able to maintain the same ratio of disk units to I/O processors. In this case, system performance may degrade. 4.4 Remote DASD mirroring support Standard DASD mirroring support requires that both disk units of the load source mirrored pair (Unit 1) are attached to the Multi-function I/O Processor (MFIOP). This allows the system to IPL from either load source in the mirrored pair and allows the system to dump main storage to either load source if the system ends abnormally. However, since both load sources must be attached to the same I/O Processor (IOP), the best mirroring protection possible for the load source mirrored pair is controller-level protection. To provide a higher level of protection for your system, use remote load source mirroring and remote DASD mirroring. 
Remote DASD mirroring support, when combined with remote load source mirroring, mirrors the DASD on local optical buses with the DASD on optical buses that terminate at a remote location. In this configuration, the entire system, including the load source, can be protected from a site disaster. If the remote site is lost, the system can continue to run on the DASD at the local site. If the local DASD and system unit are lost, a new system unit can be attached to the set of DASD at the remote site, and system processing can be resumed. Remote DASD mirroring, like standard DASD mirroring, supports mixing device-parity-protected disk units in the same ASP with mirrored disk units. The device parity DASD can be located at either the local or the remote site. However, Hardware support for single system high availability 47 if a site disaster occurs at the site containing the device parity DASD, all data in the ASPs containing the device parity DASD is lost. Remote mirroring support makes it possible to divide the disk units on your system into a group of local DASD and a group of remote DASD. The remote DASD is attached to one set of optical buses and the local DASD to another set of buses. The local and remote DASD can be physically separated from one another at different sites by extending the appropriate optical buses to the remote site. 4.4.1 Remote load source mirroring Remote load source mirroring support allows the two disk units of the load source to be on different IOPs or system buses. This provides IOP- or bus-level mirrored protection for the load source. However, in such a configuration, the system can only IPL from, or perform a main storage dump to, the load source attached to the MFIOP. If the load source on the MFIOP fails, the system can continue to run on the other disk unit of the load source mirrored pair, but the system is not able to IPL or perform a main storage dump until the load source attached to the MFIOP is repaired and usable. 4.4.2 Enabling remote load source mirroring To use remote load source mirroring support, remote load source mirroring must first be enabled. Mirrored protection must then be started for ASP 1. If remote load source mirroring support is enabled after mirrored protection has already been started for ASP 1, the existing mirrored protection and mirrored pairing of the load source must not change. Remote load source mirroring support can be enabled in either the DST or the SST environment. If you attempt to enable remote load source mirroring, and it is currently enabled, the system displays a message that remote load source mirroring is already enabled. There are no other errors or warnings for enabling remote load source mirroring support. If the remote load source is moved to the MFIOP, the IOP and system may not recognize it because of the different DASD format sizes used by different IOPs. If the remote load source is missing after it has been moved to the MFIOP, use the DST Replace disk unit function to replace the missing load source with itself. This causes the DASD to be reformatted so that the MFIOP can use it. The disk unit is then synchronized with the active load source. Remote load source mirroring may be disabled from either DST or SST. However, disabling remote load source mirroring is not allowed if there is a load source disk unit on the system that is not attached to the MFIOP. 
If you attempt to disable remote load source mirroring support and it is currently disabled, the system displays a message that remote load source mirroring is already disabled. 4.4.3 Using remote load source mirroring with local DASD Remote load source mirroring can be used to achieve IOP-level or bus-level protection of the load source mirrored pair, even without remote DASD or buses on the system. There is no special setup required, except for ensuring that a disk unit of the same capacity as the load source is attached to another IOP or bus on the system. To achieve bus-level protection of all mirrored pairs in an ASP, configure your system so that no more than one-half of the DASD of any given capacity in that ASP are attached to any single bus. To achieve IOP-level 48 High Availability on the AS/400 System: A System Manager’s Guide protection of all mirrored pairs in an ASP, have no more than one-half of the DASD of any given capacity in the ASP attached to any single IOP. There is no special start mirroring function for remote load source support. The system detects that remote load source mirroring is enabled and automatically pairs up disk units to provide the best level of possible protection. It is not possible to override or influence the pairing of the disk units other than by changing the way the hardware of the system is connected and configured. Normal mirroring restrictions that concern total ASP capacity, an even number of disk units of each capacity, and other such considerations, apply. 4.4.3.1 Remote DASD mirroring: Advantages Advantages of remote DASD mirroring include: • Providing IOP-level or bus-level mirrored protection for the load source • Allowing the DASD to be divided between two sites, mirroring one site to another, to protect against a site disaster 4.4.3.2 Remote DASD mirroring: Disadvantages Disadvantages of remote DASD mirroring include: • A system that uses Remote DASD Mirroring is only able to IPL from one DASD of the load source mirrored pair. If that DASD fails and cannot be repaired concurrently, the system cannot be IPLed until the failed load source is fixed and the remote load source recovery procedure is performed. • When Remote DASD Mirroring is active on a system, and the one load source the system can use to IPL fails, the system cannot perform a main storage dump if the system ends abnormally. This means that the system cannot use the main storage dump or continuously-powered main store (CPM) to reduce recovery time after a system crash. It also means that the main storage dump is not available to diagnose the problem that causes the system to end abnormally. 4.5 Planning your mirroring installation If you decide that remote DASD mirroring is right for your system, prepare your system and then start site-to-site mirroring. Determine whether your system is balanced and meets standard configuration rules. The system must be configured according to the standard rules for the mirrored pairing function to pair up storage units to provide the best possible protection from the hardware that is available. Plan for the new units to add for each ASP. If you plan to start mirrored protection on a new system, that system is already configured according to standard configuration rules. If you are using an older system, it may not follow the standard rules. However, wait until after attempting to start mirrored protection before reconfiguring any hardware. When considering mirrored protection, review these planning steps: 1. 
Decide which ASP or ASPs to protect. 2. Determine the disk storage capacity requirements. 3. Determine the level of protection that is needed for each mirrored ASP. 4. Determine what extra hardware is necessary for mirrored protection. 5. Determine what extra hardware is needed for performance. Hardware support for single system high availability 49 In general, the units in an ASP should be balanced across several I/O processors, rather than all being attached to the same I/O processor. This provides better protection and performance. Plan the user ASPs that have mirrored protection and determine what units to add to the ASPs. Refer to Chapter 5, “Auxiliary storage pools (ASPs)” on page 63, for more information about ASPs. 4.5.1 Comparing DASD management with standard and remote mirroring For the most part, managing DASD with remote mirroring is similar to managing DASD with standard mirroring. The differences are in how you add disk units and how you restore mirrored protection after a recovery. Adding disk units Unprotected disk units must be added in pairs just as with general mirroring. To achieve remote protection of all added units, one half of the new units of each capacity of DASD should be in the remote group and one half in the local group. Single device-parity protected units may be added to ASPs using remote mirroring. However, the ASP is not protected against a site disaster. 4.6 Device parity protection Device parity protection is a high availability hardware function (also known as RAID-5) that protects data from loss due to a disk unit failure or because of damage to a disk. It allows the system to continue to operate when a disk unit fails or disk damage occurs. The system continues to run in an exposed mode until the damaged unit is repaired and the data is synchronized to the replaced unit. To protect data, the disk controller or input/output processor (IOP) calculates and saves a parity value for each bit of data. Parity protection is built into many IOPs. It is activated for disk units that are attached to those IOPs. Device parity involves calculating and saving a parity value for each bit of data. Conceptually, the parity value is computed from the data at the same location on each of the other disk units in the device parity set. When a disk failure occurs, the data on the failing unit is reconstructed using the saved parity value and the values of bits in the same location on other disks. The system continues to run while the data is being reconstructed. Logically, the implementation of device parity protection is similar to the system checksum function. However, device parity is built into the hardware. Checksum, on the other hand, is started or stopped using configuration options on the AS/400 system menu. If a failure occurs, correct the problem quickly. In the unlikely event that another disk fails in the same parity set, you may lose data. Recommendation 50 High Availability on the AS/400 System: A System Manager’s Guide The overall goal of device parity protection is to provide high availability and to protect data as inexpensively as possible. If possible, protect all the disk units on your system with device parity protection or mirrored protection. This prevents the loss of information when a disk failure occurs. In many cases, you can also keep your system operational while a disk unit is being repaired or replaced. Before using device parity protection, note the benefits that are associated with it, as well as the costs and limitations. 
Some device parity protection advantages: • It can prevent your system from stopping when certain types of failures occur. • It can speed up your recovery process for certain types of failures, such as a site disaster or an operator or programmer error. • Lost data is automatically reconstructed by the disk controller after a disk failure. • The system continues to run after a single disk failure. • A failed disk unit can be replaced without stopping the system. • Device parity protection reduces the number of objects that are damaged when a disk fails. Some device parity protection disadvantages: • It is not a substitute for a backup and recovery strategy. • It does not provide protection from all types of failures, such as a site disaster or an operator or programmer error. • Device parity protection can require additional disk units to prevent slower performance. • Restore operations can take longer when you use device parity protection. System checksum is another disk protection method similar to device parity. Checksum is not supported on RISC systems, and it is not discussed in this Redpaper. You can find information on checksum in Backup and Recovery, SC41-5306. Note Device parity protection is not a substitute for a backup and recovery strategy. Device parity protection can prevent your system from stopping when certain types of failures occur. It can speed up your recovery process for certain types of failures. But device parity protection does not protect you from many types of failures, such as system outages that are caused by failures in other disk-related hardware (for example, disk controllers, disk I/O processors, or a system bus). Remember Hardware support for single system high availability 51 For information on planning for device parity protection, refer to Appendix B, “Planning for device parity protection” on page 147. 4.6.1 How device parity protection affects performance Device parity protection requires extra I/O operations to save the parity data. This may cause a performance problem. To avoid this problem, some IOPs contain a non-volatile write cache that ensures data integrity and provides faster write capabilities. The system is notified that a write operation is complete as soon as a copy of the data is stored in the write cache. Data is collected in the cache before it is written to a disk unit. This collection technique reduces the number of physical write operations to the disk unit. Because of the cache, performance is generally about the same on protected and unprotected disk units. Applications that have many write requests in a short period of time, such as batch programs, can adversely affect performance. A single disk unit failure can adversely affect the performance for both read and write operations. The additional processing that is associated with a disk unit failure in a device parity set can be significant. The decrease in performance is in effect until the failed unit is repaired (or replaced) and the rebuild process is complete. If device parity protection decreases performance too much, consider using mirrored protection. 
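Conceptually, the parity data that the controller or IOP maintains behaves like a bit-wise exclusive OR across the members of the parity set, which is what allows a failed member to be rebuilt from the survivors. The short sketch below illustrates only that concept, not the actual IOP implementation; the byte values are invented for the example.

def parity_of(stripes):
    """XOR the same-position bytes of every member of the set."""
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, byte in enumerate(stripe):
            parity[i] ^= byte
    return bytes(parity)

# Data held at the same locations on three disk units of one parity set.
disks = [b"\x10\x22\x37", b"\x0f\x90\x41", b"\x55\x00\xaa"]
parity = parity_of(disks)

# Disk 1 fails: its contents are rebuilt from the surviving disks plus parity.
rebuilt = parity_of([disks[0], disks[2], parity])
print(rebuilt == disks[1])  # True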
These topics provide additional details on how a disk unit failure affects performance: • Disk unit failure in a device parity protection configuration • Input/output operations during a rebuild process • Read operations on a failed disk unit • Write operations on a failed disk unit 4.6.1.1 Disk unit failure in a device parity protection configuration The write-assist device is suspended when a disk unit failure occurs in a subsystem with device parity protection. If the write-assist device fails, it is not used again until the repair operation is completed. The performance advantage of the write-assist device is lost until the disk unit is repaired. The subsystems with device parity protection are considered to be exposed until the synchronization process completes after replacing the failed disk unit. While the disk unit is considered exposed, additional I/O operations are required. 4.6.1.2 Input/output operations during a rebuild process I/O operations during the rebuild (synchronization) process of the failed disk unit may not require additional disk I/O requests. This depends on where the data is read from or written to on the disk unit that is in the synchronization process. For example: • A read operation from the disk area that already has been rebuilt requires one read operation. • A read operation from the disk area that has not been rebuilt is treated as a read operation on a failed disk unit. • A write operation to the disk that has already been rebuilt requires normal read and write operations (two read operations and two write operations). • A write operation to the disk area that has not been rebuilt is treated as a write operation to a failed disk unit. 52 High Availability on the AS/400 System: A System Manager’s Guide Note: The rebuild process takes longer when read and write operations to a replaced disk unit are also occurring. Every read request or every write request interrupts the rebuild process to perform the necessary I/O operations. 4.6.2 Using both device parity protection and mirrored protection Device parity protection is a hardware function. Auxiliary storage pools and mirrored protection are software functions. When you add disk units and start device parity protection, the disk subsystem or IOP is not aware of any software configuration for the disk units. The software that supports disk protection is aware of which units have device parity protection. These rules and considerations apply when mixing device parity protection with mirrored protection: • Device parity protection is not implemented on ASP boundaries. • Mirrored protection is implemented on ASP boundaries. • You can start mirrored protection for an ASP even if it currently has no units that are available for mirroring because they all have device parity protection. This ensures that the ASP is always fully protected, even if you add disks without device parity protection later. • When a disk unit is added to the system configuration, it may be device parity protected. • For a fully-protected system, you should entirely protect every ASP by device parity protection, by mirrored protection, or both. • Disk units that are protected by device parity protection can be added to an ASP that has mirrored protection. The disk units that are protected by device parity protection do not participate in mirrored protection (hardware protects them already). 
• When you add a disk unit that is not protected by device parity protection to an ASP that has mirrored protection, the new disk unit participates in mirrored protection. Disk units must be added to, and removed from, a mirrored ASP in pairs with equal capacities. • Before you start device parity protection for disk units that are configured (assigned to an ASP), you must stop mirrored protection for the ASP. • Before you stop device parity protection, you must stop mirrored protection for any ASPs that contain affected disk units. • When you stop mirrored protection, one disk unit from each mirrored pair becomes non-configured. You must add the non-configured units to the ASP again before starting mirrored protection. 4.7 Comparing the disk protection options There are several methods for configuring your system to take advantage of the disk protection features. Before selecting the disk protection options that you want to use, compare the extent of protection that each one provides. Table 2 provides an overview of the availability tools that can be used on the AS/400 system to protect against different types of failure.

Table 2. Availability tools for the AS/400 system

What is needed                                        Device parity protection   Mirrored protection   User ASPs
Protection from data loss due to disk-related
hardware failure                                      Yes - See Note 1           Yes                   Yes - See Note 4
Maintain availability                                 Yes                        Yes                   No
Help with disk unit recovery                          Yes - See Note 1           Yes                   Yes - See Note 4
Maintain availability when disk controller fails     See Note 3                 Yes - See Note 2      No
Maintain availability when disk I/O processor fails  No                         Yes - See Note 2      No
Maintain availability when disk I/O bus fails        No                         Yes - See Note 2      No
Site disaster protection                              No                         Yes - See Note 5      No

Notes:
1. Load source unit and any disk units attached to the MFIOP are not protected.
2. Depends on hardware used, configuration, and level of mirrored protection.
3. With device parity protection using the 9337 Disk Array Subsystem, the system becomes unavailable if a controller is lost.
4. With device parity protection using the IOP feature, the system is available as long as the IOP is available.
5. Configuring ASPs can limit the loss of data and the recovery to a single ASP.
6. For site disaster protection, remote mirroring is required.

Be aware of the following considerations when selecting disk protection options: • With both device parity protection and mirrored protection, the system continues to run after a single disk failure. With mirrored protection, the system may continue to run after the failure of a disk-related component, such as a controller or an IOP. • When a second disk failure occurs, meaning that the system has two failed disks, the system is more likely to continue to run with mirrored protection than with device parity protection. With device parity protection, the probability of the system failing on the second disk failure can be expressed as roughly n out of P, where P is the total number of disks on the system and n is the number of disks in the device parity set that contained the first failed disk. With mirrored protection, the probability of the system failing on the second disk failure is roughly 1 out of P. • Device parity protection requires up to 25% additional disk capacity for storage of parity information. The actual increase depends on the number of disk units that are assigned to a device parity set.
A system with mirrored protection requires twice as much disk capacity as the same system without mirrored protection. This is because all information is stored twice. Mirrored protection may also require more buses, IOPs, and disk controllers, depending on the level of protection that you want. Therefore, mirrored protection is usually a more expensive solution than device parity protection. • Usually, neither device parity protection or mirrored protection has a noticeable effect on system performance. In some cases, mirrored protection actually improves system performance. The restore time to disk units protected by device parity protection is slower than the restore time to the same disk devices without device parity protection activated. This is because the parity data must be calculated and written. 4.8 Concurrent maintenance Concurrent maintenance is the process of repairing or replacing a failed disk-related hardware component while using the system. Concurrent maintenance allows disks, I/O processors, adapters, power supplies, fans, CD-ROMs, and tapes to be replaced without powering down the server. On systems without mirrored protection, the system is not available when a disk-related hardware failure occurs. It remains unavailable until the failed hardware is repaired or replaced. However, with mirrored protection, the failing hardware can often be repaired or replaced while the system is being used. Concurrent maintenance support is a function of system unit hardware packaging. Not all systems support concurrent maintenance. Mirrored protection only provides concurrent maintenance when it is supported by the system hardware and packaging. The best hardware configuration for mirrored protection also provides for the maximum amount of concurrent maintenance. It is possible for the system to operate successfully through many failures and repair actions. For example, a failure of a disk head assembly does not prevent the system from operating. A replacement of the head assembly and synchronization of the mirrored unit can occur while the system continues to run. The greater your level of protection, the more often concurrent maintenance can be performed. On some models, the system restricts the level of protection for Unit 1 and its mirrored unit to controller-level protection only. Under some conditions, diagnosis and repair can require active mirrored units to be suspended. You may prefer to power down the system to minimize the exposure of operating with less mirrored protection. Some repair actions require that the system be powered down. Deferred maintenance is the process of waiting to repair or replace a failed disk-related hardware component until the system can be powered down. The system is available, although mirrored protection is reduced by whatever hardware components have failed. Deferred maintenance is only possible with mirrored protection or device parity protection. Hardware support for single system high availability 55 4.9 Redundancy and hot spare The basic rule for making a server system highly available is to use redundant parts where needed and affordable. Just like the basic idea behind Redundant Array of Independent Disks (RAID), all parts in a server are subject to be a single point of failure. These part can include: • The CPU • The power supply • The main logical board • The main memory • Adapter cards These are all parts that, if even one fails, the overall system is rendered unusable. 
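Standard reliability arithmetic (not taken from this Redpaper, and assuming the components fail independently) shows why duplicating a single point of failure pays off: components in series multiply their availabilities, while a redundant pair fails only when both members fail. The availability figures below are invented for illustration.

def series(*availabilities):
    """A chain of single points of failure: all components must be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def redundant_pair(a):
    """Two identical components where either one keeps the system up."""
    return 1.0 - (1.0 - a) ** 2

cpu, power, disk = 0.999, 0.995, 0.99  # hypothetical availabilities

print(f"all parts single:        {series(cpu, power, disk):.5f}")
print(f"power supply duplicated: {series(cpu, redundant_pair(power), disk):.5f}")
print(f"power and disk doubled:  {series(cpu, redundant_pair(power), redundant_pair(disk)):.5f}")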
To decrease the time involved in replacing a defective component, some customers consider implementing what is known as a hot spare. In effect, the customer keeps a local inventory of any component that either: • Has a higher failure rate than usual • Has a long lead-time when a replacement is required Note: The term hot spare typically refers to a disk unit. However, the same concept applies to a hot site or another system used for recovery. Planning for spare disk units Spare disk units can reduce the time the system runs without mirrored protection after a disk unit failure of a mirrored pair. If a disk unit fails, and a spare unit of the same capacity is available, that spare unit can be used to replace the failed unit. The system logically replaces the failed unit with the selected spare unit. It then synchronizes the new unit with the remaining good unit of the mirrored pair. Mirrored protection for that pair is again active when synchronization completes (usually less than an hour). However, it may take several hours (from the time a service representative is called until the failed unit is repaired and synchronized) before mirrored protection is again active for that pair. To make full use of spare units, you need at least one spare unit of each capacity that you have on your system. This provides a spare for any size of disk unit that may fail. 4.10 OptiConnect: Extending a single system An OptiConnect cluster is a collection of AS/400 systems connected by dedicated fiber optic system bus cables. The systems in an OptiConnect cluster share a common external optical system bus located in an expansion tower or frame. The system providing the shared system bus is called the hub system. Each system that plugs into this shared bus with an OptiConnect Bus Receiver card is called a satellite system. Each satellite system dedicates one of its external system buses that connects to the receiver card in the hub system’s expansion tower or rack. The term OptiConnect link refers to the fiber optic connection between systems in the OptiConnect cluster. The term path refers to the logical software established connection between two OptiConnect systems. An OptiConnect network always 56 High Availability on the AS/400 System: A System Manager’s Guide consists of at least two AS/400 systems. One of the AS/400 systems is designated as the hub and at least one other system is designated as a satellite. There are two levels of redundancy available in an OptiConnect cluster: • Link redundancy: Link redundancy is an optical bus hardware feature. Any two systems attached to the hub system shared bus can establish a path between them, including paths to the hub system itself. You can establish path redundancy by configuring two hub systems in the OptiConnect cluster. Each satellite uses two buses to connect with two hub systems. OptiConnect software detects the two logical paths between the two systems and uses both paths for data flow. If a path failure occurs, the remaining path picks up all of the communication traffic. • Path redundancy: The OS/400 infrastructure for any system determines the logical path to another system. It does this by designating which system bus each of the systems that form the path uses. The link between any two satellite systems does not depend on the hub system bus. The two systems use the bus, but the hub system is not involved. Link redundancy is determined by the system models. 
For OptiConnect clusters, link redundancy is always provided when the extra fiber optic cable is installed. For path redundancy, an extra set of OptiConnect receiver cards and an extra expansion tower or frame are required along with another set of cables. OptiConnect for OS/400 is an AS/400-to-AS/400 communication clustering solution. It combines unique OptiConnect fiber bus hardware and standard AS/400 bus hardware with unique software. It uses distributed data management (DDM) to allow applications on one AS/400 system to access databases located on other AS/400 systems. The AS/400 systems that contain the databases are the database servers. The remote systems are considered the application client or clients. In most cases, the hub also acts as the database server. Since all systems can communicate with each other (providing that the hub is active), any system can be the client. Some OptiConnect configurations have AS/400 systems that act simultaneously as a server and a client. However, any system can act as a database server. OptiConnect for OS/400 is a communications vehicle. OptiConnect for OS/400 products provide AS/400 systems with physical links for a high availability clustered solution. OptiConnect for OS/400 components support the infrastructure for applications to conduct data exchanges over high speed connections. OptiConnect for OS/400 does not offer high availability with applications that utilize the hardware links. OptiConnect for OS/400 can be the transport mechanism for in-house developed applications, business partner software, or remote journal support. Further information on OptiConnect is found in 6.8, “Bus level interconnection” on page 82, and 6.8.1, “Bus level interconnection and a high availability solution” on page 84. Hardware support for single system high availability 57 4.11 Cluster support For planned or unplanned outages, clustering and system mirroring offer the most effective solution. For customers requiring better than 99.9% system availability, AS/400 clusters are viable. Cluster solutions connect multiple AS/400 systems together with various interconnect fabrics, including high-speed optical fiber, to offer a solution that can deliver up to 99.99% system availability. High availability is achieved with an alternative system that replicates the availability of the production system. These systems are connected by high-speed communications and use replication software to achieve this. They also require enough DASD to replicate the whole or critical part of the production system. With the entire system replicated, the mirrored system can enable more than just a disaster recovery solution. Combining these clusters with software from AS/400 high-availability business partners (such as those described in Chapter 10, “High availability business partner solutions” on page 111) improves the availability of a single AS/400 system by replicating business data to one or more AS/400 systems. This combination can provide a disaster recovery solution. Clusters are a configuration or a group of independent servers that appear on a network as a single machine. As illustrated in Figure 11, a cluster is a collection of complete systems that work together to provide a single and unified computing resource Figure 11. Cluster definition This cluster group is managed as a single system or operating entity and is designed specifically to tolerate component failures and to support the addition or subtraction of components in a way that is transparent to users. 
Clusters allow you to efficiently group systems together to set up an environment that provides availability approaching 100% for critical applications and critical data. Resources can be accessed without regard to location. A client interacts with a cluster as if it were a single system. With the introduction of clusters, the AS/400e system offers a continuous availability solution if your business demands operational systems 24 hours a day, 365 days a year (24 x 365). This solution, called OS/400 Cluster Resource Services, is part of the OS/400 operating system. It provides failover and switchover capabilities for your systems that are used as database servers or application servers. If a system outage or a site loss occurs, the functions that are provided on a cluster server system can be switched over to one or more designated backup systems that contain a current copy (replica) of your critical resource. The failover can be automatic if a system failure occurs, or you can control how and when the transfer takes place by manually initiating a switchover. Cluster management tools control the cluster from anywhere in the network. End users work on servers in the cluster without knowing or caring where their applications are running. In the event of a failure, Cluster Resource Services (CRS), which is running on all systems, provides a switchover. This switch causes minimal impact to the end user or applications that are running on a server system. Data requesters are automatically rerouted to the new primary system. You can easily maintain multiple replicas of the same data. Any AS/400 model that can run OS/400 V4R4 or later can implement clustering. You must configure Transmission Control Protocol/Internet Protocol (TCP/IP) on your AS/400e systems before you can implement clustering. In addition, you can purchase a cluster management package from a High Availability Business Partner (HAV BP) that provides the required replication functions and cluster management capabilities. Refer to AS/400 Clusters: A Guide to Achieving Higher Availability, SG24-5194, for further information. 4.12 LPAR hardware perspective Logical partitions allow you to run multiple independent OS/400 instances or partitions. Figure 12 shows a basic LPAR configuration. For V4R5, each partition has its own processors, memory, and disks. For V5R1, resources can be shared between partitions. With logical partitioning, you can address multiple system requirements in a single machine to achieve server consolidation, business unit consolidation, and mixed production and test environments. You can run a cluster environment on a single system image. LPAR support is available on n-way symmetric multiprocessing iSeries models 8xx and AS/400 models 6xx, Sxx, and 7xx. See 7.5, “Logical Partition (LPAR) support” on page 93. Figure 12. A basic LPAR configuration (an 840 8-way AS/400 with three partitions, SYS1, SYS2, and SYS3, hosting a production environment, two development environments, and new release testing on a mix of V4R4 and V4R5) By itself, LPAR does not provide a significant availability increase. It can, however, be used to complement other availability strategies. See 7.5, “Logical Partition (LPAR) support” on page 93, for a discussion of LPAR from an OS/400 view.
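Before looking at clustering with LPAR, it may help to make the failover and switchover terms from 4.11 concrete. The following is a deliberately simplified sketch of a recovery domain; it does not use the OS/400 Cluster Resource Services APIs, and the node names and the heartbeat test are hypothetical.

class ToyCluster:
    """Simplified recovery domain: the first node is primary, the rest are backups."""

    def __init__(self, nodes):
        self.nodes = list(nodes)            # ordered: primary first
        self.up = {n: True for n in nodes}  # node status, as a heartbeat would report it

    def primary(self):
        return self.nodes[0]

    def switchover(self):
        """Planned, operator-initiated role swap (for example, for maintenance)."""
        self.nodes.append(self.nodes.pop(0))
        return self.primary()

    def heartbeat(self):
        """Unplanned failover: promote the first available backup if the
        current primary is no longer responding."""
        if not self.up[self.primary()]:
            self.nodes = [n for n in self.nodes if self.up[n]]
        return self.primary()

cluster = ToyCluster(["PROD", "LOCALBKP", "REMOTEBKP"])
print(cluster.switchover())       # LOCALBKP takes over for planned work
cluster.up["LOCALBKP"] = False    # simulate a failure of the new primary
print(cluster.heartbeat())        # REMOTEBKP becomes the primary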
4.12.1 Clustering with LPAR support
Since each partition on an LPAR system is treated as a separate server, you can run a cluster environment on a single system image. One cluster node per CPU can exist within one LPAR system. Clustering partitions can provide a more cost-efficient clustering solution than multiple systems. However, an LPAR clustered environment increases single points of failure. For example, if the server’s primary partition becomes unavailable, all secondary partitions also become unavailable (the opposite is not true). LPAR lends itself particularly well to situations where both a local and a remote backup server are desired. A good example is when a business works to provide its own disaster recovery capability. The highest level of availability is obtained with two separate servers. Figure 13 shows that, with clustering active, data is replicated to both the local backup server and the remote server. In the event of a disaster (or the need for the entire local hardware to be powered off), the remote backup server is available. In some cases, this is more cost-efficient (including floor space) than separate servers.
Integrated availability options: In most cases, it is recommended that integrated availability solutions be used with a cluster to further mask or reduce downtime and to increase a cluster's efficiency. Consider the following list:
• Disk protection: Device Parity Protection (RAID-5) and OS/400 Disk Mirroring
• Auxiliary storage pools (ASPs)
• Access path protection
• Logical Partitions (LPAR)
In all cases, it is highly recommended that these integrated availability options be used in a clustered environment, as well as on a standalone iSeries or AS/400 server.
Figure 13. LPAR, local and remote iSeries and AS/400 cluster (a partitioned local 8-way iSeries replicating, through HABP software, to a remote 4-way AS/400 that acts as a hot backup for disaster recovery)
4.13 UPS
An uninterruptible power supply (UPS) provides auxiliary power to the processing unit, disk units, the system console, and other devices that you choose to protect from power loss. When you use a UPS with the AS/400 system, you can:
• Continue operations during brief power interruptions (brownouts).
• Protect the system from voltage peaks (whiteouts).
• Provide a normal end of operations that reduces recovery time when the system is restarted. If the system ends abnormally before completing a normal end of operations, recovery time is significant.
Normally, a UPS does not provide power to all local workstations. The UPS also usually does not provide power to modems, bridges, or routers that support remote workstations. Consider supplying alternate power to both groups of devices, since workers who cannot access their information cannot be productive. You can avoid such disruption with proper availability and recovery implementation. Also, design your interactive applications to handle the loss of communication with a workstation. Otherwise, system resources are used in an attempt to recover devices that have lost power. Refer to Chapter 12, “Communications Error Recovery and Availability” in The System Administrator’s Companion to AS/400 Availability and Recovery, SG24-2161, for more information on resources used during device recovery.
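OS/400 system values control how the system reacts when the UPS reports a power condition. A minimal sketch follows; the library and message queue names are hypothetical, and the appropriate delay value depends on your UPS capacity and main storage size (check the value formats for your release):

/* Create a message queue for power messages and point the system at it */
CRTMSGQ    MSGQ(UPSLIB/UPSMSGQ) TEXT('UPS power handling messages')
CHGSYSVAL  SYSVAL(QUPSMSGQ) VALUE('UPSMSGQ   UPSLIB')
/* Let the system calculate how long to stay on UPS power before        */
/* starting an orderly shutdown; a number of seconds can be given       */
/* instead of *CALC                                                     */
CHGSYSVAL  SYSVAL(QUPSDLYTIM) VALUE('*CALC')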
The programming language reference manuals provide examples of how to use the error feedback areas to handle workstations that are no longer communicating with the application. Backup and Recovery, SC41-5304, describes how to develop programs to handle an orderly shutdown of the system when the UPS takes over. 4.14 Battery backup Most (but not all) AS/400 models are equipped with a battery backup. Based on the system storage size, relying on a battery backup for enough time for an orderly shutdown is not sufficient. The battery capacity typically varies between 10 and 60 minutes. The useful capacity depends on the application requirements, main storage size, and system configuration. Consider the reduction of capacity caused by the natural aging of the battery and environmental extremes of the site when selecting the battery. The battery must have the capacity to maintain the system load requirements at the end of its useful life. Refer to Backup and Recovery, SC41-5304, for power down times for the advanced series systems. Refer to the AS/400 Physical Planning Reference, SA41-5109, for power down times for the AS/400 Bnn-Fnn models. Also, refer to the Physical Planning Reference for later AS/400 models at the Web site: http://www.as400.ibm.com/tstudio/planning/index.rf.htm Hardware support for single system high availability 61 4.15 Continuously powered main storage On V3R6 systems and later, AS/400 systems are equipped with a System Power Control Network (SPCN) feature. This provides the Continuously Powered Main Storage (CPM) function. During a power fluctuation, the transition to CPM mode is 90 seconds after an initial 30 second waiting period. The internal battery backup provides sufficient power to keep the AS/400 system up for the 120 seconds until the transition to the CPM is complete. With CPM enabled, the battery provides sufficient power to shut down the system and maintain the contents of memory for up to 48 hours after a power loss without user interface or control. The transition to CPM is irreversible. CPM interrupts the processes at the next microcode end statement and forces as many updates to the disk as it can. During the next IPL, it restores main storage and attempts to complete outstanding updates. Preserving main storage contents significantly reduces the amount of time the system requires to perform an IPL after a power loss. CPM operates outside of transaction boundaries. You can use the CPM feature along with a UPS (or the battery backup). If the system detects that the UPS can no longer provide sufficient power to the system, the data currently in memory is put into “sleep” mode. The CPM storage feature takes control and maintains data in memory for up to 48 hours. With the CPM feature, the system automatically initiates an IPL after power is restored. CPM is a viable feature. Choosing to use CPM depends on your expectations of your local power and battery backup or generator to maintain power at all times. Refer to Backup and Recovery, SC41-5304, for more information on CPM requirements. 4.16 Tape devices For information on what tape devices are available for each AS/400 model, and the hardware and software requirements to support each model, refer to the iSeries Handbook, GA19-5486, and iSeries and AS/400e System Builder, SG24-2155. 
For save and restore performance rates, see Appendix C, “Save and Restore Rates of IBM Tape Drives for Sample Workloads”, and Section 8.1, “Save and Restore Performance” in the AS/400 Performance Capabilities Manual at: http://publib.boulder.ibm.com/pubs/pdfs/as400/V4R5PDF/AS4PPCP3.PDF 4.16.1 Alternate installation device On V4R1 (and later) systems, you can use a combination of devices that are attached on the first system bus, as well as additional buses. The alternate installation device does not need to be attached to the first system bus. For example, the 3590 tape drive can be positioned up to 500 meters or two kilometers away. This enables a physical security improvement since users who are allowed access to the machine room may be different than those operating the tape drives. 62 High Availability on the AS/400 System: A System Manager’s Guide You can select an alternate installation device connected through any I/O bus attached to the system. When you perform a D-mode IPL (D-IPL), you can use the tape device from another bus using the Install Licensed Internal Code display. For example, if you have a 3590 attached to another bus (other than Bus 1), you can choose to install from the alternate installation device using the Install Licensed Internal Code display and then continue to load the LIC, OS/400, and user data using the alternate installation device. Note: Set up alternate installation device support prior to performing a D-IPL. System Licensed Internal Code (SLIC) media is necessary to perform the D-IPL that restores and installs from the tape device. Some models (typically with 3590 tape devices attached) experience a performance improvement when using an alternate installation device for other save and restore or installation operations. This is caused by having the tape drive on a different IOP than the one to which the load source unit is attached. On systems prior to V4R1, the alternate installation device is only supported using devices attached to the first system bus. The first system bus connects to the service processor IOP. Typically, this is where the optical or tape devices used for installations are attached. Before using the alternate installation device, ensure that it is defined on a bus other than system Bus 1. You must enable the device. When installing from the alternate installation device, you need both your tape media and the CD-ROM media containing the Licensed Internal Code. Recommendation © Copyright IBM Corp. 2001 63 Chapter 5. Auxiliary storage pools (ASPs) An auxiliary storage pool (ASP) is a software definition of a group of disk units on your system. This means that an ASP does not necessarily correspond to the physical arrangement of disks. Conceptually, each ASP on your system is a separate pool of disk units for single-level storage. The system spreads data across the disk units within an ASP. If a disk failure occurs, you need to recover only the data in the ASP that contained the failed unit. Prior to V5R1, here are two types of ASPs: • System auxiliary storage pool • User auxiliary storage pools Your system may have many disk units attached to it that are optionally assigned to an auxiliary storage pool. To your system, the pool looks like a single unit of storage. The system spreads objects across all disk units. You can use auxiliary storage pools to separate your disk units into logical subsets. 
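Before dividing disk units into ASPs, it is useful to review the current disk configuration and how full it is. The following standard OS/400 commands can be run as-is; no special setup is assumed:

/* Status and utilization of each configured disk unit                  */
WRKDSKSTS
/* Overall system status, including the percentage of the system ASP    */
/* that is used                                                          */
DSPSYSSTS
/* Hardware view of the storage resources attached to the system        */
WRKHDWRSC  TYPE(*STG)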
When you assign the disk units on your system to more than one ASP, each ASP can have different strategies for availability, backup and recovery, and performance. ASPs provide a recovery advantage if the system experiences a disk unit failure resulting in data loss. If this occurs, recovery is only required for the objects in the ASP that contained the failed disk unit. System objects and user objects in other ASPs are protected from the disk failure. There are also additional benefits and certain costs and limitations that are inherent in using ASPs.
5.1 Deciding which ASPs to protect
Because mirrored protection is configured by auxiliary storage pool, the ASP is the user’s level of control over single-level storage. Mirrored protection can be used to protect one, some, or all ASPs on a system. However, multiple ASPs are not required in order to use mirrored protection. Mirrored protection works well when all disk units on a system are configured into a single ASP (the default on the AS/400 system). In fact, mirroring reduces the need to partition auxiliary storage into ASPs for data protection and recovery. However, ASPs may still be recommended for performance and other reasons. To provide the best protection and availability for the entire system, mirror all ASPs in the system.
Note: Independent ASPs are introduced at V5R1. At the time this Redpaper was written, the appropriate information was not available.
Consider the following situations:
• If the system has a mixture of some ASPs with and without mirrored protection, a disk unit failure in an ASP without mirrored protection severely limits the operation of the entire system. Data can be lost in the ASP in which the failure occurred. A long recovery may be required.
• If a disk fails in a mirrored ASP, and the system also contains ASPs that are not mirrored, data is not lost. However, in some cases, concurrent maintenance may not be possible.
The disk units that are used in user ASPs should be selected carefully. For best protection and performance, an ASP should contain disk units that are attached to several different I/O processors. The number of disk units in the ASP that are attached to each I/O processor should be the same (that is, balanced). ASPs are further discussed in Chapter 5, “Auxiliary storage pools (ASPs)” on page 63.
5.1.1 Determining the disk units needed
A mirrored ASP requires twice as much auxiliary storage as an ASP that is not mirrored. This is because the system keeps two copies of all the data in the ASP. Also, mirrored protection requires an even number of disk units of the same capacity so that disk units can be made into mirrored pairs. On an existing system, note that it is not necessary to add the same types of disk units already attached to provide the required additional storage capacity. Any new disk units may be added as long as sufficient total storage capacity and an even number of storage units of each size are present. The system assigns mirrored pairs and automatically moves the data as necessary. If an ASP does not contain sufficient storage capacity, or if storage units cannot be paired, mirrored protection cannot be started for that ASP. The process of determining the disk units needed for mirrored protection is similar for existing or new systems. Review the following points to plan disk requirements:
1. Determine how much data each ASP contains.
2.
Determine a target percent of storage used for the ASP (how full the ASP will be). 3. Plan the number and type of disk units needed to provide the required storage. For an existing ASP, you can plan a different type and model of disk unit to provide the required storage. After planning for all ASPs is complete, plan for spare units, if desired. Once you know all of this information, you can calculate your total storage needs. The planned amount of data and the planned percent of storage used work together to determine the amount of actual auxiliary storage needed for a mirrored ASP. For example, if an ASP contains 1 GB (GB equals 1,073,741,824 bytes) of actual data, it requires 2 GB of storage for the mirrored copies of the data. If 50% capacity is planned for that ASP, the ASP needs 4 GB of actual storage. If the planned percent of storage used is 66%, 3 GB of actual storage is required. One gigabyte of real data (2 GB of mirrored data) in a 5 GB ASP results in a 40% auxiliary storage utilization. Total planned storage capacity needs After planning for the number and type of storage units needed for each ASP on the system, and for any spare storage units, add the total number of storage units of each disk unit type and model. Auxiliary storage pools (ASPs) 65 The number planned is the number of storage units of each disk unit type, not the number of disk units. The following section provides a more detailed description. 5.2 Assigning disk units to ASPs If you decide that you want more than one auxiliary storage pool (ASP), make the following determinations for each ASP: • How much storage do you need? • What disk protection (if any) should you use? • Which disk units should be assigned? • Which objects should be placed in the ASP? The Workstation Customization Programming book, SC41-5605, provides information to help you with these considerations. This book is only available online at the AS/400 Library at: http://as400bks.rochester.ibm.com At the site, click AS/400 Information Center. Select your language and click GO! Click V4R4 and then click Search or view all V4R4 books. Enter the book number in the seach field and click Find. Finally, click the appropriate publication that appears. When you work with disk configuration, you may find it helpful to begin by making a list of all the disks and disk-related components on your system. You can put this information in a chart like Table 3, or you may want to draw a diagram. Table 3. Disk configuration example chart 5.3 Using ASPs User ASPs are used to manage the following system performance and availability requirements: • Provide dedicated resources for frequently used availability objects, such as journal receivers • Allow online and unattended saves. • Place infrequently used objects, such as large history files, on disk units with slower performance IOP Controller Unit Type and model Type and model Capacity Resource name Name of mirrored pair 1 00 01 1 6602-030 1031 1 DD001 1 10 01 2 6602-074 733 1 DD019 1 10 02 3 6602-070 1031 1 DD036 1 00 02 6 6602-030 1031 1 DD002 1 10 03 4 6602-074 773 3 DD005 1 10 04 5 6602-074 773 3 DD033 66 High Availability on the AS/400 System: A System Manager’s Guide 5.3.1 Using ASPs for availability Different parts of your system may have different requirements for availability and recovery. For example, you may have a large history file that is changed only at the end of the month. The information in the file is useful but not critical. 
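The mirrored-ASP sizing rule described in 5.1.1 (double the planned data for the two mirrored copies, then allow for the target percent of storage used) can be expressed as a small calculation. The following CL sketch uses hypothetical program and variable names and the 1 GB / 50% figures from the example above:

PGM
  /* Planned data in the ASP, in MB (1 GB = 1024 MB in this example)    */
  DCL        VAR(&DATAMB)  TYPE(*DEC)  LEN(9 0) VALUE(1024)
  /* Planned percent of storage used (50 means the ASP is half full)    */
  DCL        VAR(&PCTUSED) TYPE(*DEC)  LEN(3 0) VALUE(50)
  DCL        VAR(&NEEDMB)  TYPE(*DEC)  LEN(9 0)
  DCL        VAR(&NEEDC)   TYPE(*CHAR) LEN(9)
  DCL        VAR(&MSG)     TYPE(*CHAR) LEN(60)
  /* Two mirrored copies, divided by the target utilization:            */
  /* 1024 * 2 / 0.50 = 4096 MB of actual storage                        */
  CHGVAR     VAR(&NEEDMB) VALUE((&DATAMB * 2 * 100) / &PCTUSED)
  CHGVAR     VAR(&NEEDC) VALUE(&NEEDMB)
  CHGVAR     VAR(&MSG) VALUE('Mirrored ASP needs approximately' *BCAT &NEEDC *BCAT 'MB')
  SNDPGMMSG  MSG(&MSG)
ENDPGM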
You may put this file in a separate library in a user ASP that does not have any disk protection (mirrored protection or device parity protection). You could omit this library from your daily save operations and choose to save it only at the end of the month when it is updated. Another example would be documents and folders. Some documents and folders are critical to the organization and should be protected with device parity protection or mirrored protection. They can be put in a protected user ASP. Others are kept on the system to provide information but do not change very often. They can be in a different user ASP with a different strategy for saving and for protection.
5.3.2 Using ASPs to dedicate resources or improve performance
If you are using user ASPs for better system performance, consider dedicating the ASP to one object that is very active. In this case, you can configure the ASP with only one disk unit. However, it usually does not improve performance to place a single device-parity protected unit in a user ASP because the performance of that unit is affected by other disk units in the device parity set. Refer to Figure 14 for a visual example of multiple ASPs.
Figure 14. Auxiliary storage pools (the load source and system ASP 1, a user ASP 2 used for save/archive on compressed DASD, and a user ASP 3 used for journal receivers)
Allocating one user ASP exclusively for journal receivers that are attached to the same journal can improve journaling performance. By having the journal and database files in a separate ASP from the attached receivers, there is no contention for journal receiver write operations. The units that are associated with the ASP do not have to be repositioned before each read or write operation. Journaling uses as many as 10 disk arms when writing to a journal receiver. Configuring an ASP with more than 10 arms does not provide any additional performance advantage for journaling. However, if you do have an ASP with more than 10 arms, journaling uses the 10 fastest arms. If you add more disk units to the ASP while the system is active, the system determines whether to use the new disk units for journal receivers the next time the change journal function is performed. Another method for improving performance is to make sure that there are enough storage units in the user ASP to support the number of physical input and output operations that are done against the objects in the user ASP. You may have to experiment by moving objects to a different user ASP and then monitor performance in the user ASP to see if the storage units are used excessively. If the units show excessive use, you should consider adding more disk units to the user ASP.
5.3.3 Using ASPs with document library objects
You can place document library objects (DLOs) in user ASPs. The possible advantages of placing DLOs in user ASPs are:
• The ability to reduce save times for DLOs and to separate them by their save requirements.
• The ability to separate DLOs by availability requirements. Critical DLOs can be placed in user ASPs that are protected by mirrored protection or device parity protection. DLOs that change infrequently can be placed in unprotected ASPs with slower drives.
• The ability to grow to a larger number of documents.
If you have V3R7 or a later release of the OS/400 licensed program, you can run multiple SAVDLO or RSTDLO procedures against different ASPs.
If you have V4R1 or a later release of the OS/400 licensed program, you can run multiple SAVDLO operations on the same ASP. One approach for placing DLOs in user ASPs is to leave only system DLOs (IBM-supplied folders) in the system ASP. Move other folders to user ASPs. The system folders do not change frequently, so they can be saved infrequently. You can specify an ASP on the SAVDLO command. This allows you to save all the DLOs from a particular ASP on a given day of the week. For example, you could save DLOs from ASP 2 on Monday, DLOs from ASP 3 on Tuesday, and so on. You could save all changed DLOs daily. The recovery steps if you use this type of save technique would depend on what information was lost. If you lost an entire ASP, you would restore the last complete saved copy of DLOs from that ASP. You would then restore the changed DLOs from the daily saves. When you save DLOs from more than one ASP in the same operation, a different file and a sequence number are created on the tape for each ASP. When you restore, you must specify the correct sequence number. This makes it simple to restore the changed DLOs only to the ASP that was lost without needing to know all the folder names. These restrictions and limitations apply when placing DLOs in user ASPs: • When using a save file for a save operation, you can save DLOs from only one ASP. • When using an optical file for a save operation, you can save DLOs from only one ASP. 68 High Availability on the AS/400 System: A System Manager’s Guide • If you are saving to a save file and you specify SAVDLO DLO(*SEARCH) or SAVDLO DLO(*CHG), you must also specify an ASP, even if you know the results of you search are found in a single ASP. • Documents that are not in folders must be in the system ASP. • Mail can be filed into a folder on a user ASP. Unfiled mail is in the system ASP. Note: When you specify DLO(*SEARCH) or DLO(*CHG) for the SAVDLO command, specify an ASP, if possible. Specifying an ASP saves system resources. 5.3.4 Using ASPs with extensive journaling If journals and files being journaled are in the same ASP as the receivers and the ASP overflows, you must end journaling of all files and recover from the overflowed condition for the ASP. Backup and Recovery describes how to recover an overflowed ASP. If the journal receiver is in a different ASP than the journal, and the user ASP that the receiver is in overflows, perform the following steps: 1. Create a new receiver in a different user ASP. 2. Change the journal (CHGJRN command). 3. Save the detached receiver. 4. Delete the detached receiver. 5. Clear the overflowed ASP without ending journaling. 6. Create a new receiver in the cleared ASP. 7. Attach the new receiver with the CHGJRN command. 5.3.5 Using ASPs with access path journaling If you plan to use explicit access path journaling, IBM recommends that you first change the journal to a journal receiver in the system ASP (ASP 1) for a few days. Start access path journaling to see storage requirements for the receiver before you allocate the specific size for a user ASP. 5.3.6 Creating a new ASP on an active system Beginning with V3R6 of the OS/400 licensed program, you can add disk units while your system is active. When you add disk units to an ASP that does not currently exist, the system creates a new ASP. If you choose to create a new user ASP while your system is active, be sure you understand the following considerations: • You cannot start mirrored protection while the system is active. 
The new ASP is not fully protected unless all of the disk units have device parity protection. • You cannot move existing disk units to the new ASP while your system is active.The system must move data when it moves disk units. This can be done only through Dedicated Service Tools (DST). • The system uses the size of an ASP to determine the storage threshold for the journal receivers that are used by system-managed access-path protection (SMAPP). When you create an ASP while your system is active, the size of the disk units that you specify on the operation that creates the ASP is considered the size of the ASP for SMAPP. For example, assume that you add two disk units to a Auxiliary storage pools (ASPs) 69 new ASP (ASP 2). The total capacity of the two disk units is 2,062 MB. Later, you add two more disk units to increase the capacity to 4,124 MB. For the purposes of SMAPP, the size of the ASP remains 2,062 MB until the next time you perform an IPL. This means that the storage threshold of your SMAPP receivers is lower and the system must change receivers more often. Usually, this does not have a significant impact on system performance. The system determines the capacity of every ASP when you perform an IPL. At that time, the system makes adjustments to its calculations for SMAPP size requirements. 5.3.7 Making sure that your system has enough working space When you make changes to your disk configuration, the system may need working space. This is particularly true if you plan to move disk units from one ASP to another. The system needs to move all the data from the disk unit to other disk units before you move it. There are system limits for the amount of auxiliary storage. If your system does not have sufficient interim storage, begin by cleaning up your disk storage. Many times, users keep objects on the system, such as old spooled files or documents, when these objects are no longer needed. Consider using the automatic cleanup function of Operational Assistant to free some disk space on your system. If cleaning up unnecessary objects in auxiliary storage still does not provide sufficient interim disk space, another alternative is to remove objects from your system temporarily. For example, if you plan to move a large library to a new user ASP, you can save the library and remove it from the system. You can then restore the library after you have moved disk units. Here is an example of how to accomplish this: 1. Save private authorities for the objects on your system by typing SAVSECDTA DEV (tape-device). 2. Save the object by using the appropriate SAVxxx command. For example, to save a library, use the SAVLIB command. Consider saving the object twice to two different tapes. 3. Delete the object from the system by using the appropriate DLTxxx command. For example, to delete a library, use the DLTLIB command. 4. Recalculate your disk capacity to determine whether you have made sufficient interim space available. 5. If you have enough space, perform the disk configuration operations. 6. Restore the objects that you deleted. 5.3.8 Auxiliary storage pools: Example uses The following list explains how ASPs are used to manage system performance and backup requirements: • You can create an ASP to provide dedicated resources for frequently used objects, such as journal receivers. 70 High Availability on the AS/400 System: A System Manager’s Guide • You can create an ASP to hold save files. Objects can be backed up to save files in a different ASP. 
It is unlikely that both the ASP that contains the object and the ASP that contains the save file will be lost. • You can create different ASPs for objects with different recovery and availability requirements. For example, you can put critical database files or documents in an ASP that has mirrored protection or device parity protection. • You can create an ASP to place infrequently used objects, such as large history files, on disk units with slower performance. • You can use ASPs to manage recovery times for access paths for critical and noncritical database files using system-managed access-path protection. 5.3.9 Auxiliary storage pools: Benefits Placing objects in user ASPs can provide several advantages, including: • Additional data protection: By separating libraries, documents, or other objects in a user ASP, you protect them from data loss when a disk unit in the system ASP or other user ASPs fails. For example, if you have a disk unit failure, and data contained on the system ASP is lost, objects contained in user ASPs are not affected and can be used to recover objects in the system ASP. Conversely, if a failure causes data that is contained in a user ASP to be lost, data in the system ASP is not affected. • Improved system performance: Using ASPs can also improve system performance. This is because the system dedicates the disk units that are associated with an ASP to the objects in that ASP. For example, suppose you are working in an extensive journaling environment. Placing libraries and objects in a user ASP can reduce contention between the journal receivers and files if they are in different ASPs. This improves journaling performance. However, placing many active journal receivers in the same user ASP is not productive. The resulting contention between writing to more than one receiver in the ASP can slow system performance. For maximum performance, place each active journal receiver in a separate user ASP. • Separation of objects with different availability and recovery requirements: You can use different disk protection techniques for different ASPs. You can also specify different target times for recovering access paths. You can assign critical or highly used objects to protected high-performance disk units. You may assign large low-usage files, like history files, to unprotected low-performance disk units. 5.3.10 Auxiliary storage pools: Costs and limitations There are some specific limitations that you may encounter when using auxiliary storage pools (ASPs): • The system cannot directly recover lost data from a disk unit media failure. This situation requires you to perform recovery operations. • Using ASPs can require additional disk devices. • Using ASPs requires you to manage the amount of data in an ASP and avoid an overflowed ASP. • You need to perform special recovery steps if an ASP overflows. • Using ASPs requires you to manage related objects. Some related objects, such as journals and journaled files, must be in the same ASP. Auxiliary storage pools (ASPs) 71 5.4 System ASP The system automatically creates the system ASP (ASP 1). This contains disk Unit 1 and all other configured disks that are not assigned to a user ASP. The system ASP contains all system objects for the OS/400 licensed program and all user objects that are not assigned to a user ASP. Note: You can have disk units that are attached to your system but are not configured and are not being used. These are called non-configured disk units. 
There are additional considerations that you should be aware of regarding the capacity of the system ASP and protecting your system ASP. These are explained in the following sections.
5.4.1 Capacity of the system ASP
If the system ASP fills to capacity, the system ends abnormally. If this occurs, you must perform an IPL of the system and take corrective action (such as deleting objects) to prevent this from recurring. You can also specify a threshold that, when reached, warns the system operator of a potential shortage of space. For example, if you set the threshold value at 80 for the system ASP, the system operator message queue (QSYSOPR) and the system message queue (QSYSMSG) are notified when the system ASP is 80% full. A message is sent every hour until the threshold value is changed, or until objects are deleted or transferred out of the system ASP. If you ignore this message, the system ASP fills to capacity and the system ends abnormally. A third method for preventing the system ASP from filling to capacity is to use the QSTGLOWLMT and QSTGLOWACN system values.
5.4.2 Protecting your system ASP
IBM recommends that you use device parity protection or mirrored protection on the system ASP. Using disk protection tools reduces the chance that the system ASP will lose all data. If the system ASP is lost, addressability to objects in every user ASP is also lost. You can restore the addressability by restoring the entire system or by running the Reclaim Storage (RCLSTG) command. However, the RCLSTG command cannot recover object ownership. After you run the command, the QDFTOWN user profile owns all objects found without ownership intact. You can use the Reclaim Document Library Object (RCLDLO) command procedure to recover ownership of document library objects.
5.5 User ASPs
Grouping a set of disk units together and assigning that group to an auxiliary storage pool (ASP) creates a user ASP. You can configure user ASPs 2 through 16. They can contain libraries, documents, and certain types of objects. There are two types of user ASPs:
• Library user ASPs
• Non-library user ASPs
Once you have ASPs configured, you should protect them by using mirroring or device parity protection.
5.5.1 Library user ASPs
Library user ASPs contain libraries and document library objects (DLOs). It is recommended that you use library user ASPs because the recovery steps are easier than with non-library user ASPs. You should be familiar with the following rules regarding library user ASPs:
• Do not create system or product libraries (libraries that begin with a Q or #) or folders (folders that begin with a Q) in a user ASP. Do not restore any of these libraries or folders to a user ASP. Doing so can cause unpredictable results.
• Library user ASPs may contain both libraries and document library objects. The document library for a user ASP is called QDOCnnnn (here, nnnn is the number of the ASP).
• Journals and files that are being journaled must be in the same ASP. Place the journal receivers in a different ASP. This protects against the loss of both the files and the receivers if a disk media failure occurs.
• Journaling cannot be started on an object (STRJRNPF or STRJRNAP command) if the journal (object type *JRN) and the object to be journaled are in different ASPs.
• Journaling cannot be started again for a file that is saved and then restored to a different ASP that does not contain the journal.
The journal and the file must be in the same ASP for journaling to be automatically started again for the file.
• No database network can cross ASP boundaries.
• You cannot create a file in one ASP that depends on a file in a different ASP. All based-on physical files for a logical file must be in the same ASP as the logical file. The system builds access paths only for database files in the same ASP as the based-on physical file (temporary queries are not limited). Access paths are never shared by files in different ASPs. Record formats are not shared between different ASPs. Instead, a format request is ignored and a new record format is created.
• You can place an SQL collection in a user ASP. You specify the target ASP when you create the collection.
• If the library user ASP does not contain any database files, set the target recovery time for the ASP to *NONE. This would be true, for example, if the library user ASP contains only libraries for journal receivers. If you set the access path recovery time to *NONE, this prevents the system from doing unnecessary work for that ASP. The Backup and Recovery Guide, SC41-5304, describes how to set access path recovery times.
Non-library user ASPs
Non-library user ASPs contain journals, journal receivers, and save files whose libraries are in the system ASP. If you are assigning access path recovery times for individual ASPs, you should set the target recovery time for a non-library user ASP to *NONE. A non-library user ASP cannot contain any database files and cannot, therefore, benefit from SMAPP. If you set an access path recovery time for a non-library user ASP to a value other than *NONE, the system performs extra work with no possible benefit. The Backup and Recovery Guide, SC41-5304, describes how to set access path recovery times.
Using user ASPs also means deciding how to protect them. Keep the following points in mind regarding user ASP protection:
• All ASPs, including the system ASP, should have mirrored protection or consist entirely of disk units with device parity protection to ensure that the system continues to run after a disk failure in an ASP.
• If a disk failure occurs in an ASP that does not have mirrored protection, the system may not continue to run. This depends on the type of disk unit and the error.
• If a disk failure occurs in an ASP that has mirrored protection, the system continues to run (unless both storage units of a mirrored pair have failed).
• If a disk unit fails in an ASP that has device parity protection, the system continues running as long as no other disk unit in the same device parity set fails.
• System limits are set for auxiliary storage. During an IPL, the system determines how much auxiliary storage is configured on the system. The total amount is the sum of the capacity of the configured units and their mirrored pairs (if any). Disk units that are not configured are not included. The amount of disk storage is compared to the maximum that is supported for a particular model.
• If more than the recommended amount of auxiliary storage is configured, a message (CPI1158) is sent to the system operator’s message queue (QSYSOPR) and the QSYSMSG message queue (if it exists on the system). This message indicates that too much auxiliary storage exists on the system. This message is sent once during each IPL as long as the amount of auxiliary storage on the system is more than the maximum amount supported.
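The storage warnings described above are only useful if they actually reach someone. A minimal sketch follows, assuming QSYSMSG does not yet exist on the system and that the chosen limit and action suit your environment (verify the exact value formats for your release):

/* Create QSYSMSG so severe messages such as CPI1158 are also sent here */
CRTMSGQ    MSGQ(QSYS/QSYSMSG) TEXT('Severe system messages')
/* Warn when free space in the system ASP drops below 5 percent         */
CHGSYSVAL  SYSVAL(QSTGLOWLMT) VALUE('5.0')
/* Send a message (rather than ending the system) when the limit is hit */
CHGSYSVAL  SYSVAL(QSTGLOWACN) VALUE('*MSG')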
74 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 75 Chapter 6. Networking and high availability One of the major items to consider for availability is the network. When planning for a network, capacity and accessibility are addressed, just as capacity and accessibility are planned for the system itself. Your company needs a stable computing environment to fuel its business growth and to sharpen its advantage in a competitive marketplace. When major hubs or routers are down, users have difficulty accessing key business applications. The ability to carry on business, such as enrolling new members to a bank, or to respond to member inquiries, is impacted. To minimize this effect, aim for stringent metrics (for example, 99.5 percent availability of the operational systems). Ultimately, the network must be sound and recoverable to support core business applications. Employ a carefully planned comprehensive network management solution that is: • Scalable and flexible, no matter how complicated the task • Provides around-the-clock availability and reliability of applications to users, no matter where they’re located • Capable of building a solid network foundation, no matter how complex the system This chapter comments on the various network components of an HA solution and how they can affect the overall availability. 6.1 Network management Networks typically comprise a wide variety of devices, such as hubs, routers, bridges, switches, workstations, PCs, laptops, and printers. The more different components and protocols there are in a network, the more difficult it is to manage them. Problem detection and response can be at a local or remote level. Tools can help you correct problems at the source before users are affected and minimize downtime and performance problems. Management tools provide a detailed view of the overall health and behavior of your network's individual elements. These elements, such as routers, servers, end systems, data, event flow, and event traffic are useful to record. When network problems occur, use management tools to quickly identify and focus on the root cause of the error. The focal point of control is where all network information is viewed to provide a complete picture of what is happening. This console or server provides monitoring and alert actions, as well as maintenance capabilities. With proper network management, the time to respond to and resolve a network error is reduced. 76 High Availability on the AS/400 System: A System Manager’s Guide 6.2 Redundancy Availability is measured as the percentage of time that online services for a critical mass of end users can function at the end-user level during a customer's specified online window. When systems are combined into a cluster, the supporting network must be able to maintain the same level of availability as the cluster nodes. Communication providers must deliver guarantees that they will be available for, and have sufficient capacity for, all possible switch scenarios. There must be alternate network paths to enable the cluster services to manage the cluster resources. Redundant paths can prevent a cluster partition outage from occurring. Redundant network connections are illustrated in Figure 15. Figure 15. Redundant network connections As shown in Figure 15, the adapter card can be a LAN, WAN, or ATM adapter. Duplicates are installed, as well as duplicate memory and disk units. Indeed, System A serves as a duplicate system to System B, and vice versa. 
When a network connection between two machines that are part of the business process does not function, it does not necessarily mean the business process is in danger. If there is an alternative path and, in case of a malfunction, the alternative path is taken automatically, monitoring and handling these early events ensures minimal disruption.
6.3 Network components
Protocol, physical connection, the hardware and software needed, the backup options available, and network design are all factors to consider for a highly available network. Components involved in a network include:
• Protocols:
– Bisynchronous
– Asynchronous
– Systems Network Architecture (SNA): The predominant protocol for connecting terminals to host systems. SNA has strong support for congestion control, flow control, and traffic prioritization. It is known to provide stable support for large and complex networks. However, SNA is not capable of being routed natively across a routed network. Therefore, it must be encapsulated to flow across such a network.
– Transmission Control Protocol/Internet Protocol (TCP/IP): TCP/IP is the protocol used to build the world’s largest network, the Internet. Unlike SNA, all hosts are equal in TCP/IP. The protocol can be carried natively through a routing network. TCP/IP is the de facto published standard. This allows many vendors to interoperate between different machines. Poor congestion control, flow control, and traffic prioritization, and a lack of controls, make it difficult to guarantee a response time over a wide area network.
– Internetwork Packet Exchange (IPX): Developed for NetWare. NetWare is a network operating system and related support services environment introduced by Novell, Inc. It is composed of several different communication protocols. Services are provided in a client/server environment in which a workstation (client) requests and receives the services that are provided by various servers on the network. Since this is usually a secondary protocol, the requirements need to be analyzed. However, they may need to be considered for transport only.
– Communication lines: Communication lines support the previously described protocols. Special interfaces may be required to connect to the AS/400e system, remote controllers, personal computers, or routers. The type and speed of the line depend on the performance and response time requirements. Considerations for such facilities include:
• Leased: A private set of wires from the telephone company. These wires can be:
– Point-to-point: Analog or digital
– Multi-point: The phone company provides the connection of multiple remote sites through the central telephone offices
• Integrated Services Digital Network (ISDN): Basic or primary access
• Frame relay: An international standard. The re-routing of traffic is the responsibility of the frame relay provider. For a fail-safe network, all endpoints should have a dial backup or ISDN connection.
• Virtual Private Networks (VPN): Utilize a service provider that provides encryption and uses the Internet network for the transmission of data to the other site. VPN allows secure transfers over TCP/IP (IPSec) and multiprotocol (L2TP) networks.
• Multiprotocol Switched Services: ATM or LAN • Hardware: A factor that may limit the speed in which communication lines are connected. These factors include: 78 High Availability on the AS/400 System: A System Manager’s Guide – Controllers: Manage connections for legacy terminals and personal computers with twinax support cards – Bridge – Routers: Allows encapsulation of SNA data into TCP/IP and routes it over a communication network – Frame Relay Access Devices (FRADs): Used to buffer SNA devices from a frame relay network. It also channels SNA, BSC, Asynch, and multiprotocol LAN traffic onto a single frame relay permanent virtual circuit and eliminates the need to have separate WAN links for traditional and LAN traffic. – Terminal Adapter: Used in an ISDN network to buffer the device from the physical connection – Modems: Used for analog lines – DSU and CSU: Used for digital lines – Switches: WAN switches for networks carrying high volumes of traffic at a high speed. These are networks that cope with both legacy and new emerging applications in a single network structure. The network mixes data, voice, image, and video information, and transports different traffic types over a single bandwidth channel to each point. • Software: When the network consists of LANS, routers, and communication lines, a management tool such as Operation Control/400 with Managed System Services/400, Nways Workgroup Manager, or IBM Netfinity is advised to help manage the network. From routers to remote controllers, review all network components. If there are critical requirements for remote access, you must provide alternative network paths. This can be as simple as a dialup link or as complex as a multi-path private network. 6.4 Testing and single point of failure The problem with managing a complex computing system is that anything can go wrong with any components at any time. In any development shop, applications are tested. Network, hardware, and external link recovery should be tested with the same emphasis. Just as planning for redundancy, recovery, are a must for high availability, so is testing possible scenarios. In a highly available environment, testing is particularly more critical. By employing a highly available solution, a business makes a serious commitment to exceptional levels of reliability. These levels not only rely on hardware and software, but they can only be achieved with stringent problem and change management. Identified problems must be replicated and a change must be put in place and tested before further presenting a further production risk. Testing should involve obvious things (such as hardware, network, and applications), and less obvious components (such as the business processes associated with the computer systems, facilities, data, and applications from external providers, as well as accurately documented job responsibilities). Consider your implementation test environment and your ongoing problem or change test environment when implementing a highly available solution. A Networking and high availability 79 second system or separate LPAR partition can be used for operating system and application test environments. Figure 16 illustrates a very simple HA environment. There is a two-node cluster or replication environment. A separate system is available for development that has no links to the cluster. Development cannot test changes in a clustered environment. To enable tested changes to be made to the cluster, a fourth system is added. 
Changes can then be tested on the development systems before moving these changes into production.
Figure 16. Cluster test scenario
Note: Figure 16 illustrates a basic customer setup. The routers, LANs, WANs, and so on, required to simulate the real environment are not shown.
Creating a separate cluster with two smaller systems meets most testing needs. However, some hardware or peripherals may not work with smaller systems. The production environment can only truly be tested with actual and careful function duplication. Testing needs to involve a planned switchover, a failover (unplanned switchover), the rejoin of the systems, and adding a system to the cluster. Be sure to re-test these scenarios after each system upgrade and any major change to any of the cluster components. A single network adapter card in a server system that works in a client/server environment is a single point of failure (SPOF) for this server. Likewise, a single SCSI adapter connecting to an external storage system is an SPOF. If a complete server fails within a group of several servers, and the failed server cannot be easily and quickly replaced by another server, this server is an SPOF for the server group or cluster. The straightforward solution is that adapter cards can be made redundant by simply doubling them within a server and making sure a backup adapter becomes active if the primary one fails. CPU, power supplies, and other parts can be made redundant within a server too. However, this requires special parts that are not very common in the PC environment and are, therefore, quite expensive. However, in cooperation with a software agent (the HA software), two or more servers in an HA cluster can be set up to replace each other in case of a node outage. When cluster nodes are placed side-by-side, a power outage or a fire can affect both. Consequently, an entire building, a site, or even a town (earthquakes or other natural disasters) can be an SPOF. Such major disasters can be handled by simply locating the backup nodes at a certain distance from the main site. This can be very costly, and IT users must be careful to evaluate their situation and decide which SPOFs to cover. General considerations for testing a high availability environment are described in Appendix F, “Cost components of a business case” on page 169.
6.5 Hardware switchover
A hardware switchover may occur in case of a planned role swap or an unplanned role swap (for example, a system outage). The role swap typically involves 5250 device switching from the failed system to the backup system.
Note: A hardware switchover can also be called a role swap.
Figure 17 shows an example IP address assignment for two systems in a cluster preparing for a hardware switchover.
Figure 17. IP address takeover between two systems in a cluster
In a clustered environment, the typical tasks for enabling a hardware switchover are:
1. Quiesce the system:
a. Remove users from the production system.
b. Ensure all jobs in the production job queues have completed.
c. Hold those job queues.
d. Check synchronization of database transactions.
e.
End subsystems.
2. End high availability applications on the source machine. Make sure all journal receiver entries are complete.
3. End high availability applications on the target machine.
4. Switch between the source and target systems.
5. Switch the network to the target system.
6. Start application and transaction mirroring in reverse mode on the target system.
7. Connect users to the target system.
8. Make necessary changes and updates to the source system and start that system.
9. Switch the roles of the source and target systems again.
10. Switch the network.
11. Start mirroring in normal mode.
Note: Switchover capabilities are enhanced at V5R1. However, they are not covered in this Redpaper.
6.6 Network capacity and performance
As business requirements have evolved, a greater dependency has been put on information technology (IT) strategies to remain competitive in the marketplace. More and more, the marketplace means an e-business environment. Network performance directly corresponds to business performance. This increased reliance on the network creates the need for high availability within the infrastructure. New and evolving applications, such as interactive white boarding, video conferences, and collaborative engineering, have significantly increased the need for bandwidth capacity. In addition to bandwidth requirements, these applications create very different traffic patterns from traditional client/server applications. Many of these new applications combine voice, video, and data traffic and have driven the convergence of these infrastructures. As this convergence occurs, the volume of time-sensitive traffic increases. Instantaneous rerouting of traffic must occur to maintain the integrity of the application. Reliability and fault tolerance are critical to maintain continuous network operations.
6.7 HSA management considerations with networking
Network management is closely associated with network performance, particularly in regard to HSA and Continuous Operations support and in a mission-critical application environment. After the network is set up, monitor it to be sure it runs at maximum efficiency.
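One simple way to monitor the TCP/IP side of the network from the AS/400 itself is with the standard status commands; the host name below is only an example:

/* Status of the TCP/IP interfaces (active or inactive)  */
NETSTAT    OPTION(*IFC)
/* Status of the routes                                   */
NETSTAT    OPTION(*RTE)
/* Verify that the backup system can be reached           */
PING       RMTSYS('backupsys.mycompany.com')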
Scenario with Tivoli network management in action This section illustrates a fictitious example of what a network management tool can do. It is noon, the peak traffic time for an international travel agency Web site. Thousands of customers worldwide access the site to book airline, hotel, and car reservations. Without warning, the Web site crashes, literally shutting down the agency’s lucrative e-business trade and closing the doors to customers. With each passing minute, tension mounts because the company stands to lose millions of dollars in revenue. Tivoli instantly sifts through enormous amounts of data for the root cause of the problem. Moments later, it pinpoints the source of the problem: a failing router. Immediately, the company’s e-business traffic is automatically rerouted through an auxiliary router. Meanwhile, system administrators recycle the router and resolve the problem. Downtime is minimized, customers remain satisfied, and a near-disaster is averted. The agency is back online and back in business. This example illustrates how a network management tool like Tivoli improves system availability. 6.8 Bus level interconnection Bus level interconnetion consists of hardware and software support between two AS/400 systems at the application level. This path provides a high performance bandwidth for client systems requiring local database access time. By using this feature of the AS/400 system, many customers have been able to use multiple systems to work as a single application image to clients. The SAP R/3 Networking and high availability 83 application, with its three-tier configuration and OptiConnect, is an example of how bus level interconnection is used for this purpose. Figure 18 depicts a three-tiered configuration showing the bus level interconnection arrangement with two bus owners (referred to as a hub). This arrangement provides redundancy for the shared bus path. Figure 18. Three-tiered configuration: OptiConnect assignments In a shared bus redundancy arrangement, it is not necessary for the application servers to be the bus owners because there is no technical reason why the database servers can’t assume that role. The only rule you must adhere to with this arrangement is that either application or database servers must perform this role as a pair. You can’t have one application server and one database server assume the bus owner role in the redundancy arrangement. If you choose a primary application and a database server for this assignment, there may come a time when both those systems must go through some sort of dedicated process or a power sequence. Application Server SAP Client Host: IP@2 SAP Client Host: IP@2 Backup Application Server Database Server Backup Database Server Client LAN/WAN IP@2 IP@3 IP@2 (inactive) Bus Owner 1 Sysnam=APPL1 Bus Owner 2 Sysnam=APPL2 84 High Availability on the AS/400 System: A System Manager’s Guide This operation can not be allowed to occur because any DDM and SQL procedures through OptiConnect (for example) cease and would, therefore, affect availability of the backup systems when continuous operations is required for the application user. When the bus owner goes to a restricted state or power sequence, all OptiConnect traffic between remote systems stops. Note: HSL OptiConnect is introduced at V5R1. However, it is not covered in this Redpaper. 
6.8.1 Bus level interconnection and a high availability solution Besides providing high-speed data throughput, bus level interconnection allows OMS/400 to transport data to the backup system efficiently and quickly so that exposure to data loss is very minimal. One shared bus between an application server and two database server systems can accommodate the data requests from the application server. This is done on behalf of the clients to the primary database server while concurrently supporting the data mirroring path between the primary database server and backup database server. However, better resiliency in this example would be provided if both database servers each supported a shared bus to the application server providing a dual bus path (also referred to as shared bus redundancy). OMS/400 supports the use of OptiConnect for data replication to the backup system. When configuring your application database library in OMS/400, the specification for Opticonnect requires only the System Name from the OS/400 Network Attribute System Name. All pointers to the remote system and the apply processes are built by the configuration process. 6.8.2 TCP/IP This protocol is also supported by OMS/400. When OptiConnect cannot be used, or is not in operation, this path can also serve as a route for data replication. Primarily, this path is used by SAM/400 for verification of the primary database server’s operation and signals the redirection of the primary database system’s IP address to the backup database server system in the event of an unplanned outage. In the application environment, it is the path that the client uses for application requests from the application server and the path used between the application server and database server to access stream files prevalent in this environment (also called the access network). Networking and high availability 85 Figure 19. Three-tiered configuration: IP assignments Figure 19 shows the suggested ethernet LANs for supporting client traffic through the access network (Client LAN/WAN) path. This occurs while operations, development, and SAM/400 use the AS/400 LAN for additional traffic for testing the availability of the production system. Note: In relationship to Figure 19, the OptiConnect arrangement has been removed from the diagram to keep it uncluttered. However, you can still assume that all four AS/400 systems are connected in that manner. The LANs can be bridged for network redundancy. However, security would have to be implemented to keep client traffic out of the AS/400 LAN because of direct accessibility to the database servers. It should be noted that SAM/400 can use all of these protocols in combination to test the validity of an unplanned outage. Figure 19 shows several IP address assignments. This is designed to keep the client interface consistent with one address/host name. In the event of a role swap with an application server, the client operator does not need to be concerned with configuration tasks and does not need to use a second icon on their desktop for access to a backup system. Application Server SAP Client Host: IP@2 SAP Client Host: IP@2 Backup Application Server Database Server Backup Database Server Client LAN/WAN AS/400 LAN IP@2 IP@3 IP@2 (inactive) IP@4 IP@6 (no name) IP@5 IP@4 (inactive) IP@7 IP@8 86 High Availability on the AS/400 System: A System Manager’s Guide For the application server that must use a backup database server, it also uses the same IP address as that of the primary database server. 
The role swap procedures outlined in SAM/400 manages the ending and starting of different interfaces. This option facilitates the identification of each system in its new role throughout the LAN. Note that the respective backup systems have inactive IP addresses (as reflected in the interface panel of the TCP configuration menu). These addresses are started during the role swap procedure when the backup system becomes the production environment. The additional IP address for the primary database server is not generally assigned a host name which is the case with other addresses. This ensures that the address is not being used by the clients through the access network. The main purpose is for the reversed role the primary database server plays when it has stopped operating (for planned or unplanned outages) and it must now be synchronized with the backup database server. Naturally, for continuous operations, the business moves to the backup database server and the database located there begins processing all transactions while the primary database server system is being attended to. After awhile, the primary database server files become aged and must be refreshed with the current state of business before returning to operation. One method is to save the files from the backup system to the production system. However, this method would impact the user’s availability while they are still using the backup database server. OMS/400 is designed to reverse the roles of the primary and backup system to reversed target and reversed source respectively. That means the backup system can capture and send the database changes back to the production system so that it can catch up and be current with the business operation. The function that the additional IP address plays here is that its allows the system to connect to the AS/400 LAN (as shown in Figure 19) without having to use its normal interface (at this time, it is inactive) when TCP services are required. All this occurs while the business is still using the backup system to run its applications. Once the systems are equal, the user can then return them to their original roles by scheduling a planned role swap. Usually, you would select a time of day when activity is low or when work shifts are changing. © Copyright IBM Corp. 2001 87 Chapter 7. OS/400: Built-in availability functions This chapter reviews existing OS/400 functions, specifically those that enhance system availability, databases, and applications. These functions form the foundation of the AS/400 system. Note: This chapter only summarizes these functions. The System Administrator’s Companion to AS/400 Availability and Recovery, SG24-2161, provides more details on OS/400 functions designed for availability. This chapter also outlines clustering and LPAR. Introduced in OS/400 V4R4, these features provide system redundancy and expand availability and replication options. 7.1 Basic OS/400 functions Some availability functions for the AS/400 system were architected into the initial design (some were carried over from the preceding System/38). Some of these functions include: • Auxiliary storage pools (ASPs) • Journaling • Commitment control These functions were delivered when the system main storage and processing power were small in comparison to the resource needs of the applications that enabled them. Therefore, many applications were developed without these functions. As these applications grew in functionality and user base, the task of enabling the functions grew quickly. 
Now, newer functions are available to provide nearly 100% availability. These new functions are built on the historic functions, which many applications have never enabled. Therefore, application developers must revisit their applications and enable the historic functions before they can take advantage of the new ones. ASPs are discussed in Chapter 5, “Auxiliary storage pools (ASPs)” on page 63. Journaling and commitment control are discussed in this section. 7.1.1 Journaling Journals define which files and access paths to protect with journal management. This is referred to as journaling a file or an access path. A journal receiver contains journal entries that the system adds when events occur that are journaled, such as changes to database files. This process is shown in Figure 20 on page 88. Figure 20. The journaling process Journaling is designed to prevent transactions from being lost if your system ends abnormally or has to be recovered. Journal management can also assist in recovery after user errors. Journaling provides the ability to roll back from an error to a stage prior to when the error occurred if both the before and after images are journaled. To recover, the restore must be done in the proper order: 1. Restore your backup from tape. 2. Apply journaled changes. We recommend that you keep a record of which files are journaled. Use journal management to recover changes to database files that have occurred since your last complete save. A thorough discussion of journaling is beyond the scope of this chapter. For more information about journaling, refer to OS/400 Backup and Recovery, SC41-5304. 7.1.2 Journal receivers with a high availability business partner solution Business partner high availability solutions incorporate the use of journaling. This section explains how journal receivers are used with OMS/400 and ODS/400. OMS/400 transmits the images of database records recorded in the receivers for use by the apply jobs on the target database server. ODS/400, on the other hand, uses the entries recorded in the OS/400 audit journal (QAUDJRN) to tell it what object operations must be performed on the target AS/400 system. The issue with journal receivers is that, when transactions occur in the applications and objects are manipulated, the resulting entries placed in the receivers make them grow in size until they reach the maximum threshold of 1.9 GB. Reaching the maximum threshold must be avoided because the business applications and system functions that use the journaled objects cease operation. They do not resume operation until the filled receiver has been detached from the journal and a new receiver is created and attached. Typically, the AS/400 user can take advantage of the system’s ability to manage the growth and change of receivers while the applications are running, or they may elect to write their own management software. However, either solution overlooks one important aspect: synchronization with the reader function that transmits the journal entries to a backup system (OMS/400) or performs an object function (ODS/400) on behalf of the backup system.
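For reference, the detach/attach operation itself is a single CL command; a minimal sketch follows (the journal and library names are illustrative):

/* Detach the attached receiver and attach a new, system-generated one */
CHGJRN  JRN(APPLIB/APPJRN) JRNRCV(*GEN)

/* The detached receiver remains on the system. Do not delete it with  */
/* DLTJRNRCV until the replication reader has processed all of its     */
/* entries, as discussed next.                                          */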
If, for any reason, the production system cannot communicate the data and object changes to the backup system, no receiver can be deleted until replication resumes and all journal entries have been processed by the high availability application. Therefore, if the AS/400 user elects either of these options to manage receivers, they could inadvertently delete those receivers before their contents were interrogated by the high availability software for replication and synchronization purposes. Placing journal receivers in user ASPs minimizes the impact journaling may place on some application environments, especially if journaling has never been used in a given environment. Since the write I/O changes from asynchronous (with regard to program execution) to synchronous (where the program producing the write activity must actually wait for the record to be written to the journal receiver), latency is introduced and program elapsed time may increase. This result can be seen in batch applications that produce many significant write operations to a database being journaled. Using user ASPs only for the journal receivers allows for quick responses to the program from the DASD subsystems. The only objects being used in that ASP are the journal receivers, and the most typical operation is write I/O. Therefore, the arms are usually positioned directly over the cylinders for the next contiguous space allocation when a journal receiver extent is written to the disk. Obviously, following this recommendation places more responsibility on the user for managing receivers and the DASD space they utilize. The associated journal must remain in the same ASP as the database it is recording. To implement the user ASP solution, create a new library in the user ASP and, during the next Change Journal (CHGJRN) operation, specify the new receiver to be qualified to that library. Note: In such a case, Vision Suite recommends the use of its Journal Manager. This Vision Suite feature changes journal receivers based on a policy established by the user. It also coordinates between the reader jobs for replication and the user’s requirements to free auxiliary storage space and save receivers offline. Once it has been implemented, the user’s interface for this requirement is simply to observe the Work with Disk Status (WRKDSKSTS) display and monitor the user auxiliary storage pool (ASP) utilization threshold. 7.2 Commitment control Commitment control is an extension of journal management. It allows you to define and process a group of changes to resources, such as database files or tables, as a logical unit of work (LUW). Logically, to the user, the commitment control group appears as a single change. To the programmer, the group appears as a single transaction. A single transaction may update more than one database file and, when the system is in a network, may update files on more than one system. Commitment control helps ensure that all changes within the transaction are completed for all affected files. If processing is interrupted before the transaction is completed, all changes within the transaction are removed. Without commitment control, recovering data for a complex program requires detailed application and program knowledge. Interrupted programs cannot easily be restarted.
To restore the data up to the last completed transaction, typically a user program or utility, such as a Data File Utility (DFU), is required to reverse incomplete database updates. This is a manual effort, it can be tedious, and it is prone to user error. Commitment control ensures that either the entire group of individual changes occur on all participating systems, or that none of the changes occur. It can assist you with keeping your database files synchronized. 7.2.1 Save-while-active with commitment control Using the save-while-active function while commitment control processing is active requires additional consideration. When an object is updated under commitment control during the checkpoint processing phase of a save-while-active request, the system ensures that the object is saved to the media at a commitment boundary. All objects that have reached a checkpoint together are saved to the media at the same common commitment boundary. It is important to make sure that all performance considerations have been correctly implemented in this situation. Otherwise, the system may never be able to reach a commitment boundary. It may not be able to obtain a checkpoint image of the objects to be saved. Procedures need to be specified to ensure that all of the objects reach a checkpoint together and all of the objects are saved in a consistent state in relationship to each other. If the checkpoint versions of the objects are not at an application boundary, user-written recovery procedures may still be necessary to bring the objects to an application boundary. Refer to OS/400 Backup and Recovery, SC41-5304 for details on coding for commitment control and save-while-active. 7.3 System Managed Access Path Protection (SMAPP) An access path describes the order in which records in a database files are processed. A file can have multiple access paths if different programs need to see the records in different sequences. If your system ends abnormally when access OS/400: Built-in availability functions 91 paths are in use, the system may have to rebuild the access paths before you can use the files again. This is a time-consuming process. To perform an IPL on a large and busy AS/400 system that has ended can take many hours. You can use journal management to record changes to access paths. This greatly reduces the amount of time it takes the system to perform an IPL after it ends abnormally. Access path protection provides the following benefits: • Avoids rebuilding access paths after most abnormal system ends • Manages the required environment and makes adjustments as the system changes if SMAPP is active • Successful even if main storage cannot be copied to storage Unit 1 of the system ASP during an abnormal system end • Generally faster and more dependable than forcing access paths to auxiliary storage for the files (with the FRCACCPTH parameter) The disadvantages of access path protection include: • Increases auxiliary storage requirements • May have an impact on performance because of an increase in the activity of the disks and processing unit • Requires file and application knowledge for recovery. There is a small additional processor overhead if *RMVINTENT is specified for the RCVSIZOPT parameter for user-created journals. However, the increase in storage requirements for access path journaling is reduced by using *RMVINTENT. • Normally requires a significant increase in the storage requirements for journaling files. 
The increase with SMAPP is less than when access paths are explicitly journaled. Two methods of access-path protection are available: • OS/400 System Managed Access Path Protection (SMAPP) • Explicit journaling of access paths 7.4 Journal management You can use journal management to recover the changes to database files or other objects that have occurred since your last complete save. Use a journal to define what files and access paths you want to protect with journal management. This is often referred to as journaling a file or an access path. A journal receiver contains the entries (called journal entries) that the system adds when events occur that are journaled, such as changes to database files, changes to other journaled objects, or security-relevant events. Use the remote journal function to set up journals and journal receivers on a remote AS/400 system. These journals and journal receivers are associated with journals and journal receivers on the source system. The remote journal function allows you to replicate journal entries from the source system to the remote system. The main purpose of journal management is to assist in recovery. You can also use the information that is stored in journal receivers for other purposes, such as: • An audit trail of activity that occurs for database files or other objects on the system • Assistance in testing application programs. You can use journal entries to see the changes that were made by a particular program. Figure 21 shows the steps involved for journaling. Figure 21. The Journaling Process 7.4.1 Journal management: Benefits Benefits of journal management can include: • A reduction in the frequency and amount of data saved • Improved ability and speed of recovery from a known point to the failure point • File synchronization if the system ends abnormally 7.4.2 Journal management: Costs and limitations Disadvantages of journal management include: • An increase in auxiliary storage requirements • A possible impact on performance because of an increase in the activity of disks and the processing unit • A requirement for file and application knowledge for recovery Refer to OS/400 Backup and Recovery, SC41-5304, for further information. 7.5 Logical Partition (LPAR) support In an n-way symmetric multiprocessing iSeries or AS/400e server, logical partitions allow you to run multiple independent OS/400 instances or partitions, each with its own processors, memory, and disks. Note: With OS/400 V5R1, a single processor can be sliced for sharing across partitions. You can run a cluster environment on a single system image.
With logical partitioning, you can address multiple system requirements in a single machine to achieve server consolidation, business unit consolidation, and mixed production and test environments. Figure 22 shows an LPAR configuration with resources shared across partitions. Figure 22. Example LPAR configuration Each logical partition represents a division of resources in your AS/400e system. Each partition is logical because the division of resources is virtual rather than physical. The primary resources in your system are its processors, memory (main storage), I/O buses, and IOPs. An LPAR solution does not offer a true failover capability for all partitions. If the primary partition fails, all other partitions also fail. If there are multiple secondary partitions backing each other up, they have the capability to failover between partitions. These secondary partitions are nodes and are a cluster solution. However, they are not a separate server implementation. LPAR cannot provide the same level of availability as two or more node cluster solutions. 830 4-way 2 GB memory A 2-way 1 GB B 2-way 1 GB 94 High Availability on the AS/400 System: A System Manager’s Guide See 4.12, “LPAR hardware perspective” on page 58, for discussion of LPAR from a hardware perspective. 7.6 Cluster support and OS/400 The ultimate availability solution consists of clustered systems. OS/400 V4R4 introduced clustering support. This support provides a common architected interface for application developers, iSeries software providers, and high availability business partners to use in-building high availability solutions for the iSeries and AS/400 server. The architecture is built around a framework and is the foundation for building continuous availability solutions for both the iSeries and AS/400 servers. Clustering provides: • Tools to create and manage clusters, the ability to detect a failure within a cluster, and switchover and failover mechanisms to move work between cluster nodes for planned or unplanned outages • A common method for setting up object replication for nodes within a cluster (this includes the data and program objects necessary to run applications that are cluster-enabled) • Mechanisms to automatically switch applications and users from a primary to a backup node within a cluster for planned or unplanned outages Clustering involves a set of system APIs, system services, and exit programs as shown in Figure 23. Data replication services and the cluster management interface are provided by IBM HABPs. How clustering works, and planning for clusters, is described in AS/400 Clusters, A Guide to Achieving Higher Availability, SG24-5194. Cluster Management Provided by HABPs Data Resiliency Replication technology provided by HABPs Cluster Resource Services Base OS/400 cluster functions from IBM APIs Application Resiliency High availability cluster enabled applications Figure 23. Cluster overview © Copyright IBM Corp. 2001 95 Chapter 8. Performance Performance is an availability issue from several view points. First and foremost, to the end user, poor performance can be significantly more frustrating than a system outage. Poor performance can even result in lost sales. An example would be a customer calling to check the availability or price of an item. If the response of the service representative (or a Web-enabled application) is too long, the customer looks elsewhere. 
Poor performance can be caused by any number of factors, which can be any computing component between the end-user incoming request and the delivery of the response. The timing is affected by the performance of the communication links, routers, hubs, disk arms (service time), memory, and the CPU. Service agreements can hold both parties accountable, with the parties being the service provider (you the business), and the receiver or requestor of the service (the customer). Do not delay the planning for performance for your high availability environment until after implementing the system and applications. Plan for it prior to installation. Set your expectations and guidelines accordingly. It is easy to suggest that, by implementing a backup machine, the spare capacity on the backup can give extra cycles to a sluggish application. This is very rarely the case. Investigating the proposed performance of the new environment should reap dividends during the implementation phase. This section discusses the implication of performance on the various levels of availability. Note: Performance ratings of save and restore hardware and software options can be found in the AS/400 Performance Capabilities Reference. This can be accessed online at: http://www.ibm.com/eserver/iseries/library 8.1 Foundations for good performance This section briefly describes the fundamental elements of good performance on the AS/400e servers. 8.1.1 Symmetric multiprocessing (SMP) In the AS/400e world, SMP has multiple meanings. First and foremost, it is a hardware deployment capability. iSeries processors and some AS/400e processors can be purchased in 2-way, 4-way, 8-way, 12-way, 18-way, and 24-way configurations. In the industry, all the n-way processor configurations are referred to as SMP processor systems. Due to the architecture of the AS/400 server and OS/400, applications and utilities are able to take advantage of the SMP models without overt programming efforts. However, it is possible to obtain even higher levels of throughput by redesigning batch processes to take advantage of multiple processors. Another use of the term SMP in the AS/400e world refers to a feature of the operating system called DB2 Symmetric Multiprocessing for AS/400. This feature enables a dynamic build of access paths or views for queries (including 96 High Availability on the AS/400 System: A System Manager’s Guide OPNQRYF and the query manager) utilizing parallel I/O and parallel processing across all available processors. Plan carefully for the use of this feature because it can significantly increase overall CPU utilization in addition to increased physical I/O operations. To learn more about the DB2 Symmetric Multiprocessing for AS/400 feature, refer to the iSeries Handbook, GA19-5486 and the Performance Capabilities Reference, which can be accessed online at: http://publib.boulder.ibm.com/pubs/pdfs/as400/V4R5PDF/AS4PPCP3.PDF 8.1.2 Interactive jobs Just as you would not implement a major application change without analysis of the potential performance impact, you need to determine the potential impact of implementing high availability on your interactive workload. Ask the following questions: • What effect will the proposed implementation have on interactive performance? • If journaling is to be activated, what will its impact be? • Have you decided to rewrite your applications to ensure data integrity by utilizing commitment control? What will this performance impact be? 
The answers to these questions must be analyzed, fed back into the business plan, and incorporated into your service level agreements. 8.1.3 Batch jobs Batch jobs are another key area for high availability. This is one place where backup machines may have a positive effect on the performance of the primary system. You may be able to redirect work from your primary system to the backup system. This is most feasible for read-only work. Other types of batch jobs could be very difficult to alter to take advantage of a second system, and a major rewrite may be necessary. If you choose to utilize your backup system for read-only batch work, make sure that you understand the impact of these jobs on the high availability business partner apply processes. If the work you run on the backup system interferes in any way with these apply processes, you may reduce your ability to switch over or fail over in a timely manner. You need to consider the impact of activating journaling on your batch run times and explore the possibility of incorporating commitment control to improve run time in a journaled environment. This is discussed in more detail in 8.2.3, “Application considerations and techniques of journaling” on page 99. 8.1.4 Database Consider the types of database networks utilized by your application when assessing performance. Do you have multiple database networks, a single database network, multiple database networks across multiple servers, or a single database network across multiple servers? As stated, the major performance impact for a database is the start of journaling. Each database operation (except read operations) involves journal management. This adds a physical I/O and code path to each operation. 8.2 Journaling: Adaptive bundling Journaling’s guarantee of recoverability is implemented with extra I/O and CPU cycles. Since V4R2 of OS/400, the technique of adaptive bundling has been used to reduce the impact of these extra I/O operations. This means that journal writes are often grouped together for multiple jobs, in addition to commit cycles. Refer to Figure 24 for a simple illustration. Figure 24. Adaptive bundling Unless you have taken specific actions, a single batch job can take only minimal advantage of adaptive bundling. A high penalty is paid in extra I/O with every record update performed. It is normal to see an increase in both I/O and CPU utilization after turning on journaling. Even on a well-tuned system, the CPU utilization increase can be as high as 30%. This increase in utilization increases response time. An even higher degradation is common for single-threaded batch jobs that do not run commitment control. Journaling increases the number of asynchronous writes. The effect of these asynchronous writes is shown on the Transition Report of Performance Tools. Modules QDBPUT, QDBGETKY, and QDBGETSQ show evidence of this asynchronous I/O request. To reduce the impact of journaling: • Group inserts under a commit cycle • Group inserts • Split a batch job into several jobs and run them in parallel 8.2.1 Setting up the optimal hardware environment for journaling Building a sound hardware environment for your journaled applications can minimize the impact of journaling. Start by creating a user auxiliary storage pool (ASP) that utilizes mirrored protection.
Depending on the amount of storage required for your journal receivers, and the release of OS/400 you are running, you can allocate between 6 and 200 total arms to this user ASP. These disk arms should have dedicated IOPs with at least .5 MB of write cache per arm. Regardless, these arms should be the fastest available on your system. Note: The total stated number of arms are not “operational” numbers after turning on mirroring. If you are running a release of OS/400 prior to V4R5, the maximum number of disk arms the system can efficiently use for parallel I/O operations for journaling is 30. In V4R5, if you utilize the *MAXOPT parameter on the receiver size (RCVSIZOPT) keyword, the number of used disk arms increases to 200. 8.2.2 Setting up your journals and journal receivers Refer to Section 6.2 “Planning and Setting Up Journaling” in the Backup and Recovery Guide, SC41-5304, for detailed information. Also keep the following points in mind: • The journal and journal receiver objects should not be in the same library as the files they are to journal. • The journal object (*JRN) should not reside in the user ASP of the journal receiver (*JRNRCV) objects. • Isolate journal receiver writes from system managed access path protection (SMAPP) writes by specifying RCVSIZOPT(*RMVINTENT) on the CRTJRN and CHGJRN commands. In addition to isolating the SMAPP I/O operations to arms dedicated for that activity, your journal receivers will not fill as quickly. The system uses two-thirds of allocated ASP arms for JRNRCV objects and the remaining one-third for SMAPP entries. • Suppress open and close journal entries by utilizing OMTJRNE(*OPNCLO) on the STRJRNPF command. • Use system managed receivers by specifying MNGRCV(*SYSTEM) on the CRTJRN and CHGJRN commands to enable better system performance during the change of journal receivers. You can ensure that your business partner package maintains control over the actual changing of journal receivers by specifying a threshold on your journal receivers that is larger than the size specified in the partner package. The MNGRCV(*SYSTEM) requires the parameter THRESHOLD be specified on the CRTJRNRCV command. Performance 99 8.2.2.1 Determining the number of journals and receivers Generally speaking, you always have multiple journal and journal receivers. Some strategies for determining the number of journals and journal receivers you have include: • By application: To simplify recovery, files that are used together in the same application should be assigned to the same journal. In particular, all the physical files underlying a logical file should be assigned to the same journal. Starting in V3R1, all files opened under the same commitment definition within a job do not need to be journaled to the same journal. If your major applications have completely separate files and backup schedules, separate journals for the applications may simplify operating procedures and recovery. • By security: If the security of certain files requires that you exclude their backup and recovery procedures from the procedures for other files, assign them to a separate journal, if possible. • By function: If you journal different files for different reasons, such as recovery, auditing, or transferring transactions to another system, you may want to separate these functions into separate journals. Remember, a physical file can be assigned to only one journal. 
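Pulling these recommendations together, a minimal CL sketch of such a journal environment follows; the library, journal, receiver, and file names are illustrative, and the threshold and size options depend on your release and on what your high availability package expects:

/* Receiver library in the user ASP dedicated to journal receivers        */
CRTLIB     LIB(JRNRCVLIB) ASP(2)

/* First receiver, with a threshold (in KB) larger than the size managed  */
/* by the high availability package                                       */
CRTJRNRCV  JRNRCV(JRNRCVLIB/APPRCV0001) THRESHOLD(1500000)

/* Journal with system-managed receivers, receivers kept for replication, */
/* and internal entries removed from the receivers (*RMVINTENT)           */
CRTJRN     JRN(APPLIB/APPJRN) JRNRCV(JRNRCVLIB/APPRCV0001) +
           MNGRCV(*SYSTEM) DLTRCV(*NO) RCVSIZOPT(*RMVINTENT)

/* Journal a physical file with both images and without open/close entries */
STRJRNPF   FILE(APPLIB/CUSTOMER) JRN(APPLIB/APPJRN) +
           IMAGES(*BOTH) OMTJRNE(*OPNCLO)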
If you have user ASPs with libraries (known as a library user ASP), all files assigned to a journal must be in the same user ASP as the journal. The journal receiver may be in a different ASP. If you place a journal in a user ASP without libraries (non-library user ASP), files being journaled must be in the system ASP. The journal receiver may be in either the system ASP or a non-library user ASP with the journal. See the section titled “Should Your Journal Receivers Be in a User ASP?” in the Backup and Recovery Guide, SC41-5304, for more information about the types of ASPs and restrictions. Remember to consult the Backup and Recovery Guide, SC41-5304 for restore or recovery considerations when setting up your environment. Even though you set up this environment to minimize the need to ever fully restore your system, you may have to partially restore within your own environment or fully restore if you take advantage of the Rochester Customer Benchmark Center or a disaster/recovery center. 8.2.3 Application considerations and techniques of journaling Database options that have an impact on journaling and system performance are: • The force-write ratio (FRCRATIO) parameter for physical files that are journaled. This allows the system to manage when to write records for the physical file to disk because, in effect, the journal receiver has a force-write ratio of one. • Record blocking when a program processes a journaled file sequentially (SEQONLY(*YES)). When you add or insert records to the file, the records are not written to the journal receiver until the block is filled. You can specify record blocking with the Override with Database File (OVRDBF) command or in some high-level language programs. This is a standard and good performance practice that significantly helps the performance of journaling too. 100 High Availability on the AS/400 System: A System Manager’s Guide • Use OMTJRNE(*OPNCLO)) to reduce the number of journal entries. If you choose to omit open journal entries and close journal entries, note the following considerations: – You cannot use the journal to audit who has accessed the file for read only purposes. – You cannot apply or remove journal changes to open boundaries and close boundaries using the TOJOBO and TOJOBC parameters. – Another way to reduce the number of journal entries for the file is to use shared open data paths. This is generally a good performance recommendation regardless of journaling activity. • Utilize the Batch Journal Caching PRPQ. This offering: – Forces journal entries to be cached in memory for most efficient disk writes – Is designed to reduce journaling's impact on batch jobs – Is selectively enabled Additional information about the Batch Journal Caching PRPQ can be found in Appendix C, “Batch Journal Caching for AS/400 boosts performance” on page 153. 8.3 Estimating the impact of journaling To understand the impact of journaling on the capacity of a system, consider the processes involved. Additional overhead is involved for disk and CPU activity and additional storage is required in preparation for potential recovery. 8.3.1 Additional disk activity Consider that each row updated, added, or deleted has either one or two journal entries. While this is an asynchronous I/O operation, your disk arm response time can increase. This causes degradation to your production workload. 
Under certain circumstances, these asynchronous I/O operations become synchronous I/O operations and cause your application to wait for them to complete before they can continue. 8.3.2 Additional CPU Each update, add, or delete operation utilizes additional CPU seconds to complete. The ratio of CPU per logical I/O is a key factor in determining the additional CPU required for journaling. 8.3.3 Size of your journal auxiliary storage pool (ASP) Depending on how accurate you want your estimate of space requirements to be, follow one of the two methodologies explained in this section to estimate your space requirements. 8.3.3.1 Using weighted average record length Perform the following steps to use a weighted average record length: 1. Determine the average number of entries per time period (number of days or hours) worth of receiver entries you want or need available. Performance 101 To take advantage of the journal’s ability to protect your data and to provide an audit trail, you may want to keep more than a few hours worth of receiver entries for transmission to a secondary system. Once the data is written to the disk, the only expense involved beyond the disk space consumed is the cycles required to retrieve the entries for further analysis. 2. Determine the weighted average record length. Add 155 to this weighted average record length. 3. Multiply the results from step 2 by the results from step 1 and divide by 1,024 to determine the KB. Or, divide by 1,048,576 to determine the MB required. 8.3.3.2 Using actual changes logged by file management The file description contains information useful for calculating storage usage. To utilize this information: 1. Execute a CL program to capture FD information. 2. Execute an RPG program to translate the date to a table for further calculations. 3. Rerun CL in as you did in step 1. 4. Execute an RPG program to add a second set of FD information to a table. 5. Calculate requirements based on information in the file. 8.4 Switchover and failover Highly available systems involve clustering. When a production system is switched over to the backup system, either for a planned or an unplanned outage, the time required to make this switchover is critical. To reduce the time involved in this switchover process, consider the networks and performance as explained in the following section. Networks and performance The performance of a communications network provides acceptable (or non-acceptable) response time for the end users. Response time provides the perception for the end user of the reliability and availability of the system. In general, to improve performance: • Avoid multiple layers of communications. • Avoid communication servers (such as Microsoft SNA server, IBM Communication Server, or NetWare SNA server). • Use Client Access/400 or IBM eNetwork Personal Communications for AS/400 where possible. • Use a native protocol instead of ANYNET. The Best/1 licensed program, a component of 5769-PT1, can capture information on a communication line and predict utilization and response times. 102 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 103 Part 3. AS/400 high availability solutions Combining the features of OS/400 system and network hardware with AS/400e high availability software produced by IBM business partners is an important method for improving a single systems availability. 
Part III discusses these components and it also explores the considerations for writing applications to support a highly available environment. 104 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 105 Chapter 9. High availability solutions from IBM The foundation of all high availability functions is OS/400. The AS/400 development and manufacturing teams continue to improve the AS/400 system for feature, function, and reliability options with each release of OS/400. In particular, the OS/400 remote journal feature enhances high availability solutions by enabling functions below the machine interface (MI) level. Note: Prior to OS/400 V4R2, remote journal functions were coded into application programs. APIs and commands are available in V4R2 and V4R3 respectively. Cluster solutions connect multiple AS/400 systems together using various interconnect fabrics, including high-speed optical fiber. Clustered AS/400 systems offer a solution that can deliver up to 99.99% system availability. For planned or unplanned outages, clustering and system mirroring offer the most effective solution. IBM business partners that provide high systems availability tools complement IBM availability tools with replication, clustering, and system mirroring solutions. The IBM (application) contribution to AS/400 High Availability Solutions includes the IBM DataPropagator Relational Capture and Apply for AS/400 product. From this point on, this product is referred to as DataPropagator/400. This chapter describes the IBM package and its benefits as a minimal High Availability Solution. 9.1 IBM DataPropogator/400 The IBM solution that fulfills some of the requirements of an HSA solution is DataPropagator/400. Note: The DataPropagator/400 product was not designed as a high availability solution. In some cases, it can cover the needs for data availability, as discussed in this chapter. DataPropagator/400 is a state-of-the-art data replication tool. Data replication is necessary when: • Supplying consistent real time reference information across an enterprise • Bringing real time information closer to the business units that require access to insulate users from failures elsewhere on the network • Reducing network traffic or the reliance on a central system • On-demand access disrupts production or response • Migrating systems and designing a transition plan to move the data while keeping the systems in sync • Deploying a data warehouse with an automated movement of data • Current disaster plan strategies do not adequately account for site-failure recovery DataPropagator/400 is not a total High Availability Solution because it only replicates databases. It does not replicate all of the objects that must be mirrored for a true High Availability Solution in a dynamic environment. 106 High Availability on the AS/400 System: A System Manager’s Guide However, consider DataPropagator/400 for availability functions in a stable environment where the following criteria can be met: • Only the database changes during normal production on the AS/400 system. • Such objects as user profiles, authorities, and other non-database objects are saved regularly on the source system and restored on the target system when changed. In other words, in a stable environment, where only the database changes, replicating the database to a backup system and transferring users manually to this system may be a sufficient availability and recovery plan (Figure 25). Figure 25. 
Usage of DataPropagator/400 9.1.1 DataPropagator/400 description IBM DataPropagator/400 automatically copies data within and between IBM DB2 platforms to make data available on the system when it is needed. The IBM DataJoiner product can be used in addition to the DataPropagator/400 product to provide replication to several non-IBM databases. Immediate access to current and consistent data reduces the time required for analysis and decision making. DataPropagator/400 allows the user to update copied data, maintain historical change information, and control copy impact on system resources. Copying may involve transferring the entire contents of a user table (a full refresh) or only the changes made since the last copy (an update). The user can also copy a subset of a table by selecting the columns they want to copy. Making copies of database data (snapshots) solves the problem of remote data access and availability. Copied data requires varying levels of synchronization with production data, depending on how the data is used. Copying data may even be desired within the same database. If excessive contention occurs for data access in the master database, copying the data offloads some of the burden from the master database. By copying data, users can get information without impacting their production applications. It also removes any dependency on the performance of remote data access and the availability of communication links. DataPropagator/400 highlights include: • An automatic copy of databases • Full support for SQL (enabling summaries, derived data, and subsetted copies) • During a system or network outage, the product restarts automatically from the point where it stopped. If this is not possible, a complete refresh of the copies can be performed if allowed by the administrators. Also, for example, if one of the components fails, the product can determine that there is a break in sequence of the data being copied. In this case, DataPropagator/400 restarts the copy from scratch. • Open architecture to enable new applications • DataPropagator/400 commands that support AS/400 system definitions • Full use of remote journal support in V4R2 9.1.2 DataPropagator/400 configuration In the database network, the user needs to assign one or more roles to their systems when configuring the DataPropagator/400 environment. These roles include: • Control server: This system contains all of the information on the registered tables, the snapshot definitions (the kind of data you want to copy and how to copy it), the ownership of the copies, and the captures in reference to registrars and subscribers. • Data server: This contains the source data tables. • Copy server: This is the target system. Depending on the structure of the company, the platforms involved, and customer preferences, a system in the network can play one or more of these three roles. DataPropagator/400, for example, works powerfully on a single AS/400 system, which, at the same time, serves as the Control, Copy, and Data Server.
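Because DataPropagator/400 communicates between its servers through DRDA (as noted in the next section), each system typically needs a relational database directory entry for its partners before the Capture and Apply processes can connect. A minimal, hypothetical CL sketch follows; the names and address are illustrative, and the actual configuration steps are described in the product documentation:

/* On the control/copy server, identify the remote data server        */
ADDRDBDIRE RDB(PRODSYS) RMTLOCNAME(PRODSYS)

/* Or, where DRDA over TCP/IP is supported, identify it by address    */
ADDRDBDIRE RDB(PRODSYS) RMTLOCNAME('10.1.1.5' *IP)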
9.1.3 Data replication process With DataPropagator/400, there are two steps to the data replication process: • The Capture process: This is for reading the data. • The Apply process: This is for applying updated data. Figure 26 and Figure 27 on page 108 illustrate these processes. Figure 26. The DataPropagator/400 Capture process Figure 27. The DataPropagator/400 Apply process Features included in the Capture and Apply processes include: • Support for the remote journal function to offload the source CPU • Automated deletion of journal receivers • Replication over a native TCP/IP-based network • Multi-vendor replication with DataJoiner (replication to and from Oracle, Sybase, Informix, and Microsoft SQL Server databases) • Integration with the Lotus Notes databases 9.1.4 OptiConnect and DataPropagator/400 DataPropagator/400 is based on a distributed relational database architecture (DRDA) and is independent of any communications protocol. Therefore, it uses OptiConnect and any other media without additional configuration. 9.1.5 Remote journals and DataPropagator/400 DataPropagator/400 takes advantage of an operating system’s remote journal function. With remote journals, the capture process is run at the remote journal location to offload the capture process overhead from the production system. The Apply process does not need to connect to the production system for differential refresh because the DataPropagator/400 staging tables reside locally rather than on the production system. In addition, because the DataPropagator/400 product is installed only on the system that is journaled remotely, the production system no longer requires a copy of DataPropagator/400. 9.1.6 DataPropagator/400 implementation DataPropagator/400 is most beneficial for replicating data to update remote databases. One real-life example of this is a customer in Denmark who had a central AS/400 system and stored all production data, pricing information, and a customer database on it. From this central machine, data was distributed to sales offices in Austria, Germany, Norway, and Holland (each of which operated either small AS/400 systems or OS/2 PCs). Each sales office received a subset of the data that was relevant to their particular office. See 3.1, “A high availability customer: Scenario 1” on page 25, for a description of this customer scenario. 9.1.7 More information about DataPropagator/400 For more information about IBM DataPropagator/400 solutions, refer to DataPropagator Relational Guide, SC26-3399, and DataPropagator Relational Capture and Apply/400, SC41-5346. Also, visit the IBM internet Web site at: http://www.software.ibm.com/data/dbtools/datarepl.html Chapter 10.
High availability business partner solutions High availability middleware is the name given to the group of applications that provide replication and management between AS/400e systems and cluster management middleware. IBM business partners that provide high system availability tools continue to complement IBM availability offerings of clustering and system mirroring solutions. Combining clusters of AS/400 systems with software from AS/400 high-availability business partners improves the availability of a single AS/400 system by replicating business data to one or more AS/400 systems. This combination can provide a disaster recovery solution. This chapter explores the applications provided by the IBM High Availability Business Partners (HABPs) DataMirror, Lakeview Technology, and Vision Solutions. This chapter also discusses the options these companies provide to support a highly or continuously available solution. Note: For customers requiring better than 99.9% system availability, AS/400 clusters with high-availability solutions from an IBM Business Partner are recommended. 10.1 DataMirror Corporation DataMirror Corporation is an IBM business partner with products that address a number of issues, such as data warehousing, data and workload distribution, and high availability. DataMirror products run on IBM and non-IBM platforms. The DataMirror High Availability Suite uses high performance replication to ensure reliable and secure delivery of data to backup sites. In the event of planned or unplanned outages, the suite ensures data integrity and continuous business operations. To avoid transmission of redundant data, only changes to the data are sent to the backup system. This allows resources to be more available for production work. After an outage is resolved, systems can be resynchronized while they are active. The DataMirror High Availability Suite contains three components: • DataMirror High Availability (HA) Data • ObjectMirror • SwitchOver System Figure 28 on page 112 illustrates the components of the DataMirror High Availability Suite. The corresponding DataMirror HA Suite Source Menu is shown in Figure 29 on page 112. Note: Some of the software vendors mentioned in this chapter may have products with functions that are not directly related to the high availability issues on the AS/400 system. To learn more about these products, visit these vendors on the World Wide Web. You can locate their URL address at the end of the section that describes their solution. Figure 28. DataMirror High Availability Suite Figure 29. DataMirror HA Suite Source Menu 10.1.1 DataMirror HA Data DataMirror HA Data mirrors data between AS/400 production systems and failover machines for backup, recovery, high systems availability, and clustering. A user can replicate entire databases or individual files on a predetermined schedule in real time or on a net change basis. They can refresh the backup machine nightly or weekly as required.
They can also use DataMirror HA Data to replicate changes to databases in real time so that up-to-the-minute data is available during a scheduled downtime or disaster. DataMirror HA Data software is a no-programming-required solution. Users simply install the software, select which data to replicate to the backup system, determine a data replication method (scheduled refresh or real time), and begin replication. At the end of a system failure, fault tolerant resynchronization can occur without taking systems offline. DataMirror HA Data supports various high availability options, including workload balancing, 7 x 24 hour operations availability, and critical data backup. Combined with Data Mirror ObjectMirror software and SwitchOver System, a full spectrum of high availability options is possible. 10.1.2 ObjectMirror ObjectMirror enables critical application and full system redundancy to ensure access to both critical data and the applications that generate and provide data use. ObjectMirror supports real time object mirroring from a source AS/400 system to one or more target systems. It provides continuous mirroring as well as an on-demand full refresh of AS/400 objects that are grouped by choice of replication frequency and priority. ObjectMirror features include: • Grouping by choice to mirror like-type objects based on frequency or priority • Continuous real time mirroring of AS/400 objects • Intelligent replication for guaranteed delivery to backup systems even during a system or communication failure • Object refresh on a full-refresh basis as needed • Fast and easy setup, including an automatic registration of objects • Ability to send an object or group of objects immediately without going through product setup routines 10.1.3 SwitchOver System The SwitchOver System operates on both the primary and backup AS/400 systems to monitor communications or system failures. During a failure, the SwitchOver System initiates a logical role switch of the primary and backup AS/400 systems either immediately or on a delayed basis. A Decision Control Matrix in the SwitchOver System allows multiple line monitoring, detailed message logging, automated notification, and user-exit processing at various points during the switching process. An A/B switch (shown in Figure 30 on page 114) allows the user to automatically switch users and hardware peripherals, such as twinax terminals, printers, and remote controllers. 114 High Availability on the AS/400 System: A System Manager’s Guide Figure 30. DataMirror SwitchOver system 10.1.4 OptiConnect and DataMirror The DataMirror High Availability Suite supports SNA running over OptiConnect between multiple AS/400 systems. After OptiConnect is installed on both source and target AS/400 systems, the user needs to create controllers and device descriptions. After controllers and devices are varied on, the user simply specifies the device name and remote location used in the DataMirror HA Data or Object Mirror target definition. Files or objects that are specified can then be defined, assigned to the target system, and replicated. 10.1.5 Remote journals and DataMirror The DataMirror High Availability (HA) Suite is capable of using the IBM remote journal function in OS/400. The architecture of the DataMirror HA Suite allows the location of the journal receivers to be independent from where the production (source) or failover (target) databases reside. Therefore, journal receivers can be located on the same AS/400 system as the failover database. 
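The underlying OS/400 remote journal environment that such a configuration relies on is established with a few commands. A minimal, hypothetical sketch follows; the relational database entry, library, and journal names are illustrative, and the exact setup an HA product expects is documented by the vendor. The receivers for the remote journal then reside on the target (failover) system:

/* On the source system: associate a remote journal on the target system */
/* (TGTSYS is a relational database directory entry for the target)      */
ADDRMTJRN  RDB(TGTSYS) SRCJRN(APPLIB/APPJRN) TGTJRN(APPLIB/APPJRN)

/* Activate replication of journal entries to the target system          */
CHGRMTJRN  RDB(TGTSYS) SRCJRN(APPLIB/APPJRN) TGTJRN(APPLIB/APPJRN) +
           JRNSTATE(*ACTIVE)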
This allows the use of DataMirror intra-system replication to support remote journals. Customers can invoke remote journal support in new implementations. Additionally, an existing setup can be modified if remote journal support was not originally planned.

10.1.6 More information about DataMirror

To learn more about DataMirror products, visit DataMirror on the Internet at: http://www.datamirror.com

10.2 Lakeview Technology solutions

Lakeview Technology, an IBM business partner, offers a number of products for use in an AS/400 high availability environment. Their high availability suite contains five components:
• MIMIX/400
• MIMIX/Object
• MIMIX/Switch
• MIMIX/Monitor
• MIMIX/Promoter

The following sections highlight each of these components.

10.2.1 MIMIX/400

MIMIX/400 is the lead module in the Lakeview Technology MIMIX high availability management software suite for the IBM AS/400 system. It creates and maintains one or more exact copies of a DB2/400 database by replicating application transactions as they occur. The AS/400 system pushes the transaction data to one or more companion AS/400 systems. By doing this, a viable system with up-to-date information is always available when planned maintenance or unplanned disasters bring down the primary system. MIMIX/400 also supports intra-system database replication. Figure 31 shows the basic principles of MIMIX/400.

Figure 31. The basic principles of MIMIX/400 (diagram: applications on the source AS/400 system write to the journal receiver; CPI-C communications carry the data to the target AS/400 system; journaling is the key)

The key functions of MIMIX/400 are:
• Send: This function scrapes the source system journal and sends the data to one or more target systems. This function offers the following characteristics:
– Is written in ILE/C for high performance
– Uses CPI-C to provide a low-level generic interface that keeps CPU overhead to a minimum
– Supports filtering to eliminate files from MIMIX copies and to optimize communication throughput, auxiliary storage usage, and performance on the target system
– Generates performance stamps, which continue throughout the replication cycle, for a historic view of performance bottlenecks
• Receive: This function collects transactions from the Send function. The Receive function stores and manages the transactions on the backup system for processing by the Apply function. The Receive function offers these features:
– A temporary staging step where transactions are pushed off the sending system as soon as possible to eliminate the load from the source system CPU
– Fast performance because it is written in ILE/C
– Variable-length log space entries to make the most of available CPU and DASD resources
– Filtering capabilities on the target system, where greater capacity exists, to boost performance on the Send side
The Apply function supports the following features: – Offers a file or member control feature to manage file name aliases and define files to the member level (files are locked during the Apply process for maximum configuration flexibility and to prevent files from being unsynchronized) – Opens up to 9,999 files simultaneously within a MIMIX Apply session – Supports record lengths up to 32 K in size – Manages DB2/400 commitment control boundaries during the Apply and Switch processes – Uses a log process to protect against data loss during a source system outage – Includes a graphical status report of source and target system activity. It displays the report in an easy-to-read format for operators to quickly identify MIMIX operating environment issues. • Synchronize: This function verifies that the target system has recorded exact copies of the source system data. The Synchronize function supports the following features: – Offers keyed synchronization to keep target and source databases in synchronization with the unique “key” field in each record – Provides support tools to analyze and correct file synchronization errors by record • Switch: This function prepares target systems for access by users during a source system outage. The Switch function performs the following tasks: – Defines systems, journals, fields, and data areas. The Send, Receive, and Apply sessions are linked into a logical unit called a data group. High availability business partner solutions 117 – Uses a data group manager to reverse the direction of all MIMIX/400 transmissions during an outage – Offers a journal analysis tool to identify transactions that may be incomplete after an outage 10.2.2 MIMIX/Object The MIMIX/Object component creates and maintains duplicate images of critical AS/400 objects. Each time a user profile, device description, application program, data area, data queue, spooled file, PC file, image file, or other critical object is added, changed, moved, renamed, or deleted on an AS/400 production system, MIMIX/Object duplicates the operation on one or more backup systems. The key elements of MIMIX/Object include: • Audit Journal Reader: This element scrapes the source system security audit journal for object operations and passes them to the distribution reader. The features of the Audit Journal Reader include: – Management of objects within a library by object and type; document library objects (DLOs) by folder path, document name, and owner; and integrated file system objects by directory path and object name – Management of spooled-file queues based on their delivery destination – Explicit, generic (by name), and comprehensive (all) identification of library, object, DLO, and integrated file system names – An “include” and “exclude” flag for added naming precision – Integrated file system control to accommodate hierarchical directories, support long names, and provide additional support for byte stream files • Distribution Reader: This element sends, receives, confirms, retries, and logs objects to history and message queues. 
The features of the Distribution Reader include: – Multi-thread asynchronous job support to efficiently handle high volumes of object operations – A load-leveling journal monitor to automatically detect a large back log for greater parallelism in handling requests – A history log to monitor successful distribution requests; offering reports by user, job, and date; and provides effective use of time for improving security control and management analysis – A failed request queue to provide error information, and for deleting and retrying options for ongoing object integrity and easy object resolution – An automatic retry feature to resubmit requests when objects are in use by another application until the object becomes available – Automatic management of journal receivers, history logs, and transaction logs to minimize the use of auxiliary storage • Send Network Object: This element relies on the Audit Journal Reader, which interactively saves and restores any object from one system to another. It offers simplified generic distribution of objects manually or automatically through batch processing. 118 High Availability on the AS/400 System: A System Manager’s Guide 10.2.3 MIMIX/Switch The MIMIX/Switch component detects system outages and initiates the MIMIX recovery process. It automatically switches users to an available system where they can continue working without losing information or productivity. The key elements of MIMIX/Switch include: • Logical Switch: This element controls the physical switch, communication and device descriptions, network attributes, APPC/APPN configurations, TCP/IP attributes, and timing of the communication switchover. The features of the Logical Switch include: – User exits to insert user-specified routines almost anywhere in the command stream to customize the switching process – A message logging feature to send status messages to multiple queues and logs for ensuring the visibility of critical information • Physical Switch: This element automatically and directly communicates with the gang switch controller to create a switchover. The features of the Physical Switch include: – A custom interface to the gang switch controller to switch communication lines directly – An operator interface to facilitate manual control over the gang switch controller – Remote support to initiate a switch through the gang switch controller from a distance – Interface support of twinax, coax, RJ11, RS232, V.24, V.35., X.21, DB9, or other devices that the user can plug into a gang switch • Communications Monitor: This element tracks the configuration object status to aid in automating retry and recovery. An automatic verification loop ensures that MIMIX/Switch only moves users to a backup system when a genuine source system outage occurs. 10.2.4 MIMIX/Monitor The MIMIX/Monitor component combines a command center for the administration of monitor programs and a library of plug-in monitors so the user can track, manage, and report on AS/400 processes. MIMIX/Monitor regulates the system 24 hours a day. It presents all monitor programs on a single screen with a uniform set of commands. This minimizes the time and effort required to insert or remove monitors or change their parameters. The MIMIX/Monitor also accepts other data monitoring tools created by customers and third-party companies into its interface. 
The user can set the programs included with MIMIX/Monitor to run immediately, continually at scheduled intervals, or after a particular event (for example, a communications restart). MIMIX/Monitor includes prepackaged monitor programs that the user can install to check the levels in an uninterruptible power supply (UPS) backup system, or to evaluate the relationship of MIMIX to the application environment.

10.2.5 MIMIX/Promoter

The MIMIX/Promoter component helps organizations maintain continuous operations while carrying out database reorganizations and application upgrades, including year 2000 date format changes. It uses data transfer technology to revise and move files to production without seriously affecting business operations. MIMIX/Promoter builds copies of database files record by record, working behind the scenes while users maintain read-and-write access to their applications and data. It allows the user to fill the new file with data, change field and record lengths, and, at the same time, keep the original file online. After copying is complete, MIMIX/Promoter moves the new files into production in a matter of moments. This is the only time when the application must be taken offline.

Implementing an upgrade also requires promoting such non-database objects as programs and display files. To handle these changes, many organizations use change management tools, some of which can be integrated with MIMIX/Promoter's data transfer techniques.

10.2.6 OptiConnect and MIMIX

MIMIX/400 integrates support for OptiConnect for OS/400, the IBM high-speed communications link, without requiring separate modules. The combination of MIMIX and OptiConnect provides a horizontal growth solution for interactive applications that are no longer contained on a single machine. OptiConnect delivers sufficient throughput for client/server-style database sharing among AS/400 systems within a data center for corporate use. MIMIX/400 complements the strategy by making AS/400 server data continuously available to all clients.

10.2.7 More information about Lakeview Technology

For more information about the complete Lakeview Technology product line, visit Lakeview Technology on the Internet at: http://www.lakeviewtech.com

10.3 Vision Solutions: About the company

Vision Solutions was founded in 1990 by two systems programmers working on a hospital IT staff in California. They recognized the need for a dual-systems solution that would exploit the rich OS/400 architecture and provide an application for managing business integrity using dual AS/400 systems. Originally known as Midrange Information Systems, Inc., the company changed its name to Vision Solutions, Inc. in July 1996. Today it has grown into an international company with development staff and facilities in the Netherlands, South Africa, and the United States. It employs over 150 people worldwide.

10.3.1 Vision Solutions HAV solutions

When you consider the costs of purchasing additional assets in the form of hardware, software, and consulting services to expand the hours of operations, to increase the scope of a business' growth capability, and to allow greater utilization of a business solution on the AS/400 platform, using dual systems for mirroring the business application system is a prudent business decision. If the focus is strictly on the disaster aspects of dual systems, the decision to go with this solution is never quick or easy to make.
By expanding the view of why a business must use dual systems to include the other advantages, such as continuous operations support, improved availability from dedicated backup processes, and workload balancing, the decision process leads to a wise business project. Vision Solutions supports this effort with its management and integrity facilities that are built directly into Vision Suite. One main advantage of using Vision Suite for your HSA and continuous operations requirements is that many of its application integrity features exceed the requirements of most AS/400 mission critical applications today. When considering dual systems, pursue your evaluations with due diligence and use the following criteria:
• Integrity: How do you know the backup system is equal to your production system? Do you employ more analysts to write additional utilities for monitoring or support, or do you use your existing staff? Or will this decision software reside in the HSA solution?
• Performance: How much data can you push to the other system, using the minimum possible CPU, to minimize in-flight transactions being lost during an unplanned outage? If your application employs OS/400 Remote Journaling and Clustering, does the HSA vendor demonstrate live support of this capability?
• Performance: What happens when your network stops or your backup system fails for an indefinite period of time? Can you catch up quickly to protect your business?
• Ease of use: Can your existing operations staff use this application?
• Application support: As an application evolves and possibly extends its use of the rich OS/400 architecture, will your HSA application be able to support those extensions without customized software?

Pursue these criteria for your business needs and commitments with regard to continuous operations and business integrity.

10.3.2 Vision Suite

There are three components to Vision Suite:
• Object Mirroring System/400 (OMS/400)
• Object Distribution System/400 (ODS/400)
• System Availability Monitor/400 (SAM/400)

Vision Suite requires the use of the OS/400 journal function for both OMS/400 and ODS/400. Journaling gives Vision Suite the ability to deliver real time database transactions and event-driven object manipulations to a backup AS/400 system. While some AS/400 application environments have journaling already active, the integrated Vision Suite Journal Manager can relieve the user of the journal receiver management function. Figure 32 on page 121 illustrates a typical configuration between a production system and a backup system. The journal receivers provide the input to Vision Suite for replication to the backup Database Server system.

10.3.2.1 OMS/400

This component replicates and preserves the Application Integrity established in your software design. It immediately transfers the transitional changes that occur in your data areas, data queues, and physical files to a backup AS/400 system. As the application database is manipulated by the programs' I/O requests, and these operations are recorded in the journal by OS/400, OMS/400 transports the resulting journal entries of those requests to a backup database server in real time. This minimizes any data loss due to an unplanned outage. As shown in Figure 32, the Reader/Sender function takes the journal entry over to the backup system through various communications media supported in OS/400. Any communications media utilizing the SNA or TCP/IP protocols, as well as OptiMover, are supported by Vision Suite.
Figure 32. Typical Vision Suite configuration (diagram: on the production system, journal receivers in the system and user ASPs feed the Reader/Sender; the communications media carry entries to the Receiver, Router, and Apply processes on the backup system, which maintain the SQL tables, stream files, and objects)

The Receiver function places the journal entry into the Router function so that it may be assigned to an Apply Queue. The Apply Queue writes the record image captured in the journal entry into the appropriate data object located on the backup system. All objects in a given Database Server are distributed evenly across multiple Apply Queues provided for that single database. The equal separation of files (according to logical relationships in DB2/400 or applications) ensures an even use of CPU and memory resources among the Apply Queues.

10.3.2.2 Application integrity

In addition to the requirement of delivering data efficiently and quickly, OMS/400 manages the synchronization and integrity of the database and data objects by using several background processes that do not require operator control and management. Integrity of your database between each physical file, data area, and data queue, along with the applications, is a cornerstone for successful role swaps. Role swap is Vision terminology for moving the business processes, which can be end users, batch jobs, or a combination of both, from one AS/400 server to another. Assurance of complete application integrity on your backup systems allows you to declare a disaster quickly instead of hoping for something else to happen. When there are no integrity issues of any type, planned or unplanned outages can be handled with a quick role swap that supports continuous operations with minimum downtime.

Furthermore, the journal receivers generated from all of the database activity impact the auxiliary storage space in a short time if these objects are not managed. Vision Suite features a Journal Manager function that creates journal environments for your mission critical databases. It can completely control the entire management role of journal receivers on both the production and backup AS/400 server. This ensures that the client is free for other AS/400 maintenance functions.

10.3.2.3 ODS/400

The second component of Vision Suite preserves the Environment Integrity developed for your application by replicating all supported object types of that application. This is an important consideration for any AS/400 system requiring continuous operations. While database transactions are complex and numerous compared to object manipulations, change management of your application environment must be duplicated on the backup system to ensure a timely and smooth role swap (moving the business from the production system to the backup system). To ensure Environment Integrity, a user can either perform change management on each individual system or use ODS/400 to replicate the various object changes to those systems.

Increasingly, object security has heightened the need for ODS/400. In pre-client (or PC Support) days, typical security control was managed through 5250 session menus. However, today's WAN and Internet/intranet network environments utilize many application tools that are built on ODBC and OLE database interfaces. AS/400 IT staffs must meet the challenge of taking advantage of OS/400 built-in object security.
This involves removing public authority from mission critical objects and interjecting the use of authorization lists and group profiles. While the AS/400 system maintains a high-level interface for this work, the interlocking relationships of objects, databases, and users (both local and remote) become complex. ODS/400 maintains this complex environment so that its integrity is preserved on your backup system. In mission critical AS/400 applications, the main focus for continuous operations and high availability is the integration of the database with its associated software and related security object authorizations and accesses. 10.3.2.4 SAM/400 The final component of Vision Suite is SAM/400. Its main purpose is to monitor the production (or source application) system heartbeat and condition the role swap when all contact is lost. It has ancillary functions for keeping unwarranted users from accessing the backup system when it is not performing the production function. It also provides user exits for recovery programs designed by the Professional Services staff for specific recovery and environment requirements. High availability business partner solutions 123 Vision Solutions, Inc. products operate on two or more AS/400 systems in a network and use mirroring techniques. This ensures that databases, applications, user profiles, and other objects are automatically updated on the backup machines. In the event of a system failure, end users and network connections are automatically transferred to a predefined backup system. The Visions products automatically activate the backup system (perform a role swap) without any operator intervention. With this solution, two or more AS/400 systems can share the workload. For example, it can direct end-user queries that do not update databases to the backup system. Dedicated system maintenance projects are another solution benefit. The user can temporarily move their operations to the backup machine and upgrade or change the primary machine. This High Availability Solution offers an easy and structured way to keep AS/400 business applications and data available 24 hours a day, 7 days a week. The Vision Solutions, Inc. High Availability Solution (called Vision Suite) includes three components: • Object Mirroring System (OMS/400) • Object Distribution System (ODS/400) • System Availability Monitor (SAM/400) The following sections highlight each component. 10.3.3 OMS/400: Object Mirroring System The Object Mirroring System (OMS/400) automatically maintains duplicate databases across two or more AS/400 systems. Figure 33 illustrates the OMS/400 system. This system uses journals and a communication link between the source and target systems. Figure 33. 
Object Mirroring System/400 (diagram: on the source system, user application programs, journal receivers, and user databases feed the OMS/400 sender; on the target AS/400, the OMS/400 receiver and router maintain the user databases, user space, and user query programs)

The features of the OMS/400 component include:
• Automatic recovery from such abnormal conditions as communication, synchronization, or system failures
• Synchronization of enterprise-wide data by simulcasting data from a source system to more than 9,000 target destinations
• User space technology that streamlines the replication process
• An optional ongoing validity check to ensure data integrity
• Automatic restart after any system termination
• Automatic filtration of unwanted entries, such as opens and closes
• The power to operate programs or commands from a remote system
• The ability to dynamically capture data and object changes on the source system and copy them to the target system without custom commands or recompiles
• The option to create an unlimited number of prioritized AS/400 links between systems
• Total data protection by writing download transactions to tape
• Support of RPG/400 for user presentation, and ILE/C for system access, data transmission, and process application
• Full support of the IBM OptiConnect/400 system
• Global journal management when a fiber optic bus-to-bus connection is available
• The use of CPI-C to increase the speed of data distribution using minimal CPU resources

10.3.4 ODS/400: Object Distribution System

The Object Distribution System (ODS/400) provides automatic distribution of application software, authority changes, folders and documents, user-profile changes, and system values. It also distributes subsystem descriptions, job descriptions, logical files, and output queue and job queue descriptions. ODS/400 is a partner to the OMS/400 system, and it provides companies with full system redundancy. It automatically distributes application software changes, system configurations, folders and documents, and user profiles throughout a network of AS/400 computers. ODS/400 supports multi-directional and network environments in centralized or remote locations. For maximum throughput, ODS/400 takes advantage of a bi-directional communications protocol and uses extensive filters.

10.3.5 SAM/400: System Availability Monitor

The System Availability Monitor (SAM/400) can switch users from a failed primary system to their designated secondary system without operator intervention. SAM/400 works in conjunction with OMS/400 and ODS/400, continuously monitoring the source system. In the event of a failure, SAM/400 automatically redirects users to the target system. This virtually eliminates downtime. High-speed communications links, optional electronic switching hardware, and SAM/400 work together to switch users to a recovery system in only a few minutes.
SAM/400 offers:
• Continuous monitoring of all mirrored systems for operational status and ongoing availability
• A fully programmable response to react automatically during a system failure, which reduces the dependence on uninformed or untrained staff
• The ability to immediately and safely switch to the target system, which contains an exact duplicate of the source objects and data, during a source system failure (unattended systems are automatically protected 24 hours a day, 7 days a week)
• User-defined access to the target system based on a specific user class or customized access levels

The SAM/400 component also offers:
• Up to ten alternate communication links for monitoring from the target system to the source system
• Automatic initiation of user-defined actions when a primary system failure occurs
• Exit programs to allow the operator to customize recovery and operations for all network protocols and implementations

Figure 34 illustrates the SAM/400 monitoring process.

Figure 34. SAM/400 structure (flowchart labels: SAM/400 on the source and target AS/400 systems, "Is the other system active?", "Do we have a hardware switch?", remote switch controller, A/B switch panel, call user exit program, traffic, role swap, start conversation)

Users are allowed to access applications at the "End" point.

10.3.6 High Availability Services/400

High Availability Services/400 (HAS/400) consists of software and services. The HAS/400 solution comprises:
• Analysis of the customer environment in terms of system availability needs and expectations, critical business applications, databases, and workload distribution capabilities
• An implementation plan written in terms of solution design and the required resources for its deployment
• Installation and configuration of the software products and the required hardware
• Education for the customer staff on operational procedures
• Solution implementation test and validation
• Software from Vision Solutions, Inc., as previously described

10.3.7 More information about Vision Solutions, Inc.

For more information about Vision Solutions, Inc. products, visit Vision Solutions on the Internet at: http://www.visionsolutions.com

Chapter 11. Application design and considerations

Applications are regarded as business-critical elements. The viewpoint of systems management is changing from a component view to an application view. Here are a few considerations that are now made at the application level:
• The entire application, or parts of the application, must be distributed.
• The application has to be monitored to guarantee availability.
• Operations, such as scheduling jobs and doing backups, are recommended.
• User profiles must be created, given access to applications, changed, and deleted.

If a system is unavailable, and rapid recovery is necessary, backups are restored and the system and the database are inspected for integrity. The recovery process could take days for larger databases. This scenario requires improvement in the areas of restore speed and application recovery. Both of these areas can be costly to implement. High-speed tape drives are very expensive items and, for very large databases, they may not show enough restore time improvement to meet user demands. Application recovery requires a lot of development effort and is, therefore, very costly. This, in itself, may also degrade availability.
To make the application more available, considerable additional processing and I/O capacity must be available. This means that response time degrades unless there is an abundance of computing resources. In the past, application recovery was at the bottom of the list of availability tasks. The size of the required systems would be too expensive for the business to justify. These days, with the vast improvements in price and performance, solutions exist that provide a high level of availability. In addition, businesses are now able to define the cost of a system outage more accurately.

From a user point of view (both a system end user and a receiver of services), the availability of a system relates to the information available at a given time. A customer holding for a price lookup considers the efficiency of the answer as an indication of the quality of the business.

Designing applications for high availability is a comprehensive topic, and textbooks have been dedicated to this topic alone. From a high level, some of the considerations are discussed in this chapter. Areas that are covered include application checkpointing design, considerations, and techniques (including CL programs) for the interactive environment.

11.1 Application coding for commitment control

You can use commitment control to design an application so that it can be started again if a job, an activation group within a job, or the system ends abnormally. The application can be started again with the assurance that no partial updates are in the database due to incomplete logical units of work from a prior failure.

There are numerous documents that describe the use of commitment control and journaling. OS/400 Backup and Recovery, SC41-5304, contains journaling and commitment control requirements. IBM language-specific manuals include:
• DB2 for AS/400 SQL Programming, Version 4, SC41-5611
• ILE C for AS/400 Programmer's Guide, Version 4, SC09-2712
• ILE COBOL for AS/400 Programmer's Guide, Version 4, SC09-2540
• ILE RPG for AS/400 Programmer's Guide, Version 4, SC09-2507

These manuals contain information about using commitment control for a particular language. Various redbooks and "how to" articles are found throughout IBM-related web sites. These include:
• "Safeguard Your Data with RI and Triggers", Teresa Kan, December 1994 (page 55), at http://www.news400.com
• "AS/400 Data Protection Methods", Robert Kleckner, December 1993 (page 101), at http://www.news400.com

11.2 Application checkpointing

In general, application checkpointing is a method used to track completed job steps and pick up where the job last left off before a system or application failure. Using application checkpointing logic, along with commitment control, you can provide a higher level of resiliency in both applications and data, regardless of whether they are mission critical in nature.

Throughout the existence of IBM midrange systems, application checkpointing has been used to help recover from system or application failures. It is not a new subject when it comes to IBM midrange computing, specifically on the AS/400 system. However, there are some new features. Unlike commitment control, application checkpointing has no system-level functions that can be used to automate recovery of an application. If commitment control is used, and the job stream has multiple job steps, the application needs to know which jobs have already run to completion.
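Commitment control itself requires that the application files be journaled before it is started. The following is a minimal sketch of that setup; the library, journal, receiver, file, and program names are illustrative assumptions, not part of the original text:

             /* Create a journal receiver and a journal for the library */
             CRTJRNRCV  JRNRCV(APPLIB/APPRCV01)
             CRTJRN     JRN(APPLIB/APPJRN) JRNRCV(APPLIB/APPRCV01)

             /* Journal the physical files that the application updates */
             STRJRNPF   FILE(APPLIB/ORDERS) JRN(APPLIB/APPJRN) +
                          IMAGES(*BOTH)

             /* Start commitment control for the job, run the           */
             /* application, and end it; COMMIT and ROLLBACK inside     */
             /* the application define the logical units of work        */
             STRCMTCTL  LCKLVL(*CHG)
             CALL       PGM(APPLIB/ORDUPD)
             ENDCMTCTL

The application program must also open its files under commitment control, as described in the language manuals listed above.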
Application checkpoints help the programmers to design recovery methods that can prevent the restarted jobs from damaging the database by writing duplicate records. Remember that there is additional information written on concepts and methods for application checkpoints, as well as recovery with or without journaling and commitment control. The following sections define recovery methods for applications that work in any high availability environment for the AS/400 system (including clustering). Most of the concepts also work for other (non-AS/400) platforms. 11.3 Application checkpoint techniques Techniques for application checkpointing and recovery vary for every program. Whether you use Cobol, RPG, SQL or C as the application language, the methods employed for application checkpointing remain constant. Without journaling and commitment control capabilities, programmers devise their own tracking and recovery programs. This section describes an example of how this is done. Application design and considerations 129 11.3.1 Historical example The following scenario describes a customer environment that runs the sales force from a System/36 in the early 1980s. The customer’s remote sales representatives dial into a BBS bulletin board system from their home computer, upload the day’s orders, and request the sales history for the clients they are to visit the next day. The next morning, the same remote sales representative dials into the BBS to retrieve the requested information. BBS systems and modems were not reliable in the early 1980s. Many of the transfers ended abnormally. Application programmers devised a method to update data areas with specific job step information after the job steps completed. If the Operator Control Language (OCL) job starts and finds an error code in the data area, the program logic jumps to the last completed step (as indicated by the data area) and starts from there. This is a primitive form of application checkpointing, but it works. Later applications utilized log files. Programs were designed to retrieve information from the log file if the last job step was not successful. Using the program name, last completed job step, current and next job step, as well as the total job steps, the programmer determined where to start the program. The program itself contained recovery subroutines to process if the recovery data area contained information that a job failure occurred. As for the data, temporary files were created containing the before image of a file. Reading the required record, then writing the temporary file prior to any updates or deletes, the information was written back to its original image if the job failed. At the completion of the job, all temporary files were removed. With the high availability products on the market today, a more efficient design is possible. A permanent data file includes the data area logic. High availability products mirror data areas and data queues. However, the HA applications work off of journal information. Note: Data areas and data queues can not be journaled in OS/400 V4R4. Moving any checkpoint logic from a data area to a data file operates with more efficiency and provides a higher restart capability than data areas. 11.4 Application scenarios The following paragraphs explain application checkpointing methods in various scenarios. The methods described are not the only possible options for application checkpoints. 
However, they provide a good starting point for managing your high availability environment with application checkpoints that work in any HA environment. 11.4.1 Single application For a single application, checkpoints are established by adding recovery logic to the program to handle the commit and roll back functions. The job’s Control Language (CL) program needs to include checks for messages that indicate an incomplete or open Logical Unit of Work (LUW). 130 High Availability on the AS/400 System: A System Manager’s Guide Testing for incomplete job runs is the primary requirement of application checkpoints. Some simple testing of control information for an error code prior to running the start or end commit command prevents users from getting erroneous messages. If the control information is clean, run the Start Commitment Control (STRCMTCTL) function and change the control information to uncompleted. If the control information has an error code, act on it by performing either a commit or rollback. Most often, the action is a rollback. At the completion of the program, execute a commit and then change the control information to indicate a successful update. 11.4.2 CL program example This example uses CL programs. It is assumed that the Logical Unit of Work (LUW) includes all I/O operations that this program performs. If a rollback takes place, all changes are removed from the system. If there is a higher complexity to the application, such as multiple levels of application calls, or many updates, inserts, and deletes, you should consider this a multiple application program. Also, in this example, a data area is used for the control information. For High Availability, it is recommended that you have the control information in a data file. Mirroring record information is more efficient than data areas or data queues because the current release of OS/400 (V4R4) does not support journaling data areas or data queues. Successful and complete recovery from a system failure is more likely if the recovery information is contained in a mirrored file. If this job is used in a multi-step job stream, place true application checkpoint functions into it. To do this, create a checkpoint-tracking file. The checkpoint-tracking file used to track the job steps must include information about the job and where to start. PGM DCL &OK *CHAR 1 RTVDTAARA CONTROL &OK 1 1 IF COND(&OK *NE ‘ ‘) THEN(ROLLBACK) CHGDTAARA &OK ‘E’ STRCMTCTL CALL UPDPGM RTVDTAARA CONTROL &OK 1 1 IF COND(&OK *NE ‘ ‘) THEN(ROLLBACK) IF COND(&OK *EQ ‘ ‘) THEN(COMMIT) ENDCMTCTL CHGDTAARA &OK ‘ ’ ENDPGM Basic CL program model 131 Chapter 12. Basic CL program model The following model contains the basic information for most jobs. Additional information can and should be tracked for better recovery: * Program Name * Current Job Step * Previous Job Step * Next Job Step * Total Job Steps * Job Start time * Job Name * User * Last processed record key information The information listed here should help you determine where you are, where you were, and where to go next. You can also determine how far into the job you are and who should be notified that the job was interrupted. 12.1 Determining a job step The diagrams shown in Figure 35 and Figure 36 illustrate how to determine a job step. Figure 35. Determining a job step (Part 1 of 3)) Figure 36. Determining a job step (Part 2 of 3) All programs have some basic flow. Using Commitment Control, the data is protected with the LUW, Commit, and Rollback. 
The diagram shown in Figure 37 on page 132 shows these commitment control components in the program flow.

Figure 37. Determining a job step (Part 3 of 3) (flow: Open file, Read file, Modify data, Write file, EOJ, Return, with a CC/LUW decision and job step markers Step 1, Step 2, and Step 3)

To determine the key points for the job step markers, notice how the data is read, manipulated, and written. Also look at any end-of-job processing. If the job fails while processing the data, start at the last completed processed record. This means that the section where the data is read into the program is a "key point" for your job step. Most applications read the data first, so this is the first job step. If there is a section of the program prior to the read that must be recalculated, it should be the first point of the job step.

After determining where the data is read, look at where the data is written. In this example, commitment control provides the second key point. When the data is committed to disk, it can't be changed. By placing the next job step at this point, a restart bypasses any completed database changes and moves to the next portion of the program. If the program writes thousands of records, but performs a commit at every 100, the checkpoint tracking information should include some key elements for the last committed record. This information should be collected for step 1 after every commit. With the information collected this way, a restart can go to step 1 and set the initial key value to start reading at the last committed point.

End-of-job processing can cause many application restart attempts to fail. The reasons for this should appear obvious. If the job does not have an end of job (EOJ) summary or total calculations to perform, then step 2 is the last job step. However, if summary reports and total calculations must be performed (for example, for an invoice application), some added logic is needed. Most summary or total calculations are performed on data that is collected and calculated from previously performed file I/O. They are usually stored in a table or in an array in memory. If the job fails before EOJ calculations are made, the memory allocated for the job is released. Take this into consideration when working with commitment control's logical unit of work. The logic for determining the recovery job steps may need to be "start everything over". With improper commitment control logic, data loss is possible.

If, on the other hand, the job is extremely long running or is a critical job, write the summary information into a work file. The process of completing a job step and writing out the work file occurs before EOJ processing and can be considered its own job step. If you use a work file for array or table information, place the name of the work file in the tracking file as well. Include restart logic in the program to reload any tables or arrays for EOJ processing.

Keep in mind that high availability solution providers can track and mirror files that are created and deleted on the fly. However, the processing overhead is much higher, and "in-flight transactions" can occur at the time of failure. This results in lost data. Therefore, it is recommended that work files be created and maintained instead of created and deleted.
Using the Clear Physical File Member (CLRPFM) function on the AS/400 system maintains the journal link with the file and reduces the overhead of saving and restoring file information caused by a create and delete. Create temporary files in the QTEMP library on the AS/400 system. QTEMP can not be mirrored by high availability solution providers. This library can contain true temporary files in which the contents can be easily recreated. If a job fails, temporary files do not affect the program outcome. The last checkpoint in the job should take place immediately before the end of, or return to, the previous job step. This checkpoint can clear any error flags, reset the program name, or remove the tracking record from the checkpoint tracking file. Creating logging functions for history reporting can also be performed from this checkpoint. 12.1.1 Summary of the basic program architecture The examples and logic defined in 11.3, “Application checkpoint techniques” on page 128, work within the AS/400 system on all release levels of OS/400 from Version 1 onward. If you plan on moving to a clustered environment available from IBM (from OS/400 V4R4 onward), the model described here provides a good start for cluster ready. Add additional tracking information to start the users at a specific point in the application, such as tracking screens and cursor positions. When writing out the checkpoint-tracking file, reserve space for this information so the recovery process can place the user as close to where they left off as possible. Information for screens and cursor positions, as well as job information, user information, and file information can be retrieved from the Information Data Structures (INFDS) provided by OS/400. A wealth of information can be retrieved from the INFDS, including screen names, cursor positions, indicator maps, job information, and more. Part of the cluster ready requirements is to restart the users where they last left off. Using application checkpoints accomplishes this. 12.2 Database At the core of the AS/400e system is the relational database. When multiple systems are involved, a portion or a copy of the data can reside on the remote system. This is called a distributed database. 12.2.1 Distributed relational database Application coding for distributed databases require more involved and complicated logic than programs designed for accessing data on a single system. Application checkpoint logic adds further complexity to the program logic. Both of these situations require a seasoned programmer analyst. This should be someone with patience, persistence, and experience. This section addresses 134 High Availability on the AS/400 System: A System Manager’s Guide some of the concerns of programming in a distributed relational database environment. Note: Using the ideas and concepts discussed in this chapter, and deploying them into any distributed application model, does not change the amount of work for the programmer. Application checkpointing is not concerned with where the data is. Rather, it is concerned with where the application runs. If you use DRDA as the basis of the distributed database, the applications reside with the data on all systems. DRDA submits a Unit of Work (UoW) that executes a sequence of database requests on the remote system. In other words, it calls a program and returns the information. DRDA uses a two-phase commit for the database. This means that the data is protected from failure over the communications line, as well as system or job failures. 
If the program on the remote system performs proper application checkpointing when the remote system, remote application, or communications, fails during the processing of UoW, the restart picks up from where it left off. Adding a “from location” field and “to location” field in the checkpoint-tracking file allows reports information that better defines the locations of the jobs running. It also helps isolate communication fault issues with the application modules that start and stop communications. It is recommended that application checkpointing in a DRDA environment be setup in a modular fashion. A recovery module that controls the reads and writes to the checkpoint-tracking file make analysis and recovery easier. Take special care to ensure that the database designers include the checkpoint-tracking file in the original database design. This enables the recovery module to treat each existence of the tracking file as an independent log for that local system. A future release of DB2/400 UDB Extended Enterprise Edition (EEE) will include scalability for Very Large Database (VLDB) support. Like DRDA, the VLDB database is distributed over different systems. Unlike DRDA, the access to the data is transparent to the program. No special communication modules or requester jobs need to be created and maintained. Since the environment looks and feels like a local database, the application checkpointing logic must treat it like a local database running in the multiple application mode. For more details on the workings of DB2/400 and DRDA, refer to DB2/400 Advanced Database Functions, SG24-4249. 12.2.2 Distributed database and DDM You can use DDM files to access data files on different systems as a method of having a distributed database. When the DDM file is created, a “shell” of the physical file resides on the local system. The shell is a pointer to the data file on the remote system. The program that reads from, and writes to, this file does not know the data is located on a different system. Since DDM files are transparent to the application, approach application checkpointing logic as discussed in 11.4.1, “Single application” on page 129. Basic CL program model 135 12.3 Interactive jobs and user recovery The information and logic necessary to recover from a system or application failure is no different between interactive jobs and batch jobs. There is a difference in how much user recovery is needed. There are three basic parts to every job: • Data • Programs • Users Use commitment control and journaling to address data recovery. The file layout for the checkpoint-tracking file described previously has a space reserved for the user name. This user name field can be used to inform the user that the job was abnormally ended and that a restart or recovery process will run. If this were an interactive job with screen information, the chance of the user getting back to where they left off is not high. To correct this deficit, the recovery process tracks more information to place the user as close to the point they left off as possible. The addition of current screen information, cursor positions, array information, table pointers, and variables can be stored in the tracking file along with all the other information. The AS/400 system stores all of the information it needs to run the application accessible by the programmer. The programmer simply needs to know where to find it. 
The Information Data Structure (INFDS) in RPG contains most, if not all, of the information required to get the user back to the screen where they left off. Use the Retrieve Job Attributes (RTVJOBA) command to retrieve job attributes within the CL of the program. Retrieve system values pertinent to the job with the Retrieve System Value (RTVSYSVAL) command. Internal program variables are in the control of the application programmer. Recovery logic within the application can retrieve the screen and cursor positions, the run attributes, and system values and write them to the tracking file along with key array, table, and variable information. In the event of a failure, the recovery logic in the program determines the screen the user was on, what files were open, and what the variable, array, and key values were, and even places the cursor back at the last used position.

12.4 Batch jobs and user recovery and special considerations

Unlike interactive jobs, batch jobs have no user interface that needs to be tracked and recovered. Since the user interface is not a concern, the checkpoint tracking process defined in 11.4.1, "Single application" on page 129, and 12.2.1, "Distributed relational database" on page 133, should suffice. In general, this is true. This section describes additional vital information for when the recovery environment includes high availability providers' software. These considerations include:
• Job queue information for the batch jobs cannot be mirrored: This means that if you submit multiple related jobs to a single-threaded batch queue, and the system fails before all those jobs are completed, restarting may not help.
• Determining what jobs have been completed and what jobs still need to run: If you have created a checkpointing methodology with the points described previously, you have a tracking record of the job that was running at the time of the failure. Using this information, restart that job and then manually submit the remaining jobs to batch. If this was a day-end process, determining the jobs that still need to be run should not be complicated. If it was a month-end process, the work to restart all the jobs consumes more time, but it can be achieved with few or no errors. If this was a year-end process, the work to restart all the necessary jobs in the correct order without missing vital information can be very time consuming.

A simple solution is to store tracking information for batch jobs in the application checkpoint-tracking file. The added work in the recovery file is minimal, yet the benefits are substantial. Within the job that submits the batch process, a call to the application checkpointing module with the job name, submitted time, submitted queue, and submitted status is ideal. The application checkpoint module writes this information to the checkpoint-tracking file. When the job runs in batch mode, it changes the status to "Active". Upon completion, it removes the record, or marks it as done, in the tracking file to keep a complete log of submitted jobs. Within any high availability environment, the tracking information is processed almost immediately. In the event of a system failure, the recovery module interrogates the tracking file for submitted jobs that are still in a JOBQ status and automatically resubmits them to the proper job queue in the proper order (starting with the job with an "Active" status). This prevents the need for user intervention, therefore eliminating "user error".
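As a sketch of recording a submitted job in the checkpoint-tracking file, the submitting program can capture the submitter and pass the job details to a tracking module before issuing SBMJOB. The program BATTRK and the job, program, and queue names are hypothetical, used only for illustration:

             PGM
             DCL        VAR(&USER) TYPE(*CHAR) LEN(10)
             RTVJOBA    USER(&USER)

             /* BATTRK (hypothetical) writes a tracking record with    */
             /* the submitted job name, submitting user, job queue,    */
             /* and an initial status of 'JOBQ'; the batch job later   */
             /* changes the status to 'Active' and clears or marks     */
             /* the record when it completes                           */
             CALL       PGM(BATTRK) PARM('MTHEND01' &USER 'QBATCH' 'JOBQ')

             SBMJOB     CMD(CALL PGM(MTHEND1)) JOB(MTHEND01) JOBQ(QBATCH)
             ENDPGM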
12.5 Server jobs The nature of a server job is very robust. To maintain reliability, server jobs should be able to recover from most, if not all, error conditions that can cause normal jobs to fail. Since server jobs run in a batch environment, the recovery process itself is identical to the batch process described in 12.4, “Batch jobs and user recovery and special considerations” on page 135. However, additional considerations for error recovery are necessary for the recovery file. With application checkpointing built into the server job, error conditions can be logged in the tracking file and corrections to either the server job or the client jobs can be made based on what is logged. Using application checkpoints to isolate faults and troubleshoot error conditions is an added advantage to a well-designed recovery process. If the server job fails, connection to the client can still exist. If this open connection is not possible, there may be a way to notify the client to re-send the requested information or Unit-of-Work (UoW). Either way, the server job must track the information in the checkpoint-tracking file. Basic CL program model 137 12.6 Client Server jobs and user recovery Client Server jobs come in many different models, including thin clients, fat clients, and other clients that include attributes of both fat and thin clients. Even though they have a different label, these client server jobs are either a batch job or an interactive job. The environment that the job runs in dictates the type of recovery to perform. Most client server applications rely on the client to contact the server to request information from the server. Thin clients contact the server and pass units of work for the server to perform and report back. Fat clients request data and process the information themselves. “Medium” clients perform various aspects of each method. Recovery for the client server jobs should be mutually exclusive. If the client job fails, the connection to the server can still exist. The client may be able to pick up where it left off. If this open connection is not possible, there may be a way to notify the server to re-send the request for information. Either way, the client job must track the information in the checkpoint-tracking file. If the processing of the client request pertains to a long running process, it may be best to design that particular job as a thin client. With a thin client design, the processing is performed on the server side where application checkpointing tracks and reports the job steps. In this case, recovery on the client includes checking whether the communications is still available. If not, then submit the request again from the beginning. If the processing of the client pertains to critical information, the design should lean towards the fat client model. If the client is a fat client model, the application checkpointing logic described in this book should suffice. Note: The nature of a client server relationship varies greatly. It is worth the time to determine whether the recovery process in a client server environment is necessary prior to writing the recovery steps. Thin clients perform much faster in a restart mode if they are simply started again with absolutely no recovery logic. For example, if a client process makes one request to the server for information, adding recovery logic can double or even triple the amount of time required to make that request. 
12.7 Print job recovery In the standard program model, information is collected, processed, and written. The process of writing the information typically occurs during the end of job (EOJ), after all the data is collected. In this case, adding a checkpoint at the beginning of the EOJ processing restarts the printing functions in the event of a restart. The scenario is described in “Distributed relational database” on page 133. Exceptions to this rule include programs that collect and write “detailed” information as the job runs. Again, 12.2.1, “Distributed relational database” on 138 High Availability on the AS/400 System: A System Manager’s Guide page 133, describes how a job step defined at the proper locations recreates every function within the steps. Even with a detailed and proper running recovery function in place, there is no way to “pick up where you left off” in a print job. The print file itself is closed when the job runs. Rewriting to it is not possible. With proper application checkpoints in place, the print information should not be lost. It is, however, duplicated to some extent. If a print job within that application requires a specific name, that name should be tracked in the checkpoint-tracking file and proper cleanup should be performed prior to the job running again. © Copyright IBM Corp. 2001 139 Part 4. High availability checkpoints Part IV discusses miscellaneous items that are helpful when implementing a high availability solution. Included in this part is information on a Batch Caching solution, a discussion of the management of disk storage, device parity features, and a checklist of items to consider when implementing your high availability solution. 140 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 141 Appendix A. How your system manages auxiliary storage For many businesses, computers have replaced file cabinets. Information critical to a business is stored on disks in one or more computer systems. To protect information assets on your AS/400 system, you need a basic understanding of how it manages disk storage. On the AS/400 system, main memory is referred to as main storage. Disk storage is called auxiliary storage. Disk storage may also be referred to as DASD (direct access storage device). Many other computer systems require you to take responsibility for how information is stored on disks. When you create a new file, you must tell the system where to put the file and how big to make it. You must balance files across different disk units to provide good system performance. If you discover later that a file needs to be larger, you need to copy it to a location on the disk that has enough space for the new larger file. You may need to move other files to maintain system performance. The AS/400 system is responsible for managing the information in auxiliary storage. When you create a file, you estimate how many records it should have. The system places the file in the location most conducive for good performance. In fact, it may spread the data in the file across multiple disk units. When you add more records to the file, the system assigns additional space on one or more disk units. The system uses a function that is called virtual storage to create a logical picture of how the data looks. This logical picture is similar to how data is perceived. 
In virtual storage, all of the records that are in a file are together (contiguous), even though they may be physically spread across multiple disk units in auxiliary storage. The virtual storage function also keeps track of where the most current copy of any piece of information is (whether it is in main storage or in auxiliary storage). Single-level storage is a unique architecture of the AS/400 system that allows main storage, auxiliary storage, and virtual storage to work together accurately and efficiently. With single-level storage, programs and system users request data by name rather than by where the data is located. Disk storage architecture and management tools are further described in AS/400 Disk Storage Topics and Tools, SG24-5693. A.1 How disks are configured The AS/400 system uses several electronic components to manage the transfer of data from a disk to main storage. Data must be in main storage before it can be used by a program. 142 High Availability on the AS/400 System: A System Manager’s Guide Figure 38. Components used for data transfer Figure 38 shows the components that are used for data transfer. The components include: • Bus: The bus is the main communications channel for input and output data transfer. A system may have one or more buses. • I/O processor: The input/output processor (IOP) is attached to the bus. The IOP is used to transfer information between main storage and specific groups of controllers. Some IOPs are dedicated to specific types of controllers, such as disk controllers. Other IOPs can attach more than one type of controller (for example, tape controllers, and disk controllers). • Disk controller: The disk controller attaches to the IOP and handles the information transfer between the IOP and the disk units. Some disk units have built-in controllers. Others have separate controllers. • Disk unit: Disk units are the actual devices that contain the storage units. Hardware is ordered at the disk-unit level and each disk unit has a unique serial number. A.2 Full protection: Single ASP A simple and safe way to manage and protect your auxiliary storage is to perform the following tasks: • Assign all disk units to a single auxiliary storage pool (the system ASP). • Use device parity protection for all disk units that have the hardware capability. • Use mirrored protection for the remaining disk units on the system. With this method, your system continues to run even if a single disk unit fails. When the disk is replaced, the system can reconstruct the information so that no data is lost. The system may also continue to run when a disk-related hardware component fails. Whether your system continues to run depends on your B U S Processor Main Storage Input/Output Processor (IOP) Disk Controller Disk Unit Storage Unit Input/Output Processor (IOP) Disk Controller Disk Unit Storage Unit How your system manages auxiliary storage 143 configuration. For example, the system continues to run if an IOP fails and all of the attached disk units have mirrored pairs that are attached to a different IOP. When you use a combination of mirrored protection and device parity protection to fully protect your system, you increase your disk capacity requirements. Device parity protection requires up to 25% of the space on your disk units to store parity information. Mirrored protection doubles the disk requirement for all disks that do not have the device parity protection capability. Figure 39 shows an example of a system with full protection. The system has 21 disk units. 
All of the disk units are assigned to the system ASP. The system assigns unit numbers to each configured disk on the system. Notice that the mirrored pairs share a common unit number. Figure 39. Full protection: Single ASP A.3 Full protection: Multiple ASPs You may want to divide your disk units into several auxiliary storage pools. Sometimes, your overall system performance may improve by having user ASPs. For example, you can isolate journal receivers in a user ASP. Or, you can place history files or documents that seldom change in a user ASP that has lower performance disk units. You can fully protect a system with multiple ASPs by performing the following tasks: • Use device parity protection for all disk units that have the hardware capability. • Set up mirrored protection for every ASP on the system. You can set up mirrored protection even for an ASP that has only disk units with device parity System ASP Load Source 6602 Unit 1 Unit 1 Unit 3 Unit 3 Unit 2 Unit 2 Unit 4 Unit 4 Unit 1 6602 9336 9336 6602 6602 9336 9336 Unit 5 Unit 6 Unit 7 Unit 8 9337 Unit 13 Unit 14 Unit 16 9337 6603 Unit 11 Unit 10 Unit 15 Unit 17 Unit 12 Unit 9 Legend Unit n - Unit n = Mirrored pair Unit n Unit protected by device parity protection 144 High Availability on the AS/400 System: A System Manager’s Guide protection. That way, if you add units that do not have device parity protection in the future, those units are automatically mirrored. Note: You must add new units in pairs of units with equal capacity. Before configuring this level of protection, be sure that you know how to assign disk units to ASPs. Figure 40 shows an example of two ASPs. Both ASPs have device parity protection and mirrored protection defined. Currently, ASP 2 has no mirrored units. Figure 40. Full protection: Multiple ASPs A.4 Partial protection: Multiple ASPs Sometimes, full protection (using a combination of device parity protection and mirrored protection) may be too costly. If this happens, you need to develop a strategy to protect the critical information on your system. Your objectives should be to minimize the loss of data and to reduce the amount of time in which critical applications are not available. Your strategy should involve dividing your system into user ASPs and protecting only certain ASPs. Note, however, that if the system is not fully protected and an unprotected disk unit fails, serious problems can occur. The entire system can become unusable, end abnormally, require a long recovery, and data in the ASP that contains the failed unit must be restored. Before configuring this level of protection, be sure that you know how to assign disk units to ASPs. System ASP Load Source 6602 Unit 1 Unit 1 Unit 3 Unit 3 Unit 2 Unit 2 Unit 4 Unit 4 Unit 1 6602 9336 9336 6602 6602 9336 9336 Unit 5 Unit 6 Unit 7 Unit 8 9337 Unit 13 Unit 14 Unit 16 6603 Unit 10 Unit 11 9337 Unit 15 Unit 17 Unit 12 Unit 9 ASP 2 Legend Unit n - Unit n = Mirrored pair Unit n Unit protected by device parity protection How your system manages auxiliary storage 145 The following list provides suggestions for developing your strategy: • If you protect the system ASP with a combination of mirrored protection and device parity protection, you can reduce or eliminate recovery time. The system ASP, and particularly the load source unit, contain information that is critical to keeping your system operational. For example, the system ASP has security information, configuration information, and addresses for all the libraries on the system. 
• Think about how you can recover file information. If you have online applications, and your files change constantly, consider using journaling and placing journal receivers in a protected user ASP. • Think about what information does not need protection. This is usually information that changes infrequently. For example, history files may need to be online for reference, but the data in the history files may not change except at the end of the month. You could place those files in a separate user ASP that does not have any disk protection. If a failure occurs, the system becomes unusable, but the files can be restored without any loss of data. The same may be true for documents. • Think about other information that may not need disk protection. For example, your application programs may be in a separate library from the application data. It is likely the case that the programs change infrequently. The program libraries could be placed in a user ASP that is not protected. If a failure occurs, the system becomes unusable, but the programs can be restored. Two simple guidelines can summarize the previous list: 1. To reduce recovery time, protect the system ASP. 2. To reduce loss of data, make conscious decisions about which libraries must be protected. Figure 41 on page 146 shows an example of three ASPs. ASP 1 (system ASP) and ASP 3 have device parity protection and mirrored protection defined. Currently, ASP 3 has no mirrored units and ASP 2 has no disk protection. In this example, ASP 2 could be used for history files, reference documents, or program libraries. ASP 3 could be used for journal receivers and save files. 146 High Availability on the AS/400 System: A System Manager’s Guide Figure 41. Different protection for multiple ASPs System ASP Load Source 6602 Unit 1 Unit 1 Unit 3 Unit 4 Unit 2 Unit 2 Unit 12 Unit 12 Unit 1 6602 9336 9336 6602 6602 9336 9336 Unit 5 Unit 6 Unit 7 Unit 8 9337 Unit 13 Unit 14 Unit 16 6603 Unit 10 Unit 11 9337 Unit 15 Unit 17 Unit 12 Unit 9 ASP 2 ASP 3 Legend Unit n - Unit n = Mirrored pair Unit n Unit protected by device parity protection © Copyright IBM Corp. 2001 147 Appendix B. Planning for device parity protection If you intend to have a system with data loss protection and concurrent maintenance repair, plan to use one of the following configurations: • Mirrored protection and device parity protection to protect the system ASP. • Mirrored protection for the system ASP and device parity protection for user ASPs. • Mirrored protection and device parity protection to protect the system ASP and user ASPs. Note: You can use device parity protection with disk array subsystems as well as with input-output processors (IOP). For each device parity protection set, the space that is used for parity information is equivalent to one disk unit. The minimum number of disk units in a subsystem with device parity protection is four. The maximum number of disk units in a subsystem with device parity protection is seven, eight, or 16, depending on the type. A subsystem with 16 disk units attached has two device parity protection sets and the equivalent of two disk units dedicated to parity information. For more information about device parity protection, see Backup and Recovery, SC41-5304. B.1 Mirrored protection and device parity protection to protect the system ASP This section illustrates an example of a system with a single auxiliary storage pool (ASP). The ASP has both mirrored protection and device parity protection. 
When one of the disk units with device parity protection fails, the system continues to run. The failed unit can be repaired concurrently. If one of the mirrored disk units fails, the system continues to run using the operational unit of the mirrored pair. Figure 42 on page 148 shows an example of mirrored protection and device parity protection used in the system ASP. 148 High Availability on the AS/400 System: A System Manager’s Guide Figure 42. Mirrored protection and device parity protection to protect the system ASP B.2 Mirrored protection in the system ASP and device parity protection in the user ASPs You should consider device parity protection if you have mirrored protection in the system ASP and are going to create user ASPs. The system can tolerate a failure in one of the disk units in a user ASP. The failure can be repaired while the system continues to run. Figure 43 shows an example of a system ASP with device parity. Legend Unit n - Unit n = Mirrored pair Unit n Unit protected by device parity protection System ASP Load Source 6602 Unit 1 Unit 1 Unit 3 Unit 3 Unit 2 Unit 2 Unit 4 Unit 4 Unit 5 Unit 1 Unit 6 Unit 7 Unit 8 6602 9336 9336 6602 6602 9336 9336 9337 Unit 13 Unit 14 Unit 15 Unit 16 Unit 17 6603 Unit 9 Unit 10 Unit 11 Unit 12 9337 Planning for device parity protection 149 Figure 43. Mirrored protection in the system ASP and device parity protection in the user ASPs B.2.1 Mirrored protection and device parity protection in all ASPs If you have all ASPs protected with mirrored protection, to add units to the existing ASPs, also consider using device parity protection. The system can tolerate a failure in one of the disk units with device parity protection. The failed unit can be repaired while the system continues to run. If a failure occurs on a disk unit that has mirrored protection, the system continues to run using the operational unit of the mirrored pair. Figure 44 on page 150 shows an example of mirrored protection and device parity protection in all ASPs. System ASP Load Source 6602 Unit 1 Unit 1 Unit 3 Unit 3 Unit 2 Unit 2 Unit 4 Unit 4 Unit 5 Unit 1 Unit 6 Unit 7 Unit 8 6602 9336 9336 6602 6602 9336 9336 9337 Unit 9 Unit 10 Unit 11 Unit 12 9337 User ASP 2 Unit 13 Unit 14 Unit 15 Unit 16 Unit 17 6603 User ASP 3 Legend Unit n - Unit n = Mirrored pair Unit n Unit protected by device parity protection 150 High Availability on the AS/400 System: A System Manager’s Guide Figure 44. Mirrored protection and device parity protection in all ASPs B.2.2 Disk controller and the write-assist device The disk controller for the subsystems with device parity protection performs an important function for write operations. The controller keeps a list of all uncommitted data written to the write-assist device that has not been written to the data disk or the parity disk. Use this list during a power failure on the AS/400 system. Write requests and the write-assist device A write request to the subsystems with device parity protection starts three write operations. Data to be written to the disk units is first stored in the buffer in the disk controller. From this buffer, the data is sent to the write-assist device, the data disk, and the parity disk. The following actions occur during a write request: 1. A write operation to the write-assist device: Data is written to the write-assist device sequentially. A write operation to the write-assist device does not require parity calculation. The disk controller (identifier and disk address) adds the header information. 
Trailing information is added for the data before writing to the write-assist device. The header information can be used during a power failure. Normally, the write operations to the write-assist device are completed before the write operations to the disk units. The disk controller sends a completion message to storage management that allows the application to continue. The System ASP Load Source 6602 Unit 1 Unit 1 Unit 3 Unit 3 Unit 2 Unit 2 Unit 4 Unit 4 Unit 5 Unit 1 Unit 6 Unit 7 Unit 8 6602 9336 9336 6602 6602 9336 9336 9337 Unit 9 Unit 9 Unit 10 Unit 10 Unit 11 Unit 12 Unit 13 Unit 14 Unit 15 9337 6603 User ASP 2 Legend Unit n - Unit n = Mirrored pair Unit n Unit protected by device parity protection Planning for device parity protection 151 data that is written on the write-assist device is marked as uncommitted on the disk controller. Note: The write operation to the data disk and the parity disk continues in the background until the data is successfully written and is marked as committed in the disk controller. 2. A write operation to the disk unit. • For data, the operation: – Reads the original data – Writes the new data • For parity data, the operation: – Reads the original parity information – Compares the new data with the original data and the original parity to calculate the new parity – Writes the new parity information The write operation to the data disk usually completes before the write operation to the parity disk. The write operation to the data disk does not have to wait for the parity calculation. The delay between the writing of new data and the writing of the new parity information is known as delayed parity. 3. Data is marked as committed data when it is successfully written to both the data disk unit and the parity disk unit. 4. A completion message is sent to storage management only if the write operation on the write-assist device or the data disk unit has not already sent a message. The performance for this type of write operation depends on disk contention and the time that is needed to calculate the parity information. B.2.3 Mirrored protection: How it works Since mirrored protection is configured by ASP, all ASPs must be mirrored to provide for maximum system availability. If a disk unit fails in an ASP that is not mirrored, the system can’t be used until the disk unit is repaired or replaced. The start mirrored pairing algorithm automatically selects a mirrored configuration to provide the maximum protection at the bus, I/O processor, or controller level for the hardware configuration of the system. When storage units of a mirrored pair are on separate buses, they have maximum independence or protection. Because they do not share any resource at the bus, I/O processor, or controller levels, a failure in one of these hardware components allows the other mirrored unit to continue operating. Any data written to a unit that is mirrored is written to both storage units of the mirrored pair. When data is read from a unit that is mirrored, the read operation can be from either storage unit of the mirrored pair. Because it is transparent to the user, they don’t know from which mirrored unit the data is being read. The user is also not aware of the existence of two physical copies of the data. If one storage unit of a mirrored pair fails, the system suspends mirrored protection to the failed mirrored unit. The system continues to operate using the 152 High Availability on the AS/400 System: A System Manager’s Guide remaining mirrored unit. 
The failing mirrored unit can be physically repaired or replaced. After the failed mirrored unit is repaired or replaced, the system synchronizes the mirrored pair by copying current data from the storage unit that has remained operational to the other storage unit. During synchronization, the mirrored unit to which the information is being copied is in the resuming state. Synchronization does not require a dedicated system and runs concurrently with other jobs on the system. System performance is affected during synchronization. When synchronization is complete, the mirrored unit becomes active. © Copyright IBM Corp. 2001 153 Appendix C. Batch Journal Caching for AS/400 boosts performance In the year 2000, a Programming Request for Price Quotation (PRPQ) offering became available to improve the performance of an AS/400e system when journals are involved. It is known as Batch Journal Caching for AS/400 PRPQ, and the order number is 5799-BJC. It installs and runs correctly on any national language version. C.1 Overview The Batch Journal Caching for AS/400 PRPQ can provide a significant performance improvement for batch environments that use journaling. Benefits include: • It changes the handling of disk writes to achieve the maximum performance for journaled database operations. • By caching journal writes in main memory, it can greatly reduce the impact of journaling on batch run time by eliminating the delay in waiting for each journal entry to be written to disk. This PRPQ is an ideal solution for customers with batch workloads who use journaling as part of a high availability solution to replicate database changes to a backup system. C.2 Benefits of the Batch Journal Caching PRPQ Applications that perform large numbers of database add, update, or delete operations should experience the greatest improvement when this PRPQ is active. Although it is directed primarily toward batch jobs, some interactive applications may also benefit. Applications using commitment control should see less improvement because commitment control already performs some journal caching. With traditional non-cached journaling in a batch environment, each database record added, updated, or deleted by the batch job causes a new journal entry to be constructed in main memory. The batch job then waits for each new journal entry to be written to disk to assure recovery. This results in a large number of disk writes. The Batch Journal Caching PRPQ provides the ability to selectively enable a new variation of journal caching. It changes the handling of disk writes to achieve the maximum performance for journaled database operations. Both the journal entries and the corresponding database records are cached in main memory, thereby delaying writing journal entries to disk until an efficient disk write can be scheduled. This prevents most database operations from being held up while waiting for the synchronous write of journal entries to disk. By more aggressively caching journal writes in main memory, it can: • Greatly reduce the impact of journaling on batch run time by reducing the delay in waiting for each journal entry to be written to disk. 154 High Availability on the AS/400 System: A System Manager’s Guide • Avoid the problems and costs associated with making application changes (such as adding commitment control) to improve batch performance in these environments. 
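The PRPQ accelerates an existing journal environment rather than replacing it. As a point of reference, a minimal conventional journaling setup for a physical file, which batch journal caching can then accelerate, might look as follows. The library HALIB, journal JRNHA, receiver JRNRCV001, and file CUSTMAST are placeholder names; CRTJRNRCV, CRTJRN, and STRJRNPF are standard OS/400 commands, and the PRPQ's own enablement interface is described in its README member (see C.3.3).

/* Sketch: conventional journaling environment that batch journal caching can accelerate */
CRTJRNRCV JRNRCV(HALIB/JRNRCV001) THRESHOLD(100000)
CRTJRN    JRN(HALIB/JRNHA) JRNRCV(HALIB/JRNRCV001) MNGRCV(*SYSTEM) DLTRCV(*YES)
STRJRNPF  FILE(HALIB/CUSTMAST) JRN(HALIB/JRNHA) IMAGES(*BOTH)

IMAGES(*BOTH) captures both before and after record images, which high availability replication products typically require.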
C.2.1 Optimal journal performance
For optimal journal performance, many factors beyond using this PRPQ should be considered, including:
• The number and type of disk units and disk controllers
• Amount of write cache
• Placement of journal receivers in user auxiliary storage pools (ASPs)
• Application changes
C.3 Installation considerations
The prerequisites and limitations of the Batch Journal Caching PRPQ are identified here.
C.3.1 Prerequisites
This PRPQ runs under the latest levels of the operating system. Install either:
• OS/400 V4R5 with PTFs MF24863, MF24866, MF24870, and SF63192
• OS/400 V4R4 with PTFs MF24293, MF24626, and SF63186
C.3.2 Limitations
This batch cache type of journaling differs from traditional journaling and can affect the recoverability of the associated database files. Because journal entries are temporarily cached in main memory, a few recent database changes that are still cached and not yet written to disk can be lost in the rare event of a severe system failure where the contents of main memory are not preserved. This type of journaling may not be suitable for interactive applications where single-system recovery is the primary reason for using journaling. Also, it may not be suitable where it is unacceptable to lose even one recent database change in the rare event of a system failure in which the contents of main memory are not preserved. Batch journal caching is primarily intended for situations where journaling is used to enable database replication to a second system (for example, for high availability or Business Intelligence applications) under a heavy workload such as batch processing. This also applies to heavy interactive work with applications that do not employ commitment control. This function can also be selectively enabled to optimize performance when running nightly batch workloads. It can then be disabled each morning to optimize recoverability when running interactive applications. This can speed up some nightly batch jobs without sacrificing robust recovery for daytime interactive work.
C.3.3 For more information
After installing the PRPQ software product, read the README member of the README file in the QBJC library for instructions on this product. Use DSPPFM FILE(QBJC/README) MBR(README) to display the file. For further information, contact your IBM marketing representative. Appendix D.
Sample program to calculate journal size requirement D.1 ESTJRNSIZ CL program esj1: pgm dclf estjrnsiz/lastipl call qwccrtec /* retrieve last IPL information */ CPYSPLF FILE(QPSRVDMP) TOFILE(ESTJRNSIZ/LASTIPL) DLTSPLF FILE(QPSRVDMP) loop: rcvf monmsg msgid(CPF0864) exec(goto cmdlbl(endit)) if (%sst(&lastipl 103 17) *ne ’ ’) + then(chgdtaara lastipl %sst(&lastipl 103 17)) goto loop endit: call pfildtl endpgm D.2 NJPFILS RPGLE program FQPRINT O F 132 PRINTER OFLIND(*INOF) USROPN FPFILRPT O E Printer OFLIND(*IN90) D* Include error code parameter D/COPY QSYSINC/QRPGLESRC,QUSEC Dlstlib s 10A Dlstfil s 10A Dipltim s z Dtimipl ds D ccipl 2A D yyipl 2A D sep1 1A INZ(’-’) D mmipl 2A D sep2 1A INZ(’-’) D ddipl 2A D sep3 1A INZ(’-’) D hhipl 2A D sep4 1A INZ(’.’) D nnipl 2A D sep5 1A INZ(’.’) D ssipl 2A D sep6 1A INZ(’.’) D msipl 6A INZ(’000000’) Dlastipl ds D iplmo 1 2A D iplda 4 5A D iplyr 7 8A D iplhr 10 11A D iplmi 13 14A D iplse 16 17A Dtimestamp s z INZ(*SYS) Dqmbrovr s 1A Dqmbrfmt s 8A Dqmbrdovr s 9B 0 INZ(4096) Dnummbrs s 4B 0 Dnumobjs s 4B 0 Dobjtolist s 20 INZ(’*ALL *ALLUSR ’) DFIRST_ERR S 1 INZ(’0’) Dobj_count s 9 0 INZ(1) Dmbr_count s 9 0 INZ(1) Dobjspcnam s 20A INZ(’OBJECTS ESTJRNSIZ ’) Dmbrspcnam s 20A INZ(’MEMBERS ESTJRNSIZ ’) Dext_attr s 10A Dspc_name s 20A Dspc_size s 9B 0 INZ(1) Dspc_init s 1 INZ(x’00’) Dobjlstptr s * Dmbrlstptr s * Dobjspcptr s * Dmbrspcptr s * DARR s 1 BASED(objlstptr) DIM(32767) DRCVVAR s 8 DRCVVARSIZ s 9B 0 INZ(8) DARRm s 1 BASED(mbrlstptr) DIM(32767) DRCVVARm s 8 156 High Availability on the AS/400 System: A System Manager’s Guide DRCVVARSIZm s 9B 0 INZ(8) D* Common list header DOUSH0100 DS BASED(OBJSPCPTR) D OUSUA 1 64 User area D OUSSGH 65 68B 0 Size generic header D OUSSRL 69 72 Structure rel level D OUSFN 73 80 Format name D OUSAU 81 90 API used D OUSDTC 91 103 Date time created D OUSIS 104 104 Information status D OUSSUS 105 108B 0 Size user space D OUSOIP 109 112B 0 Offset input parm D OUSSIP 113 116B 0 Size input parm D OUSOHS 117 120B 0 Offset header secti D OUSSHS 121 124B 0 Size header section D OUSOLD 125 128B 0 Offset list data D OUSSLD 129 132B 0 Size list data D OUSNBRLE 133 136B 0 Number list entries D OUSSEE 137 140B 0 Size each entry D OUSSIDLE 141 144B 0 CCSID list ent D QUSCID 145 146 Country ID D OUSLID 147 149 Language ID D OUSSLI 150 150 Subset list indicat D OUSERVED00 151 192 Reserved D* Common list header DMUSH0100 DS BASED(MBRSPCPTR) D MUSUA 1 64 User area D MUSSGH 65 68B 0 Size generic header D MUSSRL 69 72 Structure rel level D MUSFN 73 80 Format name D MUSAU 81 90 API used D MUSDTC 91 103 Date time created D MUSIS 104 104 Information status D MUSSUS 105 108B 0 Size user space D MUSOIP 109 112B 0 Offset input parm D MUSSIP 113 116B 0 Size input parm D MUSOHS 117 120B 0 Offset header secti D MUSSHS 121 124B 0 Size header section D MUSOLD 125 128B 0 Offset list data D MUSSLD 129 132B 0 Size list data D MUSNBRLE 133 136B 0 Number list entries D MUSSEE 137 140B 0 Size each entry D MUSSIDLE 141 144B 0 CCSID list ent D MUSCID 145 146 Country ID D MUSLID 147 149 Language ID D MUSSLI 150 150 Subset list indicat D MUSERVED00 151 192 Reserved D* Structure for OBJL0200 DQUSL020002 DS BASED(objlstptr) D QUSOBJNU00 1 10 Object name used D QUSOLNU00 11 20 Object lib name use D QUSOBJTU00 21 30 Object type used D QUSIS01 31 31 Information status D QUSEOA 32 41 Extended object attr D QUSTD06 42 91 Text description D QUSUDA 92 101 User defined attr D QUSERVED22 102 108 Reserved D* File Definition Template (FDT) Header D* This section is always located 
at the beginning of the returned data. DQDBQ25 DS 4096 Header info D QDBFYRET 1 4B 0 Bytes returned D QDBFYAVL 5 8B 0 Bytes available D*QDBFHFLG 2 D QDBBITS27 9 10 Attribute bytes D QDBBITS1 9 9 D* QDBRSV100 2 BITS D* QDBFHFPL00 1 BIT D* QDBRSV200 1 BIT D* QDBFHFSU00 1 BIT D* QDBRSV300 1 BIT D* QDBFHFKY00 1 BIT D* QDBRSV400 1 BIT D* QDBFHFLC00 1 BIT D* QDBFKFSO00 1 BIT D* QDBRSV500 1 BIT D* QDBFHSHR00 1 BIT D* QDBRSV600 2 BITS D* QDBFIGCD00 1 BIT Sample program to calculate journal size requirement 157 D* QDBFIGCL00 1 BIT D QDBRSV7 11 14 reserved D QDBLBNUM 15 16B 0 # data members D*QDBFKDAT 14 D QDBFKNUM00 17 18B 0 D QDBFKMXL00 19 20B 0 D* QDBFKFLG00 1 D QDBBITS28 21 21 D* QDBRSV802 1 BIT D* QDBFKFCS02 1 BIT D* QDBRSV902 4 BITS D* QDBFKFRC02 1 BIT D* QDBFKFLT02 1 BIT D QDBFKFDM00 22 22 D QDBRSV1000 23 30 keyed seq ap D QDBFHAUT 31 40 public aut D QDBFHUPL 41 41 pref storage unit D QDBFHMXM 42 43B 0 max members D* Maximum Members (MAXMBRS) D QDBFWTFI 44 45B 0 max file wait time D QDBFHFRT 46 47B 0 FRCRATION D QDBHMNUM 48 49B 0 # members D QDBRSV11 50 58 reserved D* Reserved. D QDBFBRWT 59 60B 0 max recd wait time D*QDBQAAF00 1 D QDBBITS29 61 61 add’l attrib flags D* QDBRSV1200 7 BITS D* QDBFPGMD00 1 BIT D QDBMTNUM 62 63B 0 tot # recd fmts D*QDBFHFL2 2 D QDBBITS30 64 65 add’l attrib flags D* QDBFJNAP00 1 BIT D* QDBRSV1300 1 BIT D* QDBFRDCP00 1 BIT D* QDBFWTCP00 1 BIT D* QDBFUPCP00 1 BIT D* QDBFDLCP00 1 BIT D* QDBRSV1400 9 BITS D* QDBFKFND00 1 BIT D QDBFVRM 66 67B 0 1st supported VRM D QDBBITS31 68 69 add’l attrib flags D* QDBFHMCS00 1 BIT D* QDBRSV1500 1 BIT D* QDBFKNLL00 1 BIT D* QDBFNFLD00 1 BIT D* QDBFVFLD00 1 BIT D* QDBFTFLD00 1 BIT D* QDBFGRPH00 1 BIT D* QDBFPKEY00 1 BIT D* QDBFUNQC00 1 BIT D* QDBR11800 2 BITS D* QDBFAPSZ00 1 BIT D* QDBFDISF00 1 BIT D* QDBR11900 3 BITS D QDBFHCRT 70 82 file level indicato D QDBRSV1800 83 84 D QDBFHTXT00 85 134 file text descript D QDBRSV19 135 147 reserved D*QDBFSRC 30 D QDBFSRCF00 148 157 D QDBFSRCM00 158 167 D QDBFSRCL00 168 177 source file fields D* Source File Fields D QDBFKRCV 178 178 access path recover D QDBRSV20 179 201 reserved D QDBFTCID 202 203B 0 CCSID D QDBFASP 204 205 ASP D* X’0000’ = The file is located in the system ASP D* X’0002’-X’0010’ = The user ASP the file is located in. D QDBBITS71 206 206 complex obj flags D* QDBFHUDT00 1 BIT D* QDBFHLOB00 1 BIT D* QDBFHDTL00 1 BIT D* QDBFHUDF00 1 BIT D* QDBFHLON00 1 BIT D* QDBFHLOP00 1 BIT D* QDBFHDLL00 1 BIT 158 High Availability on the AS/400 System: A System Manager’s Guide D* QDBRSV2101 1 BIT D QDBXFNUM 207 208B 0 max # fields D QDBRSV22 209 284 reserved D QDBFODIC 285 288B 0 offset to IDDU/SQL D QDBRSV23 289 302 reserved D QDBFFIGL 303 304B 0 file generic key D QDBFMXRL 305 306I 0 max record len D FMXRL1 305 305A D QDBRSV24 307 314 reserved D QDBFGKCT 315 316B 0 file generic key D field count D QDBFOS 317 320B 0 offset to file scop D array D QDBRSV25 321 328 reserved D QDBFOCS 329 332B 0 offset to alternate D collating sequence D table D QDBRSV26 333 336 reserved D QDBFPACT 337 338 access path type D QDBFHRLS 339 344 file version/releas D QDBRSV27 345 364 reserved D QDBPFOF 365 368B 0 offset to pf speciD fic attrib section D QDBLFOF 369 372B 0 offset to LF speciD fic attrib section D QDBBITS58 373 373 D* QDBFSSCS02 3 BITS D* QDBR10302 5 BITS D QDBFLANG01 374 376 D QDBFCNTY01 377 378 sort sequence table D QDBFJORN 379 382B 0 offset to jrn D section D* Journal Section, Qdbfjoal. 
D QDBFEVID 383 386B 0 initial # distinct D values an encoded D vector AP allowed D QDBRSV28 387 400 reserved D*The FDT header ends here. D*Journal Section D*This section can be located with the offset Qdbfjorn, which is located in the FDT heade DQDBQ40 DS jrn section D QDBFOJRN 1 10 jrn nam D QDBFOLIB 11 20 jrn lib nam D*QDBFOJPT 1 D QDBBITS41 21 21 jrn options flags D* QDBR10600 1 BIT D* QDBFJBIM00 1 BIT D* QDBFJAIM00 1 BIT D* QDBR10700 1 BIT D* QDBFJOMT00 1 BIT D* QDBR10800 3 BITS D QDBFJACT 22 22 jrn options D* ’0’ = The file is not being journaled D* ’1’ = The file is being journaled D QDBFLJRN 23 35 last jrn-ing date D QDBR105 36 64 reserved D* Structures for QDBRTVFD D* Input structure for QDBRTVFD API header section DQDBRIP DS Qdb Rfd Input Parms D*QDBRV 1 1 varying length D QDBLORV 2 5B 0 Len. o rcvr var D QDBRFAL 6 25 Ret’d file & lib D QDBFN00 26 33 Format name D QDBFALN 34 53 File & lib name D QDBRFN00 54 63 Recd fmt name D QDBFILOF 64 64 File override flag D QDBYSTEM 65 74 System D QDBFT 75 84 Format type D*QDBEC 85 85 varying length D* Retrieve member information structure D*Type Definition for the MBRL0100 format of the userspace in the QUSLMBR API DQUSL010000 DS BASED(mbrlstptr) D QUSMN00 1 10 Member name D*Record structure for QUSRMBRD MBRD0200 format DQUSM0200 DS 4096 D QUSBRTN03 1 4B 0 Bytes Returned D QUSBAVL04 5 8B 0 Bytes Available D QUSDFILN00 9 18 Db File Name Sample program to calculate journal size requirement 159 D QUSDFILL00 19 28 Db File Lib D QUSMN03 29 38 Member Name D QUSFILA01 39 48 File Attr D QUSST01 49 58 Src Type D QUSCD03 59 71 Crt Date D QUSSCD 72 84 Src Change Date D QUSTD04 85 134 Text Desc D QUSSFIL01 135 135 Src File D QUSEFIL 136 136 Ext File D QUSLFIL 137 137 Log File D QUSOS 138 138 Odp Share D QUSERVED12 139 140 Reserved D QUSNBRCR 141 144B 0 Num Cur Rec D QUSNBRDR 145 148B 0 Num Dlt Rec D QUSDSS 149 152B 0 Dat Spc Size D QUSAPS 153 156B 0 Acc Pth Size D QUSNBRDM 157 160B 0 Num Dat Mbr D QUSCD04 161 173 Change Date D QUSSD 174 186 Save Date D QUSRD 187 199 Rest Date D QUSED 200 212 Exp Date D QUSNDU 213 216B 0 Nbr Days Used D QUSDLU 217 223 Date Lst Used D QUSURD 224 230 Use Reset Date D QUSRSV101 231 232 Reserved1 D QUSDSSM 233 236B 0 Data Spc Sz Mlt D QUSAPSM 237 240B 0 Acc Pth Sz Mlt D QUSMTC 241 244B 0 Member Text Ccsid D QUSOAI 245 248B 0 Offset Add Info D QUSLAI 249 252B 0 Length Add Info D QUSNCRU 253 256U 0 Num Cur Rec U D QUSNDRU 257 260U 0 Num Dlt Rec U D QUSRSV203 261 266 Reserved2 D* Record structure for data space activity statistics DQUSQD DS D QUSNBRAO 1 8I 0 Num Act Ops D QUSNBRDO 9 16I 0 Num Deact Ops D QUSNBRIO 17 24I 0 Num Ins Ops D QUSNBRUO 25 32I 0 Num Upd Ops D QUSNBRDO00 33 40I 0 Num Del Ops D QUSNBRRO00 41 48I 0 Num Reset Ops D QUSNBRCO 49 56I 0 Num Cpy Ops D QUSNBRRO01 57 64I 0 Num Reorg Ops D QUSNAPBO 65 72I 0 Num APBld Ops D QUSNBRLO 73 80I 0 Num Lrd Ops D QUSNBRPO 81 88I 0 Num Prd Ops D QUSNBRRK 89 96I 0 Num Rej Ksel D QUSNRNK 97 104I 0 Num Rej NKsel D QUSNRGB 105 112I 0 Num Rej Grp By D QUSNBRIV 113 116U 0 Num Index Val D QUSNBRII 117 120U 0 Num Index Ival D QUSVDS 121 124U 0 Var Data Size D QUSRSV107 125 192 Reserved 1 C* Set things up C EXSR INIT C* Start mainline process C* set pointer to first object C EVAL objlstptr = objspcptr pt to b1 of usrsp C EVAL objlstptr = %addr(arr(OUSOLD + 1)) pt to entry 1 C EVAL numobjs = OUSNBRLE C* process all entries C DO numobjs C EVAL libn = qusolnu00 C EVAL filn = QUSOBJNU00 C IF QUSEOA = ’PF ’ only PF types C EVAL QDBFALN = filn + libn C CALL ’QDBRTVFD’ get full details C 
parm QDBQ25 C parm 4096 QDBLORV C PARM QDBRFAL C parm ’FILD0100’ QDBFN00 C parm QDBFALN C parm ’*FIRST ’ QDBRFN00 C parm ’0’ QDBFILOF C parm ’*LCL ’ QDBYSTEM C parm ’*EXT ’ QDBFT C parm QUSEC C IF QUSBAVL > 0 160 High Availability on the AS/400 System: A System Manager’s Guide C MOVEL ’QDBRTVFD’ APINAM 10 C EXSR APIERR C END C* Have FD info, test for SRC versus DTA C testb ’4’ QDBBITS1 10 11 10 on = DTA C* 11 on = SRC C *in10 ifeq *on is a data file C setoff 10 C* is the file already journaled? C QDBFJORN ifgt 0 C* yes, get jrn info C eval QDBQ40 = %SUBST ( QDBQ25 C : QDBFJORN + 1 C : %SIZE( QDBQ40 )) C eval jrnnam = QDBFOJRN C eval jrnlib = QDBFOLIB C if QDBFJACT = ’0’ C eval jrnact = ’N’ C end C if QDBFJACT = ’1’ C eval jrnact = ’Y’ C end C else C eval jrnnam = *blanks C eval jrnlib = *blanks C eval jrnact = ’ ’ C end C* now get member list C EVAL spc_name = mbrspcnam C CALL ’QUSLMBR’ C parm spc_name C parm ’MBRL0100’ mbr_fmt 8 C parm QDBFALN C parm ’*ALL ’ mbr_nam 10 C parm ’0’ mbr_ovr 1 C parm QUSEC C IF QUSBAVL > 0 Any errors? C MOVEL ’QUSLMBR’ APINAM C EXSR APIERR C END C* resolve pointer C CALL ’QUSPTRUS’ C PARM SPC_NAME C PARM MBRSPCPTR C PARM QUSEC C* Check for errors on QUSPTRUS C QUSBAVL IFGT 0 C MOVEL ’QUSPTRUS’ APINAM 10 C EXSR APIERR C END C EVAL mbrlstptr = mbrspcptr C EVAL mbrlstptr = %addr(arrm(MUSOLD + 1)) C EVAL nummbrs = MUSNBRLE C DO nummbrs C EVAL mbrn = QUSMN00 C EVAL QDBFALN = FILN + LIBN C CALL ’QUSRMBRD’ C PARM QUSM0200 C PARM QMBRDOVR C parm ’MBRD0200’ QMBRFMT C parm QDBFALN C parm QUSL010000 C parm ’0’ QMBROVR C parm QUSEC C IF QUSBAVL > 0 C MOVEL ’QUSRMBRD’ APINAM 10 C EXSR APIERR C END C eval QUSQD = %SUBST ( QUSM0200 C : QUSOAI + 1 C : QUSLAI ) C* have detail info, now create data C eval rcdlen = qdbfmxrl C* calc seconds since IPL C timestamp subdur ipltim runsec:*S 10 0 C* calc ave ops per sec C QUSNBRIO add QUSNBRUO rcdops C eval rcdops = rcdops + QUSNBRDO Sample program to calculate journal size requirement 161 C QUSNBRRO00 add QUSNBRRO01 mbrops C rcdops IFGT 0 C mbrops ORGT 0 C rcdops div runsec avrrcdops C mbrops div runsec avrmbrops C avrrcdops add avrmbrops jrnsec C rcdlen add 155 jrnsiz C eval jrnsiz = (jrnsiz * jrnsec * 86400) / 1048576 C END C EXSR dodetail C eval avrrcdops = 0 C eval avrmbrops = 0 C eval jrnsec = 0 C eval jrnsiz = 0 C EVAL mbrlstptr = %addr(arrm(MUSSEE + 1)) incr to next ent C END C END C END C EVAL objlstptr = %addr(arr(OUSSEE + 1)) C END C* End mainline process C EXSR DONE C* * * Subroutines follow * * * C* INIT subroutine C INIT BEGSR C OPEN QPRINT C exsr wrthead C z-add 16 qusbprv set err code struct C to omit exceptions C* Does user space exist for OBJECT list? C eval spc_name = objspcnam C eval ext_attr = ’QUSLOBJ ’ C EXSR USRSPC C* Does user space exist for MEMBER list? C eval spc_name = mbrspcnam C eval ext_attr = ’QUSLMBR ’ C EXSR USRSPC C* Retrieve last IPL time derived from previous step C *DTAARA DEFINE LASTIPL LASTIPL 17 C IN LASTIPL C move iplyr yripl 2 0 C move iplyr yyipl C move iplmo mmipl C move iplda ddipl C move iplhr hhipl C move iplmi nnipl C move iplse ssipl C IF yripl > 88 C move ’19’ ccipl C ELSE C move ’20’ ccipl C END C MOVEL TIMIPL IPLTIM C* Fill the user space with object list C eval spc_name = objspcnam C call ’QUSLOBJ’ C parm spc_name C parm ’OBJL0200’ fmtnam 8 C parm objtolist C parm ’*FILE ’ objtype 10 C parm QUSEC C* Any errors? 
C IF QUSBAVL > 0 C MOVEL ’QUSLOBJ’ APINAM C EXSR APIERR C END C* Get a resolved pointer to the user space C CALL ’QUSPTRUS’ C PARM SPC_NAME C PARM OBJSPCPTR C PARM QUSEC C* Check for errors on QUSPTRUS C QUSBAVL IFGT 0 C MOVEL ’QUSPTRUS’ APINAM 10 C EXSR APIERR C END C ENDSR C* 162 High Availability on the AS/400 System: A System Manager’s Guide C* USRSPC subroutine C USRSPC BEGSR C* Verify user space exists C CALL ’QUSROBJD’ C PARM RCVVAR C PARM RCVVARSIZ C PARM ’OBJD0100’ ROBJD_FMT 8 C PARM SPC_NAME C PARM ’*USRSPC’ SPC_TYPE 10 C PARM QUSEC C* Errors on QUSROBJD? C IF QUSBAVL > 0 C IF QUSEI = ’CPF9801’ user space not foun C CALL ’QUSCRTUS’ create the space C PARM SPC_NAME C PARM EXT_ATTR 10 C PARM SPC_SIZE C PARM SPC_INIT C PARM ’*ALL’ SPC_AUT 10 C PARM *BLANKS SPC_TEXT 50 C PARM ’*YES’ SPC_REPLAC 10 C PARM QUSEC C PARM ’*USER’ SPC_DOMAIN 10 C* Errors on QUSCRTUS? C IF QUSBAVL > 0 C MOVEL ’QUSCRTUS’ APINAM 10 C EXSR APIERR C END C* else error occurred accessing the user space C ELSE C MOVEL ’QUSROBJD’ APINAM 10 C EXSR APIERR C END C END C ENDSR C* APIERR subroutine C APIERR BEGSR C* If first error found, then open QPRINT *PRTF C IF NOT %OPEN(QPRINT) C OPEN QPRINT C ENDIF C* Print the error and the API that received the error C EXCEPT BAD_NEWS C EXSR DONE C ENDSR C* DONE subroutine C DONE BEGSR C WRITE TTLSEP C WRITE TTLS C EVAL *INLR = ’1’ C RETURN C ENDSR C* WRTHEAD subroutine C wrthead begsr C WRITE AFT1 C WRITE AFT2 C WRITE AFT3 C WRITE AFT4 C ENDSR C* DODETAIL subroutine C DODETAIL BEGSR C IF *IN90 = *ON C EXSR WRTHEAD C EVAL *IN90 = *OFF C EVAL lstlib = *blanks C END C* C IF LSTLIB <> LIBN C WRITE AFMBR C EVAL LSTLIB = LIBN C GOTO GETOUT C END C IF LSTFIL <> FILN C WRITE AFNOLIB C EVAL LSTFIL = FILN C goto GETOUT C END C WRITE AFNOFIL Sample program to calculate journal size requirement 163 C GETOUT TAG C add jrnsec tjrnsec C add jrnsiz tjrnsiz C ENDSR OQPRINT E BAD_NEWS 1 O ’Failed in API ’ O APINAM O ’with error ’ O QUSEI D.3 Externally described printer file: PFILRPT A*%%*********************************************************************** A*%%TS RD 20010115 123614 SMBAKER REL-V4R4M0 5769-PW1 A*%%FI+10660100000000000000000000000000000000000000000000000000 A*%%FI 0000000000000000000000000000000000000000000000000 A*%%*********************************************************************** A R AFT1 A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SKIPB(001) A SPACEA(001) A 3 A DATE(*YY) A EDTWRD(’0/ / ’) A 86 A ’Journal size estimate’ A 170 A ’Page: ’ A +0 A PAGNBR A*%%*********************************************************************** A*%%SS A*%%CL 001 A*%%*********************************************************************** A R AFT2 A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A 80 A ’---------Record---------’ A 107 A ’--------Member--------’ A 131 A ’-----Journal Estimate-----’ A*%%*********************************************************************** A*%%SS A*%%*********************************************************************** A R AFT3 A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A 42 A ’Record’ A +3 A ’----------Journal----------’ A 82 A ’Number of’ A 97 A ’Average’ A 107 A ’Number of’ 
A 122 A ’Average’ A 131 A ’Average ops’ A 151 A ’MB per’ A*%%*********************************************************************** A*%%SS A*%%*********************************************************************** A R AFT4 A*%%*********************************************************************** 164 High Availability on the AS/400 System: A System Manager’s Guide A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A SPACEA(001) A 3 A ’Library’ A 16 A ’File’ A 29 A ’Member’ A 42 A ’Length’ A +3 A ’Name’ A +8 A ’Library’ A +5 A ’Act’ A 81 A ’Operations’ A 94 A ’per second’ A 107 A ’Operations’ A 119 A ’per second’ A 132 A ’per second’ A 154 A ’day’ A*%%*********************************************************************** A*%%SS A*%%CL 001 A*%%*********************************************************************** A R AFMBR A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A LIBN 10A O 3 A FILN 10A O 16 A MBRN 10A O 29 A RCDLEN 5S 0O 43 A EDTCDE(3) A JRNNAM 10A O +3 A JRNLIB 10A O +2 A JRNACT 1A O +3 A RCDOPS 11S 0O 80 A EDTCDE(3) A AVRRCDOPS 7S 1O 96 A EDTCDE(3) A MBROPS 10S 0O 107 A EDTCDE(3) A AVRMBROPS 7S 1O 121 A EDTCDE(3) A JRNSEC 11S 1O 131 A EDTCDE(3) A JRNSIZ 10S 3O 146 A EDTCDE(3) A*%%*********************************************************************** A*%%SS A*%%SN JRNACT x A*%%*********************************************************************** A R AFNOLIB A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A FILN 10A O 16 A MBRN 10A O 29 A RCDLEN 5S 0O 43 A EDTCDE(Z) A JRNNAM 10A O +3 A JRNLIB 10A O +2 A JRNACT 1A O +3 A RCDOPS 11S 0O 80 A EDTCDE(3) A AVRRCDOPS 7S 1O 96 Sample program to calculate journal size requirement 165 A EDTCDE(3) A MBROPS 10S 0O 107 A EDTCDE(3) A AVRMBROPS 7S 1O 121 A EDTCDE(3) A JRNSEC 11S 1O 131 A EDTCDE(3) A JRNSIZ 10S 3O 146 A EDTCDE(3) A*%%*********************************************************************** A*%%SS A*%%SN JRNACT x A*%%*********************************************************************** A R AFNOFIL A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A MBRN 10A O 29 A RCDLEN 5S 0O 43 A EDTCDE(Z) A JRNNAM 10A O +3 A JRNLIB 10A O +2 A JRNACT 1A O +3 A RCDOPS 11S 0O 80 A EDTCDE(3) A AVRRCDOPS 7S 1O 96 A EDTCDE(3) A MBROPS 10S 0O 107 A EDTCDE(3) A AVRMBROPS 7S 1O 121 A EDTCDE(3) A JRNSEC 11S 1O 131 A EDTCDE(3) A JRNSIZ 10S 3O 146 A EDTCDE(3) A*%%*********************************************************************** A*%%SS A*%%SN JRNACT x A*%%*********************************************************************** A R TTLSEP A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A 129 A ’--------------’ A 146 A ’-----------’ A*%%*********************************************************************** A*%%SS A*%%*********************************************************************** A R TTLS A*%%*********************************************************************** A*%%RI 00000 A*%%*********************************************************************** A SPACEB(001) A TJRNSEC 
11S 1O 131 A EDTCDE(3) A TJRNSIZ 10S 3O 146 A EDTCDE(3) A*%%*********************************************************************** A*%%SS A*%%CP+999CRTPRTF A*%%CP+ FILE(ESTJRNSIZ/PFILRPT) A*%%CP+ DEVTYPE(*SCS) A*%%CP PAGESIZE(*N 192 *N ) A*%%CS+999CRTPRTF A*%%CS+ FILE(QTEMP/QPRDRPT ) A*%%CS+ DEVTYPE(*SCS) A*%%CS PAGESIZE(*N 132 *N ) A*%%*********************************************************************** 166 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 167 Appendix E. Comparing availability options This appendix can help you compare availability options so that you can make decisions about what to protect and how. Journaling, mirroring, and device parity protection are compared by the extent of data loss, recovery time, and performance impact. Recovery time by failure type, and availability options by failure type are identified. E.1 Journaling, mirroring, and device parity protection Table 4 compares several important attributes of journaling physical files, mirrored protection, and device parity protection, including how they affect performance, the extent of loss, and recovery time. Table 4. Journaling physical files, mirrored protection, and device parity attribute comparison E.2 Availability options by time to recover Table 5 shows which availability options can reduce the time needed to recover from a failure. The number of plus (+) signs in a column indicates an option’s impact on the time to recover compared to the other options. An option with more pluses has greater relative impact. Table 5. Availability options by time to recover Attribute Physical file journaling Mirrored protection Device parity protection Data loss after a single disk failure Loss of file data is determined by currency of backup None None Recovery time after a single disk unit failure Potentially many hours None to a few hours None to a few hours Performance impact Minimal to significant Minimal, except some read operations improve Minimal, except restore operations degrade Option DASD System Power loss Program failure Site loss Save operation + + + + + Journal files ++ ++ ++ + Access path protection ++ ++ ++ UPS +++ User ASPs ++ Device parity protection +++ Mirrored protection +++ Dual systems +++ + ++ 168 High Availability on the AS/400 System: A System Manager’s Guide © Copyright IBM Corp. 2001 169 Appendix F. Cost components of a business case Creating an accurate business case for some IT applications is not trivial. This is certainly the case when justifying a high availability solution. Many of the benefits provided by high availability are intangible. To help you create a business case for improving the availability of an application, this appendix provides a list of costs (both for providing availability and those associated with outages) that can be used as part of that business case. Without a detailed study of your business, it is difficult to know whether an outage in your company will have a similar impact. Use this appendix as a guideline to justify the need for further study. F.1 Costs of availability The costs for providing an improvement in high availability are very intangible. The value of availability is much harder to ascertain. One of the first steps is to study the current availability statistics and understand which objective to improve. It is conceivable that a single faulty component, such as a local display, creates an invalid perception that the availability problem is within an application. 
Review sources of information, such as problem management reports, system logs, operator logs, and so on, to identify the outages over the past year. Verify this list with the application owner to ensure that you both have the same perception of the current availability. Next, identify and categorize the root cause for every outage, both planned and unplanned. From this list, identify the items with the highest impact on availability. It is only when you understand what is causing the application to be unavailable at a given moment that you can effectively create a plan to improve that area. Use this information to identify which causes of outages must be addressed to gain the availability improvement you desire. The plan is likely to include some change in processes, and it may also involve hardware and software changes. The following sections provide information on some of the contributing factors for costs. F.1.1 Hardware costs A component failure impact analysis can be done to identify the single point of failure and the components that, if lost, would have a serious impact on the application availability. The only way to provide continuous operations is to have redundancy for all critical components. Take the result of that study and discuss the results with the sponsors. There may be some identified components that are addressed as an expected upgrade process. Size and price the remaining components. At a minimum, consider the console hardware component. 170 High Availability on the AS/400 System: A System Manager’s Guide F.1.2 Software Just as redundant hardware is required to provide for continuous operations, redundant software is also required. Additional licenses of some programs may be required. Does the current application need to be updated to support the high availability solution? Is this the time to add this application to help manage operations and availability? Are additional licenses required? When evaluating costs, consider: • Application change control • Change and problem management • Utilities F.2 Value of availability The value of availability (or the cost of unavailability) is more difficult than arriving at the cost or providing that availability. For example, if you lose a network controller, what is the cost impact of the loss? This depends on many things: • Which applications do the users in the affected area use? If the application developers access a test system, the cost will be lower. • Which shift did the outage occur? What time? • How long did it take to recover? • How do you report availability? F.2.1 Lost business The amount of business lost because of an outage can vary from individual transactions, to the actual loss of a customer. If the amount of business transacted by your application is consistent, compare the average value of business with the amount of business transacted on the day an application outage occurred. It is difficult to tie the loss of a customer to an application outage. If you have a relatively small number of high-value customers, you probably have a close relationship with them and they may make you aware of why they moved their business elsewhere. If you are in the retail industry, it is unlikely that you can produce a definite figure for the number of customers lost due to application unavailability. One possible method, however, is to follow a series of application outages and determine if it was followed by a trend in the amount of repeat business. 
Either of these analysis methods requires working with the application owner to obtain and record the required information. It is likely that a single outage may result in lost transactions, whereas a series of outages may result in lost customers. Therefore, the cost of each outage is also affected by the frequency of outages.
F.3 Image and publicity
As businesses become more computerized and visible on the Internet, electronic links between suppliers and customers are becoming a standard. The availability of applications becomes more visible to those outside the company. With a click of the mouse, customers can go to another source for their goods or services. Recurring application outages become known quickly by customers (existing and potential). This affects your credibility when winning new contracts or renewing existing ones. Poor availability leads to bad publicity, which is very difficult to rectify. To make matters worse, potential customers can be anywhere in the world. You should modify your view of the outage impact to reflect this. It is nearly impossible to assess the cost of poor publicity caused by poor availability. If you have a public relations department, they may be aware of existing negative publicity and should have some idea of the cost to improve the public perception of the company.
F.4 Fines and penalties
Fines and penalties imposed as a result of application unavailability are an objective cost to obtain. In some industries, application availability is monitored by controlling bodies. Companies are expected to maintain a certain level of availability; the airline industry is one example. There are also moves within the financial world to encourage companies to maintain high levels of availability. Extended outages can lead to fines from the governing bodies.
F.5 Staff costs
There may be significant staff costs both during and after an outage. Depending on the application affected, prolonged outages can have a significant financial cost in lost productivity. For example, for an application controlling a factory production line, there could be many people sitting around unable to do their jobs. Overtime may need to be paid to catch up on target productivity. Identify the users of a given application, estimate the impact of an application outage on those people, and then multiply by their average salary per hour to provide a rough idea of the cost in terms of lost productivity. Factor in the overtime rate if lost productivity is made up by overtime to give a rough recovery cost. Add the cost of the IT staff involved in recovering from the outage, and factor in any additional availability hours that may be required by the end users to catch up on their work.
F.6 Impact on business decisions
Depending on the type of application, the cost of an application outage varies considerably. This depends on how timely the information must be and the type of data involved. The loss of a business intelligence application varies from very small to very large. For order processing in a factory working on a just-in-time basis, the impact can be significant. If items are not ordered in time, the whole factory could halt production due to the lack of one key component. Work with the application owners to identify the impact of application unavailability for a given amount of time. Identify both a worst case and a best case scenario, and the cost of each. Then agree on a realistic average cost.
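As a purely illustrative calculation of the staff costs described in F.5, assume a two-hour outage affects 50 users whose average loaded cost is $30 per hour: lost productivity is roughly 50 x 2 x $30 = $3,000. If that work is recovered through overtime at a 1.5 rate, the catch-up adds about 50 x 2 x $45 = $4,500, before counting the IT staff hours spent on the recovery itself. The figures are placeholders; the structure of the calculation is what matters.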
F.7 Source of information
To obtain some data to help in these calculations, check these sources:
• Application: If a business case was created for the application when it was first developed, there may be some cost-benefit estimates in that document that are useful. Factor in the age of the document, however, to determine the applicability of the figures.
• Disaster recovery: If your company has a disaster recovery agreement, it is likely that a business case had to be presented in relation to that expenditure. It should contain estimates of the impact of a system outage. If that business case was built at the application or system level, use those figures directly or extrapolate from them an idea of the financial cost of the loss of a given application. Speak to the developers of the disaster recovery business case to see where they obtained the financial impact information used to build the case.
• Transaction values: Transactions for an average day can be recorded and used to assess the impact of an outage for an application. Record the number of transactions per day for each application. You can then at least identify the change in the number of transactions when an outage occurs. If you can agree on an average value per transaction, this allows you to more readily estimate the financial impact of lost transactions.
• Industry surveys: Data processing firms and consultants have produced studies over the years with example costs for outages. Their reports can include a cost range per hour and an average cost per hour.

F.8 Summary
Every industry and every company has unique costs and requirements. Tangible outage costs are only one part of the equation. You may be fortunate enough to suffer no unplanned outages and yet still require better application availability.

To maintain competitiveness, if your competitors are offering 24 x 7 online service, you may have no choice but to move in that direction. This may be the case even if there is currently not enough business outside normal business hours to justify the change by itself. Additional availability can give you access to a set of customers you currently do not address (for example, people on shift work who do their business in the middle of the night).

Internet shopping introduces a completely new pattern of consumer behavior. Previously, once a customer was in your shop, they were more inclined to wait ten minutes for the system to come back than to get back in the car and drive a few minutes to your competitor. Now shopping is done by clicking a computer mouse. If you have better availability than your competitors, you have the opportunity to pick up customers from competing sites while their systems are down.

Some retail outlets switch to an alternative credit authorization provider when their first provider experiences any interruption, and switch again once the next interruption happens. If your systems have better availability, you have the opportunity to pick up a competitor's business when they experience an outage.

If your customer support systems are available 24 x 7, you have the flexibility to employ fewer call center staff. Once your customers realize they can get service at any time, the trend tends to favor fewer calls during the day and more in the off-peak hours. This allows you to reduce the number of operators required to answer the volume of calls at a given time.
If you can spread the workload associated with serving your customers over a longer period of time, the peak processing power required to service that workload decreases. This can defer the additional expense of upgrading your system.

Due to mergers or business growth, you may be required to support multiple time zones. As the number of time zones you support increases, the number and duration of acceptable outage windows rapidly decreases.

A successful business case includes these considerations, and others that are more specific to your company and circumstances. Most importantly, a successful availability project requires total commitment on the part of management and staff to work together towards a common goal.

Appendix H. End-to-end checklist

This appendix provides a guide to the tasks and considerations needed when planning a new high availability solution. It is not a definitive list, and it looks different for each customer, depending on the particular customer situation and business requirements. Use it as a guide to help you consider the factors influencing the success of a high availability solution.

Note: A service offering is available from IBM to examine and recommend improvements for availability. Contact your IBM marketing representative for further information.

G.1 Business plan
The investment in a high availability solution is considerable. It is critical that this investment is reflected back to the business plan. This eases the justification of the solution and, in the process, demonstrates the value of the information technology solution to the business. Does a valid business plan exist?
• Tactical plan: Do you have a tactical plan?
• Strategic plan: Do you have a strategic plan?

G.1.1 Business operating hours
Define the current operating hours of all parts of the business, no matter how insignificant. Suppliers and customers should also be included in this information. How long can the customer business survive in the event of a systems failure? Describe the survivability of the various parts of the business, and rank their criticality.
• Business operating hours:
– Current normal operating hours
– Operating hours by application/geography
– Planned extensions to operating hours
– Extensions by application/geography
• Business processes: Do business processes exist for the following areas:
– Information systems standards
– Geographic standards
– Centralized or local support
– Language
– Help desk
– Operating systems
– Applications

G.2 High availability project planning
Major points for a successful project plan are:
• Objective of the project: Prepare an accurate and simple definition of the project and its goals.
• Scope of the project: Define the scope of the project. This definition will more than likely be broken down into several major sub-projects.
• Resources: Are there sufficient resources to manage and facilitate the project?
• Sponsorship: Is there an executive management sponsor for the project?
• Communication:
– Does the project have effective communications media?
– Are there communications to the business, sponsors, and customers?
– Is there communication and collaboration among the parties working on the project?
• Cost management:
– Is there a budget for the project?
– Has the budget been established based on the cost of an outage?

G.3 Resources
Soft resources are a critical success factor.
• Current skills: Do you have an accurate skills list? Do you have job specifications for the current skills?
• Required skills: What skills are required for the new operation?
• Critical skills: Do all these skills exist? Do you have a job specification for all critical skills?
• Skills retention:
– Vitality: Are existing resources encouraged to maintain their skills currency? Is there a skills development plan for existing resources?
– Loss from the department: Are you likely to lose resources as a result of the project? Are these critical skills? Could they be lost through lack of interest in the new mode of operation? Could they be lost through learning skills that are valuable outside the business?

G.4 Facilities
Are the facilities listed in this section available?
• Testing the recovery plan:
– Do you have a tested recovery plan?
– Are there any activities planned during the project that could impact it?
– What types of planned and unplanned activities?

G.4.1 Power supply
Ask the following questions about your power supply:
• Reliability:
– Is the local power supply reliable?
– How many outages occurred in the past year?
– How many were weather related?
• Quality: Is the local power supply company committed to maintaining a quality service?
• Power switchover: Do you require power switchover?
• UPS:
– Are there any UPSs in the installation?
– Are they compatible with the particular power supply options?
– Do they have the capacity to meet the new demands?
• Servers:
– Is a UPS required for the servers?
– How many?
– What type (battery/generator), size, duration of backup supply, and location?
– Are the systems aware of a power failure?
– Do programs need to be developed to allow the systems to react to power failures?
• Clients:
– Do clients need a UPS?
– What type (battery/generator), size, duration of backup supply, and location?
• Other equipment:
– Is there other equipment that needs a UPS?
– Consoles, printers, PABX, network devices, machine facilities?
• Generated supply:
– Will you be running a generated backup supply?
– What configuration will be running?
– Local power with generated backup
– Generated power with local backup

G.4.2 Machine rooms
Ask the following questions regarding machine rooms:
• Flooring:
– What is the current standard for your machine room?
– Fully accessible or semi-accessible?
• Fire protection:
– What is the current fire protection method?
– Halon, water, or CO2?
• Air-conditioning:
– Is the machine room air-conditioned?
– At what point will the equipment fail without air-conditioning?
– Will the machine room ever reach this condition without air-conditioning?
– Is there spare capacity in the air-conditioning system?
– Is there redundancy in the current air-conditioning?
• Contracts:
– Do contracts and service levels exist for machine rooms?
– Do these need amending for the new system?

G.4.3 Office building
Ask the following questions regarding your office building:
• Workstations:
– Have the workstations been assessed for their ergonomic function?
– Will the changes to the current system affect your liability for your employees' ergonomics?
• Cabling systems:
– Does the facility have a structured cabling system?
– Will the existing cabling system accommodate the new system?
• PABX:
– Is the current PABX capable of operating in the new environment?
– Does the existing PABX have redundancy?
– Is redundancy in the PABX a requirement?

G.4.4 Multiple sites
Answer the following questions regarding multiple sites.
• Are there multiple local sites (within the same site)?
• Are any of the local sites located across a public space (for example, a highway or open area)?
• Are any remote sites less than 2 km away?
• Are any remote sites greater than 2 km away?
• Are any sites across an international border?
• Do the remote sites have a different PTT?

G.5 Security
In this section, answer the following questions regarding security:
• Policy:
– Does a detailed security policy exist?
– Is there a site security function?
– Does this function set policy for I/T security?
• Physical security:
– Is access protection provided?
– Is there a documented process for dismissing employees?
– Does this process also limit risk to systems prior to dismissal?
• Office space: Is the office space protected?
• Machine room:
– Is the machine room protected?
– Does the protection system provide an audit trail?
• Site:
– Does the site have physical security?
– Will the new system require this to be updated?
• Operating system security:
– Do the systems implement security?
– What level is implemented?
– What level is required?
– Which applications at what level?
• AS/400 security levels:
– What level of AS/400 security is implemented?
– What level is required?
• Personal computer security:
– What forms of PC security have been implemented?
– Do these levels need changing?
• Remote users:
– What type of remote security has been implemented?
– How do remote users access the systems?
– Intranet?
– Internet?
• Network security: What network security is in place?
• Printing:
– Are there secure printing requirements?
– Are the printers producing secure output located in a controlled environment?

G.6 Systems in current use
Document the model, processor feature, interactive feature, main storage, DASD capacity, DASD arms, protection, and ASP.

G.6.1 Hardware inventory
Prepare an inventory list of all the hardware components in your current system, including:
• Processors
• DASD subsystems:
– DASD space available
– Mirroring
– RAID-5
– Third-party solutions
• Tape subsystems:
– Multiple tape subsystems versus a tape library
– Data transfer rate
• IOPs and towers

G.6.2 Redundancy
What level of redundancy are you planning?
• System (cluster)
• Tower
• Bus
• I/O processor
• Controller
• Adapter

G.6.3 LPAR
• Are you planning to use LPAR support for the project?
• Transition support with LPAR?
• Server consolidation support with LPAR?

G.6.4 Backup strategy
Even with a continuously available solution, backups are necessary. Is a new backup strategy planned? Does this plan include the following components?
• Media management
• Media storage
• Automated backup
• Performance
• Save-while-active
• Incremental backups
• Disaster recovery backup

G.6.5 Operating systems version by system in use
• Interactive CPU, batch CPU, DASD utilization, and network utilization (LAN/WAN)
• Are there multiple operating system versions in this plan?
• Do you plan to rationalize these into the same version?
• Is there a plan for bringing systems to the same version?

G.6.6 Operating system maintenance
• Servers: Do you have maintenance contracts for your server operating systems?
• Clients: Do you have maintenance agreements in place for your client software?

G.6.7 Printers
• Do you have a list of your printer inventory?
• Are the printers supported with a maintenance agreement?
• Will the printers support the new environment?
G.7 Applications in current use
Inventory of server-based applications:
• Application provider
• Version
• Number of users
• Database size in MB
• Data transfer requirements

Inventory of client-based applications:
• Application provider
• Version
• Number of users
• Database size in MB
• Data transfer requirements

Application maintenance:
• Support contracts:
– Do support contracts exist for all applications?
– What is the support level?
– Are there response time guarantees?
– Does this meet the new availability requirements?
• Frequency of update:
– Does the application have updates?
– What is the frequency of update?

G.7.1 Application operational hours current
Ask yourself the following questions:
• What are the operational hours of each application?
• Are there any processing peak periods (application processing peaks) that extend these operating hours?

Application processing map: An application processing map shows the time plan of when various applications run on both server and client machines. It shows the interleaving of applications and should identify any periods of very high or very low processing requirement:
• Interactive
• Batch
• During the day
• Day end
• Month end
• Quarter end
• Year end
• Fiscal year end

Application operational hours requirement (5x8, 7x24, or 24x365): What are the assessed operational requirements of each application?

Growth information (applications, database, users): What is the growth information for each application and its associated database and users?

Splitting an application: Is the splitting of the application across multiple machines a consideration?

Appendix H. Special notices

This publication is intended to help customers, business partners, and IBMers who are looking to implement a high availability solution for their AS/400 system. It provides planning checklists and background information to understand the facets of planning for a high availability solution, implementing a high availability solution, and managing the project installation itself. The information in this publication is not intended as the specification of any programming interfaces that are provided by OS/400. See the PUBLICATIONS section of the IBM Programming Announcement for OS/400 for more information about what publications are considered to be product documentation.

References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service.

Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The information about non-IBM ("vendor") products in this manual has been supplied by the vendor and IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.

Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

This document contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples contain the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

Reference to PTF numbers that have not been released through the normal distribution process does not imply general availability. The purpose of including these reference numbers is to alert IBM customers to specific information relative to the implementation of the PTF when it becomes available to each customer according to the normal IBM PTF distribution process.

The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:
e (logo)®, IBM, Redbooks, Redbooks Logo, APPN, AS/400, AS/400e, AT, CT, Current, DataJoiner, DataPropagator, DB2, DRDA, Netfinity, Nways, OS/2, OS/400, RPG/400, SAA, SP, System/36, System/38, XT, 400, Lotus, Lotus Notes, Notes, Tivoli

The following terms are trademarks of other companies:

C-bus is a trademark of Corollary, Inc. in the United States and/or other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries.

PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries. (For a complete list of Intel trademarks, see http://www.intel.com/tradmarx.htm)

UNIX is a registered trademark in the United States and/or other countries licensed exclusively through X/Open Company Limited.

SET and the SET logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.