SAP TAO Initial Configuration - Potential Issues


Just a couple of things I've picked up along the way connectivity-wise with TAO, particularly in the past few months as we copy clients back from Production to Test and upgrade TAO onto Windows 7:

SAP TAO connects to SAP in two ways:  with the Solution Manager system to check and confirm the license, and with the Managed System for test setup and execution.



License setup:

If the Configuration dialog doesn't pop up automatically on first login to TAO, select it through the hyperlink at the top right of the application screen.

Follow the setup through the tabs using the TAO Admin Guide.

For the License tab, select your correct instance of Solution Manager by selecting the correct system and entering the Client, User and Password details. Select "Test SAP Connection" to check that you can connect to the instance of SolMan. If successful, select "Check for License" to allow TAO to verify the license.



Connecting to Managed System:

In the main screen of SAP TAO select Connect from the right hand navigation menu bar.  Select the SAP Managed System, enter the Client, User and Password information, and select "Test SAP Connection". If successful you will receive the message "Can connect to system xxx".


Connection Issues:

You may receive an error message such as "Connection status to back end system is: NotConnected. RFC Error: Connect to message server failed".

The SAP system generates technical users for the RFC connections between SAP TAO, Solution Manager, and the managed system. The role required by the user which is connected to SAP Solution Manager from SAP TAO is SAP_SM_TAO_RFC and the role required for the user connected to the managed system from SAP TAO is SAP_TAO_AGENT_RFC (please note additional roles are required for TBOM creation).

To verify that these roles exist in the target systems, use transaction SE16 and select table name AGR_1251. In the resulting selection search for AGR_NAME SAP_SM_TAO_RFC in Solution Manager, and SAP_TAO_AGENT_RFC in the managed system(s).

SAP TAO and HP QTP Installation Checklist

We've been through a number of changes and upgrades over the past six months or so: TAO itself has been upgraded, HP QC has moved up to HP ALM, and within our business we've upgraded from XP to Windows 7.

Along the way I've needed to uninstall and reinstall TAO and QTP several times, and this checklist has been helpful.




Dialog Boxes

Dialog boxes are such simple things, yet it's so easy to confuse users with them.

The potential for confusing users in this one is so obvious, why wasn't it caught during testing? Or, to give the testers the benefit of the doubt, why hasn't the defect been corrected in Production?


HP ALM – Create template Description in New Defect button only


I want to create a defect template in the Description field in HP ALM so that some of our newer users have reminders of what to enter into the bug report.

I already have important required info included on the defect template in mandatory drop-down lists (severity, priority, system, environment, etc) so the template is more to promote good behaviour in the descriptive field. Furthermore, I only want the template info to show up in the defect when the tester selects the New Defect button from the Defects module, and not when opening an existing defect or when creating a new defect from a test run.

It’s simple enough code and it’s not 100% clean (using ActionNames to ensure it’s the New Defect button that is triggering the New Defect dialog box would be better). Let me know if you have anything cleaner!


'Code Start:
-------------------------------------
Sub Bug_New
On Error Resume Next

'Set template Description on new bug opened from Defect Module
If Bug_Fields("BG_DESCRIPTION").IsNull Then
  sAfterDescription = "<html><body>"
  sAfterDescription = sAfterDescription & "<b>Transaction / Process / WRICEF ID being executed when error occurred:</b><br><br><br><br><br>"
  sAfterDescription = sAfterDescription & "<b>Summary and steps to reproduce the problem:</b><br><br><br><br><br>"
  sAfterDescription = sAfterDescription & "<b>Relevant Data:</b><br>Customer(s), Material(s), Vendor(s), quantities, etc<br><br><br><br>"
  sAfterDescription = sAfterDescription & "<b>Expected Results:</b><br><br><br><br><br>"
  sAfterDescription = sAfterDescription & "<b>Actual Results:</b><br><br><br><br><br>"
  sAfterDescription = sAfterDescription & "<b>Immediate Workaround (if any):</b><br><br><br><br><br>"
  sAfterDescription = sAfterDescription & "<b>Regression / Isolation Impact:</b><br><br><br><br><br>"
  sAfterDescription = sAfterDescription & "</body></html>"
  Bug_Fields("BG_DESCRIPTION").Value = sAfterDescription
End If

On Error GoTo 0
End Sub
---------------------------------------
'End Code
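As a rough sketch of the cleaner ActionName approach mentioned above (untested): set a flag in ActionCanExecute when the New Defect button fires, and only apply the template when the flag is set. The ActionName "UserAction_New_Bug" is an assumption - check the real value on your own system with the MsgBox tip in the following post.

```vb
Dim bNewDefectButton  ' module-level flag

Function ActionCanExecute(ActionName)
On Error Resume Next
  ' Assumed ActionName - verify on your own system
  If ActionName = "UserAction_New_Bug" Then
    bNewDefectButton = True
  End If
  ActionCanExecute = True  ' allow the action to proceed
On Error GoTo 0
End Function

Sub Bug_New
On Error Resume Next
  If bNewDefectButton And Bug_Fields("BG_DESCRIPTION").IsNull Then
    ' ...build sAfterDescription as above and assign it...
  End If
  bNewDefectButton = False  ' reset for the next action
On Error GoTo 0
End Sub
```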

HP ALM – Find the ActionName of the object and action


Looking to find the action that you want to ascribe code to?  Here’s a quick tip to get the correct ActionName of the button press in HP ALM:

In the ALM Script Editor (Tools | Customize | Workflow | Script Editor) copy this function to the Defects module script:

Function ActionCanExecute(ActionName)
On Error Resume Next
  MsgBox "The ActionName is: " & ActionName
  ActionCanExecute = True  ' allow the action to continue
On Error GoTo 0
End Function


Save and exit the Script Editor and return to the main screen, selecting Major Change where prompted.

To test this we’ll go to the Defects module and press the [New Defect...] button. We get two message boxes telling us what the ActionNames are (you’ll need to click through the first box to get the second one):





If you’re on a system being used by other testers, don’t forget to go back into Script Editor and delete or comment out your code. I’d recommend just commenting it out so that you can easily reuse it next time – be sure to add comments to the code so that other Admins know what it is there for.
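Commented out with a note for other Admins, it might look like this:

```vb
' --- Admin diagnostic: displays the ActionName of each user action.
' --- Uncomment temporarily to identify a button's ActionName,
' --- then comment out again before other testers use the project.
'Function ActionCanExecute(ActionName)
'On Error Resume Next
'  MsgBox "The ActionName is: " & ActionName
'  ActionCanExecute = True
'On Error GoTo 0
'End Function
```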


HP ALM - Internet Explorer Protected Mode


A new user received the following error when trying to start HP ALM for the first time. Windows 7, Internet Explorer 8:

Internet Explorer is configured to run in Protected Mode. Deployment is Aborted.
Contact your system administrator. For details, see the Loader log file.



Select Tools | Internet Options | Security to enable or disable Protected Mode:


Integration Testing Handover Sheets

I picked up a tip from a fellow tester a while ago and it's been serving us well in our last couple of deployments - handover sheets (aka runsheets).

All of our integration scenarios are in HP ALM, and given the modular nature of the scenarios we have folders for each activity in the scenario so that testers from the relevant stream can include their tests:



This layout is fine for modularisation, and we can report directly on overall test progress (total planned, executed, pass/fail) from ALM's reporting ability.

However, as we're running multiple test scenarios concurrently it posed two problems: where are we with each scenario, and is there an easier way for each tester in the scenario to follow the document trail, other than going into each preceding test to glean the document numbers from the test results?

Enter the handover sheet. It's a paper sheet that summarises the scenario in business-speak, and the testers note down their material document numbers once they've completed their parts of the scenario, literally being handed over from tester to tester as each part of the scenario is completed.

It's an easy way to keep the trail of the scenario, and it's more visual than the document numbers hidden amongst the tests in ALM.  It's also very easy to simply hand over the finished sheet to anyone who wants to trace the scenario through the test client (auditors, etc).

Retesting vs Regression Testing


Nothing profound here, but I do occasionally get queries from new testers on the difference between retesting and regression testing. For “defect fixes” also read “change request releases” and any other types of changes that go into the test system.

Selected definitions used by standards organisations:
US National Institute of Standards and Technology:  Regression Testing. Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.

IEEE:  Regression Testing. Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.

Oxford Dictionary:  Retesting. Test (someone or something) again.


Retesting is carried out to verify the defect fix(es) by running the same test(s) that failed the first time, with the purpose of verifying that the fix has corrected the defect. Regression testing is carried out to check that the defect fix(es) have not impacted other functionality of the application that was working before the code changes were applied.

Retesting is specific to the defect fixes being released. Regression testing is more generic and may not target the specific area of any defect fix or code change.

Retesting involves executing test cases that were failed earlier, and/or new test cases to specifically test the changed functionality. Regression testing involves executing test cases and/or scenarios that were passed in an earlier build i.e., functionality that was working prior to the transport release.

If the time available to test is tight, retesting usually takes priority over regression testing i.e., verify the known bug fixes first, and then regression test other parts of the application based on a risk assessment of the likelihood of the fixes affecting that existing functionality. Depending on the size of your test team and their specialities, retesting and regression testing can be carried out in parallel.

Although retesting and regression testing have different objectives and priorities, they are both important and both must be considered in every environment that a transport is imported into.


Quality Center - restricting the status workflow

My old project team was very small (a handful of developers and myself as the sole tester). As we were a small team I removed the default defect status workflow for a bit of flexibility in reporting. However, I had one or two developers who persisted in setting the defect status to "Closed" on my behalf, risking that defects would pass through without retesting.

I needed to restrict this again. The requirement isn't just a workflow for going from New to Fixed to Closed, but to ensure that only the tester can set defects to "Closed".

The dev team are included in their own Group, so a quick and dirty way is to create a new Project List with the limited statuses available (excluding Closed) and add to the subroutine in the Script Editor:
Sub WizardBGStatusListCust
    ' Admins and Defects Managers see the full status list;
    ' everyone else gets the reduced list without "Closed"
    If User.IsInGroup("TDAdmin") Or User.IsInGroup("Defects Manager") Then
       Bug_Fields("BG_STATUS").List = Lists("Bug Status")
    Else
       Bug_Fields("BG_STATUS").List = Lists("Bug_Status_Dev")
    End If
End Sub

Now everyone that is not in the TDAdmin or Defects Manager group will see the reduced list only ("Bug_Status_Dev").

However, they will also only see the restricted fields when filtering in the grid view in the Defects module. Not very helpful if a developer wants to filter and view all Closed defects.

A better solution is to restrict the Group status workflow, so that the users can see the Closed status but cannot change a defect to Closed.

In the Customization screen select Groups and then the group that requires changing (Defects Developer in my case). Select Change, then the Defects tab on the resulting pop-up. In the left-hand pane open the Modify Defect tree and highlight the Status field. With the Status field highlighted, the right-hand pane will display the transition rules section.

Select Add, and add in the transition rules for each status change. If a status can be changed to more than one possibility then all transitions will need to be entered separately.

Save and test the changes with a test account to ensure the status transitions are correct.

And don't forget to change the subroutine in the Script Editor back again  ;-)

How much testing is enough?

How many times have we tested something once and thought, "it passed [the test], that's enough".

Some tests do only need one pass initially - if a label is correct the first time it's likely that it will be correct the second time. However, more complicated applications - particularly data driven ones - will need several test passes (I'm using "pass" here in the context of how many times the tester will test the application or part of the application).

The first pass may be a quick test, designed to flush out any obvious defects that prevent further testing and get the development team onto the fixes quickly.

The second pass can build on the first, increasing the volume of data and testing the application more thoroughly. This may be after a new release following from the first test iteration, or it may be a continuation of the first pass.

Subsequent tests build on what we now know about the app. Focus the testing into the risk areas; build the datasets and diversify them so that new bugs can be found; use various techniques (exploratory testing, pairwise testing, etc) to find the chinks in the application's armour.

Quality Center - calculate the Closed date of a defect

Problem: I need to create a graph showing the trend in closure of defects. However, using the Last Modified date field isn't accurate - it records the Last Modified date (surprise!) rather than the date the defect was closed.

Investigation:  The History of a defect contains the change log, and I can see that the date that the defect was set to Closed is recorded somewhere in QC.
A check of the tables using the Excel Report Generator shows that the information required is held in the AUDIT_PROPERTIES table ( SELECT * FROM  AUDIT_PROPERTIES).
A closer look at an individual record ( SELECT * FROM  AUDIT_PROPERTIES WHERE AP_ACTION_ID = '11788') shows that there are three rows created when the record is changed: one for the change in Status from 'Fixed in Next Release' to 'Closed', one for the change in Modified date, and one for the change in BG_CLOSING_DATE Due Date property with a new date value.

It looks like there are two solutions to the problem - one involving a bit of SQL to join the rows for the change in Status to Closed and the change in Modified date; the other (simpler) solution appears to be the BG_CLOSING_DATE field.

To confirm the validity of the two solutions I need to test them.

The Test:  I extract the following two reports:
From the Excel Report Generator:

SELECT
  AU.AU_ENTITY_ID AS Defect_ID,
  AP.AP_FIELD_NAME AS Field_name,
  AP.AP_NEW_VALUE AS Modified
FROM AUDIT_PROPERTIES AP
LEFT JOIN AUDIT_LOG AU ON AP.AP_ACTION_ID = AU.AU_ACTION_ID
WHERE AP.AP_ACTION_ID IN
  (SELECT AP_ACTION_ID FROM AUDIT_PROPERTIES WHERE AP_NEW_VALUE = 'Closed')
AND AP.AP_FIELD_NAME = 'BG_VTS'

From the Defects module
Select Columns, Visible Columns = Defect ID, Status, Closing Date.
However, my fields don't include Closing Date.
Back to Project Customization | Project Entities to verify the name of the field. BG_CLOSING_DATE label has changed to Due Date in my project.
Select Columns, Visible Columns = Defect ID, Status, Due Date.

First Test: do I only have Closed defects with a BG_CLOSING_DATE field completed? Err, no. I have two defects that are not yet Closed that have dates set. Now I remember why the field label has been changed - for the past month or so we've been using the field as the "Expected" closing date for forecasting.

Luckily the field doesn't appear to have been used that much, so for the two errant defects I clear the field, create a user-defined field for Due Date, add the new field to the defect form in the Workflow Script Editor, update the field for the two defects, and rename the BG_CLOSING_DATE field back to Closing Date.
Second Test: do the dates match between the SQL query and the Defects module extract?  A quick vlookup between the two extracts shows some non-matches. An examination of the defect history of the non-matches shows that the reason for the differences is due to defects that have been Closed, subsequently reopened, and then Closed again. The BG_CLOSING_DATE field appears to update with the date that the defect was last set to Closed.

Summary:  The BG_CLOSING_DATE field is useful for the date that the defect was last set to Closed (as distinct from the date the defect was last Modified). This field can be used directly in the Defects grid view.
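For reporting, the field can also be pulled straight from the defects table with the Excel Report Generator (a sketch - BUG is the standard QC defects table, but verify the table and column names against your own project's schema):

```sql
SELECT BG_BUG_ID, BG_STATUS, BG_CLOSING_DATE
FROM BUG
WHERE BG_STATUS = 'Closed'
ORDER BY BG_CLOSING_DATE
```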

Quality Center - add a custom calculated field

Earlier, I added a custom field to my Defects module.

I've since added several forecasting fields for breaking down various expected delivery times, and I now want to add a custom calculation field to add the forecasts together.

Create a new custom field as detailed before.

In the Script Editor add a new sub (or modify the sub if it already exists):

Sub Bug_FieldChange(FieldName)
  ' Call Update Hours sub to sum estimated hours
  Bug_UpdateHours
End Sub


Create a new sub (Bug_UpdateHours) as follows. Replace BG_USER_15 with your new field, and the other custom field IDs with the IDs of the fields you want to sum:

Sub Bug_UpdateHours
On Error Resume Next
'Update Total Hours
Bug_Fields("BG_USER_15").IsReadOnly = False
Bug_Fields("BG_USER_15").Value = (Bug_Fields("BG_USER_05").Value) + (Bug_Fields("BG_USER_04").Value) + _
(Bug_Fields("BG_USER_14").Value) + (Bug_Fields("BG_USER_07").Value) + _
(Bug_Fields("BG_USER_08").Value) + (Bug_Fields("BG_USER_12").Value)
Bug_Fields("BG_USER_15").IsReadOnly = True
On Error GoTo 0
End Sub

You now have a read-only field that updates automatically with the total.
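As an optional refinement (a sketch, untested), Bug_FieldChange can recalculate only when one of the source fields changes, rather than on every field change:

```vb
Sub Bug_FieldChange(FieldName)
  ' Only recalculate the total when a source field changes
  Select Case FieldName
    Case "BG_USER_04", "BG_USER_05", "BG_USER_07", _
         "BG_USER_08", "BG_USER_12", "BG_USER_14"
      Bug_UpdateHours
  End Select
End Sub
```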

Quality Center - adding a custom field

I want to add two fields, firstly a dropdown list of the application area (for easier sorting of defects), and a numerical field for entering estimated development time (for planning).

The dropdown list needs to be created first. Open the Project Customization screen (Tools | Customize) and select Project Lists. Select New List and enter a name for your list (mine is called Defect Area). At the foot of the screen select New Item to add list items. For example, I've added Login Screen, Landing Page, various dialog boxes by name, etc to the list.

Next, select Project Entities. In the hierarchical tree view expand Defect and select User Fields. This will enable the New Field button at the foot of the tree view.

Select New Field and change the Field Label and Field Type accordingly. Field Label is what you will see on the Defects module form; choose Lookup List as Field Type and under the Lookup List area select the list you created earlier. If you didn't create the list first you can still create and add a new list here using the New List button.


The second field I want to create is a numeric field for recording estimated fix times. Create a second User Field and this time set the Field Type to Number.

For both fields, note the Field Name. These will be along the lines of BG_USER_XX (where XX is a number).

Now that the fields are created, they need to be added to the Defect Module workflow if you have one, to ensure they appear on the defects form in the correct order.
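If your project uses the wizard-generated workflow, the new fields can be positioned on the Add Defect form via the generated SetFieldApp helper. A sketch only - the helper's signature (FieldName, Visible, Required, PageNo, ViewOrder), the field IDs and the position values here are assumptions; adjust to your own project's generated code:

```vb
Sub WizardFieldCust_Add
  ' SetFieldApp FieldName, Visible, Required, PageNo, ViewOrder
  SetFieldApp "BG_USER_01", True, False, 0, 10  ' Defect Area dropdown (hypothetical ID)
  SetFieldApp "BG_USER_02", True, False, 0, 11  ' Estimated fix time (hypothetical ID)
End Sub
```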

Quality Center - change the name of the Defects module

Quality Center insists on calling its module "Defects", but we don't just use QC for reporting failures. We can also use it for logging requests for change from a business/user perspective, technical enhancements from a project perspective, etc.  And that's before we get into the discussion about whether it's a bug, failure, or any other term that your project might use.

So wouldn't it be nice if the module's name could be changed to fit the way we work?

Luckily, there's a way to change the name of any module in QC.


The REPLACE_TITLE parameter enables you to change the names of Quality Center modules in one project or across all your projects. The only restriction is that you must be a Site Administrator (and not just a Project Administrator) in order to make the change.

To rename modules across all projects:
Rename one or more modules by entering the parameter value in the following format:

original title [singular];new title [singular];original title [plural];new title [plural];...

For example, if you want to change the name of the Defects module to Bugs, and the Requirements module to Goals, enter the following: Defect;Bug;Defects;Bugs;Requirement;Goal;Requirements;Goals


To rename the Defects module for a single project:
In Site Administration, click the Site Projects tab.
In the Projects list, double-click the project for which you want to rename the Defects module.
Select the DATACONST table.
In the SQL pane, type an SQL INSERT statement to insert a row into the table with the following values:
In the DC_CONST_NAME column, insert the parameter name REPLACE_TITLE.
In the DC_VALUE column, insert a string that defines the new name for the Defects module, in the following format:
original title [singular];new title [singular];original title [plural];new title [plural]

For example, to change the name of the module from Defects to Bugs, type the following SQL statement into the SQL pane:

INSERT into DATACONST VALUES ('REPLACE_TITLE', 'Defect;Bug;Defects;Bugs')

Click the Execute SQL button. The new row is added to the DATACONST table. The Quality Center project displays the new Defects module name as 'Bugs'.