Data Allocation - Micro Focus UFT Tutorial

Welcome to the Data Allocation tutorial for Micro Focus UFT!

In this tutorial we'll cover how to get started with data allocation using a Micro Focus UFT automation framework.

This tutorial is aimed at first-time users as well as those who already have a Micro Focus UFT automation framework.

1. How it works

Introduction

Test Modeller enables manual testers and automation frameworks to find and make data and, importantly, to allocate data to tests.

The core features and benefits are:

  • The utility will hunt for the correct test data in back-end databases, via APIs, via the application (front end), or by parsing files.
  • The data is then allocated to specific tests, eliminating data clashes.
  • If data cannot be found, VIP can run automation to make the missing data, either directly in your testing databases, via an API, or through the front-end and back-end applications.
  • Other tools and pipelines can be called as part of the preparation framework.
  • You create standard queries that can be used by multiple teams; these are then linked to tests.
  • Standard queries can use test-specific parameters.
  • Tests can request a unique lock on data that will be transacted against; no other test can use that data.
  • Any existing matched data will be checked to see if it is still valid, i.e. that it still matches the criteria.
  • Matching data can be identified from multiple back-end databases, allowing for cross-system integration testing.
  • Multiple rows of data can be allocated.

Test Modeller supports the concepts of distinct Projects and Versions within Test Modeller workspaces. Test Data Allocation, however, is separate and independent from these Projects. As a result, project teams can use the same Test Data Catalogues and Test Data Allocation pools. How you organize the catalogues and pools is up to the teams. You may find that many projects are adjacent and share common test data lookups, or test in the same environments; in these cases it makes more sense to share the test data definitions and allocation. Some examples:

  • The HR development team are using automated scripts to test holiday allocation and need to select managers who are not currently on holiday to check that the front-end system works correctly. They will allocate unique Manager IDs and Employee IDs to each automated test case.
  • The Payroll development team need to test their holiday entitlements detection system to identify if extra holiday has been taken and to be able to manually add holidays to employees. They will create manual tests and need to add holidays to employees.

    The Payroll team needs to use the same test criteria as the HR team and the same Data Allocation pool, so they don’t interfere with the HR automation framework.

  • A migration team is testing migration to a new banking application from the old one.

    The accounts are split into different types which require different transformations. There are over 200 developers and testers working in 20 scrum teams. Each scrum team uses a core set of models (e.g. looking up new address formats) plus individual transforms per team. Each scrum team has a different Test Modeller project; however, there is only one development and testing environment.

Both the manual tests and the automated tests need to use the same allocation pool, so teams do not transform the same account using different transforms.

Test Data Automation

Integrating test data allocation into your automation framework enables you to specify the test data you need for your automated tests. This has two core benefits:

  1. Test data specifications can be defined for a vast array of data sources and can then be easily imported into your automation framework to facilitate the creation and/or resolution of data just-in-time before the automation executes.
  2. The test data can be allocated only to specific test cases. This avoids test cases interfering with test data being used by other tests. Unique test data allocation ensures a data row is only consumed by the test case it is defined for. This makes automation much more robust, as you always have the test data you require available, avoiding brittle test cases.

TestDataAutomation.io is a tool that enables manual testers and automation frameworks to find and make data and, importantly, to allocate data to tests.

Test data allocation works in three phases:

  1. Firstly, the test data required must be specified and exposed as a test data criterion within the data catalogue (covered in the previous sections).
  2. Secondly, the automation framework must call the data catalogue API to execute and run the data allocations (this phase finds and makes the right data, then assigns it to each test instance).
  3. Thirdly, each test can retrieve the allocated data values, which can then be used within the automation framework to perform the required actions across user interfaces, APIs, mainframes, ETL environments, and much more. This is built to plug directly into an automation framework, independent of the language or type of automation being executed.

In this section, we’ll cover our Micro Focus UFT library which can be directly embedded into any framework coded in the Micro Focus UFT language. We'll then cover how to use our integration with Test Modeller to generate Micro Focus UFT code with the relevant allocation code snippets embedded. The aim here is to create the right test data, at the right time, in the right places along with your test automation scripts seamlessly within the Test Modeller portal.

2. Prerequisites

Install Prerequisites

Here we'll be installing all the prerequisites you need to get started with test automation in Micro Focus UFT.

  1. Create a free TestModeller.io account
  2. Install VIP Automation Server. Follow these instructions.
  3. Install the VIP TDM package. Email info@curiosity.software to receive your copy.
  4. Download the Data Allocation UFT library from here

Demo Prerequisites

  1. A valid installation of MicroFocus UFT One
  2. If you have an existing UFT automation framework that you want to integrate, it must use the object repository to track UI elements.
  3. You can download the object repository we use in this tutorial from here.

3. Define a connection

Define a Connection

Before you can create a new SQL lookup you need to define what databases it will be connecting to. Here we will cover how to define SQL connections and link them to development and testing environments.

  1. Click on the Connections button in the side bar.

  2. SQL needs to run against specific databases and different database types. Inside Test Modeller you need to create connections that are then linked to each SQL query that will be run to find the test data. Before you begin adding connections and SQL to Modeller, use SQL Server Management Studio (or a similar tool) to run the SQL, check the syntax, and make sure you can connect to the databases being used by the application under test.

    This is SQL Server Management Studio running SQL to find Leads.

  3. Once you are satisfied that everything works and you can connect to the specified server, click 'New Connection' to add a new connection profile.

  4. The entry pop-up is displayed to create a new connection. You'll need to populate this with your database connection details.

    Name: Give the connection a name, each name must be unique.

    DBMS type: Pick from the supported list of RDBMSs.

    Connection String: For the connection string, use a standard Java (JDBC-style) connection string. Example connection strings:

    Oracle: Data Source=xe;user id=iban;password=iban;
    SQL Server: server=BIGONE\SQL2016;user id=sa;password=xxxx;database=AdventureWorks;

    Next, enter the Database Name and Schema; you may leave these empty if you wish. These are the values that get substituted into the SQL that is run. For example, when the tool creates the SQL to find the data it will prepend the database.schema to the table names, e.g. SELECT FIRST_NAME FROM [Database].[Schema].LEADS

    When you connect to a database you can specify the database as part of the connection string and set a default schema to autocomplete the schema and database fields. Oracle, for example, will link you to specific schemas, so there is no need to enter the schema and database name in the connection details.

Tip
Test it works...

You may need to set up a few simple SQL queries to test that the connections and the parameters are working correctly together. Don’t worry; it may take a couple of tries to get it right.

4. Define a criteria

Test Data Catalogue

The test data catalogue is a list of standard test data lookups that can be used by project teams to find and make data during testing. Often, developers and testers create their own lookups using SQL and scripts to find data. The Test Data Catalogue allows users to share best practice across teams: well-developed SQL and test scripts that are controlled centrally.

Define a Criteria

  1. From the side bar select the 'Catalogues' menu.

  2. You’ll be presented with the data catalogues available. Create a new data catalogue by clicking the ‘New Catalogue’ button.

  3. Enter a name for your criteria and click Save. Now, you’ll want to enter your test data catalogue by clicking on the link to open it. Within this view you can see the available find and makes you have created. Click ‘New Test Criteria’ to create a new criteria.

  4. Firstly you'll need to enter a unique name for your test criteria and a description. The execution type we'll be using is SQL, but you can also use a VIP flow to connect to many external sources, which you can learn more about here.

  5. Next click on the criteria tab. Once you’ve finished with the tabs, click on save. If the Test Criteria already exists, you will be prompted to rename it.

  6. The form requires you to split the SQL into components; these will be reassembled by the Data Allocation process when you run it. The components are:

    Table Name: The name of the table. You can have multiple table names here if you wish, and include aliases.
    Note: this value is used in the pool to guarantee uniqueness; see the knowledge base article on Test Data Allocation Pools regarding uniqueness.

    Group By: The GROUP BY clause, if you need to group columns as part of your selection.

    Order By: The ORDER BY clause. Use this if you want results sorted, for example to find the latest TRANSACTION_ID for a recently closed account.

    SQL Criteria: The WHERE clause of the SQL.
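
    As an illustration of how the components are reassembled (using a hypothetical LEADS table and column names, not ones from the tutorial), the allocation process would build something like:

    ```sql
    -- Sketch only: the exact SQL the tool generates may differ.
    SELECT CAST(ID AS VARCHAR(100))   -- key column(s), defined later in the Keys tab
    FROM dbo.LEADS                    -- Table Name component
    WHERE (FIRST_NAME = '%1')         -- SQL Criteria component; %1 is replaced at allocation time
    ORDER BY CREATED_DATE DESC        -- Order By component
    ```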

  7. Before you start filling in the criteria, make sure you have run the SQL against your test database. Here are three examples of SQL:

    • The first shows how to build criteria to find opportunities. Note that we have specified exact values to make sure the query works.

    • The second has been edited to replace the hard-coded values with wildcards. These are identified with a percent sign and a number: %1, %2, %3. They will be replaced at allocation time by the values that your specific test case needs.

      You can see in the second example a technique where it is possible to pass in an empty parameter: OR '%1' = ''. This allows you to call the same query with one, two or three parameters set.

    • The third example combines a number of fields together into a string. Each RDBMS uses different syntax to accomplish this; Oracle, for example, uses ||, while most others use +.

    In all three we return specific columns that hold the data needed by the test case. The test case will then enter these returned values into the UI, API, flat file, etc. These returned columns of data will be defined in the Keys and Parameters tabs.
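
    The optional-parameter technique from the second example can be sketched as follows (the OPPORTUNITIES table and column names are taken from the query shown in the execution log later in this tutorial):

    ```sql
    -- Each parameter may be supplied by the test case or left empty ('').
    -- An empty parameter makes its clause always true, so it is effectively skipped.
    SELECT CAST(ID AS VARCHAR(100))
    FROM dbo.OPPORTUNITIES
    WHERE (Opportunity_Type = '%1' OR '%1' = '')
      AND (Sales_Stage      = '%2' OR '%2' = '')
      AND (Lead_Source      = '%3' OR '%3' = '')
    ```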

  8. Sometimes you may wish to retrieve lots of data from different columns using your query.

    Test Modeller requires that column data be returned as a VARCHAR datatype in one string. Multiple columns must be concatenated together and separated by ---. In the example above three columns are concatenated; these will be identified as LeadName (we’ve merged in the title, first name and last name). There is also a second column, ID, which has been concatenated, separated by ---; this will be split out in Modeller into a second retrieved value, identified as LeadID. These output column names will be entered in the Keys tab, see later.
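
    A sketch of the concatenation described above, in SQL Server syntax (the LEADS table and column names are assumed from the LeadName/LeadID description):

    ```sql
    -- Merge title, first name and last name into LeadName, then append the ID
    -- separated by --- so Modeller can split it out as a second value, LeadID.
    SELECT CAST(TITLE + ' ' + FIRST_NAME + ' ' + LAST_NAME AS VARCHAR(200))
           + '---' + CAST(ID AS VARCHAR(100))
    FROM dbo.LEADS
    ```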

  9. The Expected Results SQL is an additional feature that looks up an expected result from a back-end database; this is often useful for both automated and manual test cases. Include additional columns here to capture extra values beyond what would normally be captured by the existing keys that identify uniqueness. The same WHERE criteria will be used for both the allocated and expected results.

    An example would be:
    The key SQL is CAST(SalesOrderNumber as VARCHAR) + '---' + CAST(SalesOrderLineNumber as VARCHAR)
    The expected result SQL is cast(sum(SalesAmount - TotalProductCost) as varchar)
    The expected result would now be calculated and returned using the same keys as allocated.

  10. The criteria data tab is used to specify the characteristics of the data to be returned.

    Default How Many: Enter how many values you wish to return; this defaults to 1. You can request multiple values; for example, you can request 10 orders for a specific customer. This value can be overridden for specific test cases; the default is applied when you create a new test in an allocation pool.

    Default Unique: This will be the default for the allocated pool test case; it can be overridden. Setting this means that any retrieved values will not be used by any other tests within the allocation pool.

    Use to Make Data if no data found: This allows you to invoke a VIP flow if the allocated test returns no data. This means that if no data exists you can run ANY process you like to create the data. This is especially useful for automated testing, where you would normally have to skip the test if there was no appropriate data.

    Note: the input parameters of the make VIP flow must be in the same order as the input parameters of the find test; see ‘Using VIP to Make Data’.

  11. The Criteria Keys tab allows you to specify which combinations of data uniquely represent this row of data.

    Click on New Key to enter the output columns from the SQL.

    In this example we have added in the names of our retrieved values. You have two options:

    1. Enter the SQL column names exactly as used in your SQL, and the Key Names SQL Override column will be populated for you if it is not set.
    2. Enter textually descriptive names and then enter the SQL into Key Names SQL Override yourself.

    Start with the first method when you begin setting up tests.

  12. The Parameters tab allows you to give logical names to the inputs defined in the SQL Criteria (the %1, %2, %3 values defined earlier). It also allows you to define logical names for your output data; these names will be used inside Test Modeller when building models that need test data. It is much easier to identify a value such as ‘Customer Full Name’ than ‘CstFn’, for example.

  13. Click on the New Parameter button.

    The ‘In’ direction maps the %1, %2 values in the criteria. Give each of these descriptive names.

    The ‘Out’ direction maps the key values in the previous tab, you can give these different logical names.

    In all cases the order is important. Once you have created the parameters you can reorder them by clicking on a row and dragging it up or down.

    Example showing a %1 being converted to the column name EmailOptin, and two output columns being returned by the Key Name SQL Override.

  14. Click on Save once you have filled in all the tabs.

5. Create an allocation pool

Allocation Pools

Allocation pools are where your data criteria and catalogues are executed. Each criteria must be assigned to a specific test within an allocation pool. This allows you to control the data consumed by the automation framework within the environment it is executed in. Tests within an allocation pool can be assigned unique or non-unique data. Unique means the returned value is reserved for this test only, so the value will not be assigned to ANY other test. If you set a value to non-unique, the assigned value could also be given to another non-unique test, or an unused value may be assigned.

Create a pool

  1. From the side bar select the 'Allocation Pools' menu.

  2. A list of existing Allocation Pools will be displayed (if there are any). Click on the pool name to see the allocated tests within the pool.

    Here we'll be creating a new pool by clicking on the 'New Pool' button.

  3. Enter a unique Pool Name and link the Pool to a Test Data Catalogue from the list then click Save.

  4. You have now created an allocation pool. Next we'll create an associated allocated test.

Create a new allocated test

  1. After selecting your allocation pool, click on:

  2. Test Name: Enter a Test Name; the Test Name must be unique within the Suite.

    Suite Name: Choose from existing Suite Names or enter a New One. Test Names are unique within a Suite.

    Test Criteria: Pick from the drop-down list.

    Tags: You can enter Tag names here, use Tab to create as many as you like. The Tags can be used to filter your allocation pools later.

  3. How Many: Enter how many allocated values you would like. This will default to whatever has been set in the definition of the test type.

    Unique: Set this if you want the allocated value to be reserved for this test only. If it is not set, other tests may use the same allocated value.

    Prep environment: You can set up the data allocation to run jobs to prepare the data environment before the automation framework is run. Set this if this is a stand-alone piece of SQL to be run prior to the automation.

  4. The parameter values will come over from the Parameters associated with the Test Type definition. You can fill these in with the required values to lookup the data you need for this specific test.

  5. Click through the tabs filling in the values and then click save.

Execute a pool

  1. To test or run the allocation directly from Test Modeller, rather than as part of an automation framework, you can filter the allocated tests: click on the filter button and choose the tests you want. This is useful for testing.

  2. Click the Run button.

  3. Choose the server that connects to your system-under-test databases and applications. Pick the Data Allocate job and click Execute. This will submit a job to run the allocated tests you have defined for the filters you have chosen. This is a good way to test that your criteria have been defined correctly.

  4. Click on Download Full Log and open it in an editor.

    Look through this log in detail, especially the first time you are running the allocate for a test.

    For example:
    SQL=select top 500 CAST(id as VARCHAR(100)) from dbo.OPPORTUNITIES where ( (Opportunity_Type = 'Existing business' OR 'Existing business' = '') and (Sales_Stage = 'Needs analysis' OR 'Needs analysis' = '') and (Lead_Source = 'Existing customer' OR 'Existing customer' = '')) and (CAST(id as VARCHAR(100))) not in ('2EFCE793-2207-4798-ADB9-004B558D4B9A')
    2019-10-23 16:48:10-varSQlResults.count = 81 varI=3

    If there is a problem with the SQL it will show up here. You can then adjust the definition of the test type and retry the allocation.

  5. Return to the allocation pool and check the results are correct:

  6. If you click on the Results button, it will show you the found values.

  7. If you click on the allocated test you will get further details.

6. Integrate into automation framework

Integrate into Micro Focus UFT

Perform and retrieve data allocations all from your Micro Focus UFT automation framework using the data allocation library.

Concepts

Test data allocation within an automation framework works in three phases.

  1. The test data required must be specified and exposed as a test data criterion within a data catalogue.
  2. The automation framework must call the data catalogue API to execute and run the data allocations (this is what finds and makes the right data then assigns it to each test instance).
  3. Each test can retrieve the allocated data values which can then be used within the automation framework to perform the required actions across user-interfaces, APIs, mainframes, ETL environments, and much more. This is built to plug directly into an automation framework independent of the language or type of automation being executed.

Integrate into UFT Framework

  1. Integrate the Micro Focus UFT library.

    The GitHub project is available here

    You can include this in your UFT project using the UFT library available here

    Associate the Curiosity.qfl function library with your test in File > Settings > Resources >

    If you want this library to be included in all future tests, make sure to check Set As Default

  2. Firstly, you need to configure the curiosity library with your details:

    curiosity.ServerName = "[YOUR_SERVER]"
    curiosity.ServerUrl = "[API_URL]"
    curiosity.ApiKey = "[API_KEY]"
    
  3. Then you need to specify which test groups you want to allocate:

    call curiosity.AddAllocation("pool", "suite", "testname")
    

    Here you can specify the data allocation to connect the test with. This corresponds to three parameters:

    1. poolName – Name of the allocation pool the tests reside in.
    2. suiteName – Name of the test suite the test resides in.
    3. groups – The tests to perform allocation on. These are the allocation tests associated with the current test being tagged. Wildcards can be used to match multiple test names. The groups tag also takes a list so multiple test types can be specified.

    These three parameters must match the data values specified for each matching test case specified within the appropriate allocation pool within the portal.

  4. Run the allocation in the framework

    Before the tests are executed in UFT you'll need to perform the data allocations associated with the tests. This function collects all the AddAllocation calls tagged against the tests about to be executed, then calls the data allocation API to perform the associated executions.

    It is more efficient to perform these operations in bulk, which is why they are collected into one list and sent for allocation in a single curiosity.PerformAllocation() call, as opposed to performing each allocation separately inside each individual script.

    call curiosity.PerformAllocation()
    
  5. Retrieve the allocation results in your test

    Within the test case you can retrieve the results using the curiosity.RetrieveAllocationResult function. Here you can specify the pool, suite name, and test name to retrieve the results for. Again, this must match the specifications given in the associated allocation pool within the portal. The DataAllocationResult class contains the functions to retrieve results by the column names, and column indexes as specified in the initial test criteria.

    Set res = curiosity.RetrieveAllocationResult("pool", "suite", "testname")
    
    Print res.GetValueByColumn("NAME")
    

Example Script:

curiosity.ServerName = "VIP-James"
curiosity.ServerUrl = "http://localhost:8080"
curiosity.ApiKey = "PtYawE1NRkqBmf4dy3tY6kJW5"

call curiosity.AddAllocation("SplendidUAT", "Create Oppertunity", "Default Profile_GoToUrl_PositiveName_PositiveAccountName_NegativeAmount_Save40:::Create Oppertunity_AccountName")

call curiosity.PerformAllocation()

Set res = curiosity.RetrieveAllocationResult("SplendidUAT", "Create Oppertunity", "Default Profile_GoToUrl_PositiveName_PositiveAccountName_NegativeAmount_Save40:::Create Oppertunity_AccountName")

Print res.GetValueByColumn("NAME")

7. TestModeller with data allocation

Automatically Generate Automation and Data Allocations

Use TestModeller to automatically generate automation code and data allocations to slot straight into your UFT automation framework.