Welcome to the Data Allocation tutorial for Micro Focus UFT!
In this tutorial we'll cover how to get started with data allocation using a Micro Focus UFT automation framework.
It is aimed at first-time users as well as users who already have a Micro Focus UFT automation framework.
Test Modeller enables manual testers and automation frameworks to find and make data and, importantly, to allocate that data to tests.
The core features and benefits are:
Test Modeller supports the concepts of distinct Projects and Versions within Test Modeller workspaces. Test Data Allocation, however, is separate and independent of these Projects, so multiple project teams can use the same Test Data Catalogues and Test Data Allocation pools. How you organise the catalogues and pools is up to the teams. Many projects are adjacent, sharing common test data lookups or testing in the same environments; in these cases it makes more sense to share the test data definitions and allocation. Some examples:
The Payroll development team needs to test its holiday entitlement detection system, to identify whether extra holiday has been taken and to be able to add holidays to employees manually. The team will create manual tests and needs to add holidays to employees.
The Payroll team needs to use the same test criteria as the HR team, and must use the same Data Allocation pool so it doesn't interfere with the HR automation framework.
A migration team is testing migration to a new banking application from the old one.
The accounts are split up into different types which require different transformations. There are over 200 developers and testers working in 20 scrum teams. Each scrum team uses a core set of models (e.g. looking up new address formats) plus individual transforms of its own. Each scrum team has its own Test Modeller project, however there is only one development and testing environment.
Both the manual tests and the automated tests need to use the same allocation pool, so teams do not transform the same account using different transforms.
Integrating test data allocation into your automation framework enables you to specify the test data you need for your automated tests. This has two core benefits:
Test data allocation works in three phases:
In this section, we'll cover our Micro Focus UFT library, which can be embedded directly into any framework written in VBScript for Micro Focus UFT. We'll then cover how to use our integration with Test Modeller to generate Micro Focus UFT code with the relevant allocation code snippets embedded. The aim is to create the right test data, at the right time, in the right places, alongside your test automation scripts, seamlessly within the Test Modeller portal.
Here we'll be installing all the prerequisites you need to get started with test automation in Micro Focus UFT.
Before you can create a new SQL lookup you need to define what databases it will be connecting to. Here we will cover how to define SQL connections and link them to development and testing environments.
Click on the Connections button in the side bar.
SQL needs to run against specific databases and different database types. Inside Test Modeller you create connections that are then linked to each SQL query run to find the test data. Before you begin adding connections and SQL to Test Modeller, use a tool such as SQL Server Management Studio to run the SQL, check the syntax, and make sure you can connect to the databases used by the application under test.
This is SQL Server Management Studio running SQL to find leads.
Once you are satisfied that everything works and you can connect to the specified server, click 'New Connection' to add a new connection profile.
The entry pop-up is displayed to create a new connection. You'll need to populate this with your database connection details.
Name: Give the connection a name; each name must be unique.
DBMS type: Pick from the supported list of RDBMSs.
Connection String: Use a standard Java (JDBC) connection string. Example connection strings:
Data Source=xe;user id=iban;password=iban;
server=BIGONE\SQL2016;user id=sa;password=xxxx;database=AdventureWorks;
Next, enter the Database Name and Schema; you may leave these empty if you wish. These are the values that get substituted into the SQL that is run. For example, when the tool builds the SQL to find the data, it adds database.schema to the table names, e.g. SELECT FIRST_NAME FROM [Schema].dbo.LEADS
When you connect to a database you can specify the database as part of the connection string and set a default schema to autocomplete the schema and database fields. For example, Oracle will link you to specific schemas so there is no need to enter the schema and database name in the connection details.
You may need to set up a few simple SQL queries to test that the connections and the parameters work correctly together. Don't worry; it may take a couple of attempts to get it right.
The test data catalogue is a list of standard test data lookups that project teams can use to find and make data during testing. Teams, developers and testers often create lookups using SQL and scripts to find data. The test data catalogue allows users to share best practice across teams: well-developed SQL and test scripts that are controlled centrally.
From the side bar select the 'Catalogues' menu.
You’ll be presented with the data catalogues available. Create a new data catalogue by clicking the ‘New Catalogue’ button.
Enter a name for your catalogue and click Save. Now open your test data catalogue by clicking its link. Within this view you can see the find-and-make criteria you have created. Click 'New Test Criteria' to create a new criterion.
First, you'll need to enter a unique name for your test criteria and a description. The execution type we'll be using is SQL, but you can also use a VIP flow to connect to many external sources, which you can learn more about here.
Next click on the criteria tab. Once you’ve finished with the tabs, click on save. If the Test Criteria already exists, you will be prompted to rename it.
The form requires you to split the SQL into components, which are reassembled by the Data Allocation process when you run it. These are:
Table Name: The name of the table. You can have multiple table names here if you wish, and include aliases.
Note: this value is used in the pool to guarantee uniqueness; see the knowledge base article on Test Data Allocation Pools on uniqueness.
Group By: The group by clause, if you need to group columns as part of your selection
Order by: The ORDER BY clause; use this if you want results sorted, for example to find the latest TRANSACTION_ID for a recently closed account.
SQL Criteria: The where clause inside the SQL
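To make the split concrete, here is a minimal sketch of how the components above might be reassembled into one SELECT statement at allocation time. The actual assembly logic is internal to Test Modeller; the function name and argument names below are hypothetical, chosen to mirror the form fields.

```python
# Hypothetical sketch: reassembling the catalogued SQL components
# (table name, criteria, group by, order by, database/schema) into a
# runnable query. Test Modeller performs this internally.

def assemble_sql(columns, table, criteria, group_by=None, order_by=None,
                 database=None, schema=None):
    """Rebuild a SELECT statement from its catalogued components."""
    # The database and schema values are prefixed onto the table name,
    # as described for the connection's Database Name and Schema fields.
    qualified = ".".join(p for p in (database, schema, table) if p)
    sql = f"SELECT {columns} FROM {qualified}"
    if criteria:
        sql += f" WHERE {criteria}"
    if group_by:
        sql += f" GROUP BY {group_by}"
    if order_by:
        sql += f" ORDER BY {order_by}"
    return sql

print(assemble_sql("FIRST_NAME", "LEADS", "CITY = '%1'",
                   database="CRM", schema="dbo"))
# SELECT FIRST_NAME FROM CRM.dbo.LEADS WHERE CITY = '%1'
```

The %1 left in the WHERE clause is a wildcard parameter, substituted at allocation time as described in the next section.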
Before you start filling in the criteria, make sure you have run the SQL against your test database. Here are three examples of SQL:
The first shows how to build criteria to find opportunities. You can see that we have specified exact values to make sure the query works.
The second has been edited to replace the hard-coded values with wildcards. These are identified with a percent sign and a number: %1, %2, %3. They will be replaced at allocation time by the values your specific test case needs.
You can also see in the second example a technique where it is possible to pass in an empty parameter: OR '%1' = ''. This allows you to call the same query with one, two or three parameters set.
The third example combines a number of fields into a single string. Each RDBMS uses different syntax to accomplish this: Oracle, for example, uses ||; most others use +.
In all three examples we return specific columns containing the data needed by the test case. The test case then enters these returned values into the UI, API, flat file, etc. These returned columns of data are defined in the Keys and Parameters tabs.
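The wildcard and empty-parameter techniques above can be illustrated with a small sketch. The substitution itself is performed by Test Modeller at allocation time; the `substitute` function and the sample criteria below are hypothetical, shown only to demonstrate the resulting SQL.

```python
# Sketch of %n wildcard substitution and the optional-parameter trick
# (OR '%n' = ''). The real substitution happens inside Test Modeller.

criteria = ("(Opportunity_Type = '%1' OR '%1' = '') "
            "and (Sales_Stage = '%2' OR '%2' = '')")

def substitute(criteria, params):
    """Replace %1, %2, ... with the test case's parameter values."""
    for i, value in enumerate(params, start=1):
        criteria = criteria.replace(f"%{i}", value)
    return criteria

# With both parameters set, both clauses filter:
print(substitute(criteria, ["Existing business", "Needs analysis"]))

# With the second parameter empty, its clause becomes
# (Sales_Stage = '' OR '' = ''), which is always true, so the same
# query works with fewer parameters set:
print(substitute(criteria, ["Existing business", ""]))
```

This is why the generated SQL in the allocation log later in this tutorial contains clauses such as `(Opportunity_Type = 'Existing business' OR 'Existing business' = '')`.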
Sometimes you may wish to retrieve lots of data from different columns using your query.
Test Modeller requires that column data be returned as a single VarChar string. Multiple columns must be concatenated together, separated by ---. In the example above, three columns are concatenated into one value, identified as LeadName (the title, first name and last name merged). A second column, ID, has also been concatenated, separated by ---; Modeller will split this out into a second retrieved value, identified as LeadID. These output column names are entered in the Keys tab, described later.
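A minimal sketch of the splitting side of this convention: one concatenated VarChar result is broken back into named values. The `split_result` helper is hypothetical; the `---` separator and the LeadName/LeadID names come from the example above.

```python
# Sketch: splitting a single '---'-separated VarChar result back into
# the named output values defined in the Keys tab. Illustrative only;
# Test Modeller performs this split itself.

def split_result(row, column_names, separator="---"):
    """Split one concatenated result string into named output values."""
    return dict(zip(column_names, row.split(separator)))

row = "Mr John Smith---42"  # LeadName (title + first + last) --- LeadID
print(split_result(row, ["LeadName", "LeadID"]))
# {'LeadName': 'Mr John Smith', 'LeadID': '42'}
```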
The Expected Results SQL is an additional feature that looks up an expected result from a back-end database; this is often useful for both automated and manual test cases. Include additional columns here to capture extra values beyond those normally captured by the keys used to identify uniqueness. The same WHERE criteria used to locate the allocated data will be issued for both the allocated and the expected results.
An example would be:
The key SQL is CAST(SalesOrderNumber as VARCHAR) + '---' + CAST(SalesOrderLineNumber as VARCHAR)
The expected result SQL is cast(sum(SalesAmount - TotalProductCost) as varchar)
The expected result would now be calculated and returned using the same keys as allocated.
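A sketch of that idea: the key SQL and the expected-result SQL are two different SELECT lists driven by the same WHERE criteria. The `build_queries` helper and the `FactInternetSales` table name are hypothetical (the column names are taken from the example above).

```python
# Sketch: the allocated-keys query and the expected-result query share
# the same WHERE clause, so the expected result is looked up with the
# same keys as the allocated data. Table name is an assumption.

def build_queries(key_sql, expected_sql, table, where):
    allocated = f"SELECT {key_sql} FROM {table} WHERE {where}"
    expected = f"SELECT {expected_sql} FROM {table} WHERE {where}"
    return allocated, expected

alloc, exp = build_queries(
    "CAST(SalesOrderNumber as VARCHAR) + '---' + "
    "CAST(SalesOrderLineNumber as VARCHAR)",
    "cast(sum(SalesAmount - TotalProductCost) as varchar)",
    "FactInternetSales",            # hypothetical table name
    "SalesOrderNumber = '%1'",
)
print(alloc)
print(exp)
```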
The criteria data tab is used to specify the characteristics of the data to be returned.
Default How Many: Enter how many values you wish to return; this defaults to 1. You can request multiple values, for example 10 orders for a specific customer. This value can be overridden for specific test cases; the default is applied when you create a new test in an allocation pool.
Default Unique: This is the default for the allocated pool test case and can be overridden. Setting it means that any retrieved values will not be used by any other tests within the allocation pool.
Use to Make Data if no data found: This allows you to invoke a VIP flow if the allocated test returns no data. This feature means that if no data exists you can run ANY process you like to create the data. This is especially useful for automated testing whereby you would normally have to skip the test if there was no appropriate data.
Note: the input parameters of the make VIP flow must be in the same order as the input parameters of the find test; see 'Using VIP to Make Data'.
The criteria keys allow you to specify which combination of data uniquely represents a row of data.
Click on New Key to enter the output columns from the SQL.
In this example we have added in the names of our retrieved values. You have two options:
Start with the first method when you begin setting up tests.
The Parameters tab allows you to give logical names to the inputs defined in the SQL criteria (the %1, %2, %3 values defined earlier). It also allows you to define logical names for your output data; these names are used inside Test Modeller when building models that need test data. It is much easier to identify a value named 'Customer Full Name' than one named 'CstFn'.
Click on the New Parameter button.
The 'In' direction maps to the %1, %2 values in the criteria. Give each of these a descriptive name.
The 'Out' direction maps to the key values in the previous tab; you can give these different logical names.
In all cases the order is important. Once you have created the parameters, you can click on a row and drag it up or down to reorder.
Example showing a %1 being converted to the column name EmailOptin & Two output columns being returned by the Key Name SQL Override.
Click on Save once you have filled in all the tabs.
Allocation pools are where your data criteria and catalogues are executed. Each criterion must be assigned to a specific test within an allocation pool. This allows you to control the data consumed by the automation framework within the environment where it is executed. Tests within an allocation pool can be assigned unique or non-unique data. Unique means the returned value is reserved for this test only and will not be assigned to ANY other test. If you set a value to non-unique, the assigned value may also be set in another non-unique test, or an unused value may be assigned.
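The unique versus non-unique behaviour can be modelled with a short sketch. Test Modeller enforces this server-side (the generated allocation SQL excludes already-allocated keys with a NOT IN clause, as visible in the allocation log later in this tutorial); the class below is purely illustrative.

```python
# Illustrative model of unique vs non-unique allocation within a pool.
# Not Test Modeller's implementation; it only shows the behaviour.

class AllocationPool:
    def __init__(self, available):
        self.available = list(available)   # candidate keys found by the SQL
        self.unique_taken = set()          # keys reserved by unique tests

    def allocate(self, unique):
        for key in self.available:
            if key not in self.unique_taken:   # mirrors the NOT IN clause
                if unique:
                    self.unique_taken.add(key)  # no other test may reuse it
                return key
        return None  # no data found; a 'make data' VIP flow could run here

pool = AllocationPool(["A", "B"])
print(pool.allocate(unique=True))   # A  (reserved for this test only)
print(pool.allocate(unique=False))  # B  (A is excluded, B stays reusable)
print(pool.allocate(unique=False))  # B  (non-unique values can be shared)
```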
From the side bar select the 'Allocation Pools' menu.
A list of existing allocation pools will be displayed (if there are any). Click on a pool name to see the allocated tests within the pool.
Here we'll be creating a new pool by clicking on the 'New Pool' button.
Enter a unique Pool Name and link the Pool to a Test Data Catalogue from the list then click Save.
You have now created an allocation pool. Next we'll create an associated allocated test.
After selecting your allocation pool, click on:
Test Name: Enter a Test Name, the Test Name must be unique within the Suite
Suite Name: Choose from existing Suite Names or enter a New One. Test Names are unique within a Suite.
Test Criteria: Pick from the drop-down list.
Tags: You can enter Tag names here, use Tab to create as many as you like. The Tags can be used to filter your allocation pools later.
How Many: Enter how many allocated values you would like. This defaults to whatever has been set in the definition of the test type.
Unique: Choose whether the allocated value is for this test only. If it is not set, other tests may use the same allocated value.
Prep environment: You can set up the data allocation to run jobs to prepare the data environment before the automation framework is run. Set this if this is a stand-alone piece of SQL to be run prior to the automation.
The parameter values will come over from the Parameters associated with the Test Type definition. You can fill these in with the required values to lookup the data you need for this specific test.
To test or run the allocation directly from Test Modeller, rather than as part of an automation framework, you can filter the allocated tests: click on the filter button and choose the tests you want. This is useful for testing.
Click the Run button.
Choose the server that connects to your system under test's databases and applications. Pick the Data Allocate job and click Execute. This submits a job that attempts to run the allocated tests you have defined for the chosen filters. It is a good way to verify that your criteria have been defined correctly.
Click on Download Full Log and open it in an editor.
Look through this log in detail, especially the first time you are running the allocate for a test.
SQL=select top 500 CAST(id as VARCHAR(100)) from dbo.OPPORTUNITIES where ( (Opportunity_Type = 'Existing business' OR 'Existing business' = '') and (Sales_Stage = 'Needs analysis' OR 'Needs analysis' = '') and (Lead_Source = 'Existing customer' OR 'Existing customer' = '')) and (CAST(id as VARCHAR(100))) not in ('2EFCE793-2207-4798-ADB9-004B558D4B9A') 2019-10-23 16:48:10-varSQlResults.count = 81 varI=3
If there is a problem with the SQL it will show up here. You can then adjust the definition of the test type and retry running the allocation.
Return to the allocation pool and check the results are correct:
If you click on the Results button, it will show you the found values.
If you click on the allocated test you will get further details.
Perform and retrieve data allocations all from your Micro Focus UFT automation framework using the data allocation library.
Test data allocation within an automation framework works in three phases.
Integrate the Micro Focus UFT library.
The GitHub project is available here.
You can include this in your UFT project using the UFT library available here.
Associate the Curiosity.qfl function library with your test in File > Settings > Resources.
If you want this library to be included in all future tests, make sure to check 'Set As Default'.
First, you need to configure the curiosity library with your details:
curiosity.ServerName = "[YOUR_SERVER]"
curiosity.ServerUrl = "[API_URL]"
curiosity.ApiKey = "[API_KEY]"
Then you need to specify what test groups you want to allocate.
call curiosity.AddAllocation("pool", "suite", "testname")
Here you can specify the data allocation to connect the test with. This corresponds to three parameters:
These three parameters must match the data values specified for each matching test case specified within the appropriate allocation pool within the portal.
Run the allocation in the framework
Before the tests are executed in UFT, you'll need to perform the data allocations associated with them. Within this function we collect all the AddAllocation registrations tagged against the tests about to be executed, then call the data allocation API to perform the associated allocations.
It is more efficient to perform these operations in bulk, which is why they are collected into one list and sent for allocation together, as opposed to performing the allocation directly inside each individual script using the curiosity.PerformAllocation() command.
Retrieve the allocation results in your test
Within the test case you can retrieve the results using the curiosity.RetrieveAllocationResult function. Here you can specify the pool, suite name, and test name to retrieve the results for. Again, this must match the specifications given in the associated allocation pool within the portal. The DataAllocationResult class contains the functions to retrieve results by the column names, and column indexes as specified in the initial test criteria.
Set res = curiosity.RetrieveAllocationResult("pool", "suite", "testname")
Print res.GetValueByColumn("NAME")
curiosity.ServerName = "VIP-James"
curiosity.ServerUrl = "http://localhost:8080"
curiosity.ApiKey = "PtYawE1NRkqBmf4dy3tY6kJW5"
call curiosity.AddAllocation("SplendidUAT", "Create Oppertunity", "Default Profile_GoToUrl_PositiveName_PositiveAccountName_NegativeAmount_Save40:::Create Oppertunity_AccountName")
call curiosity.PerformAllocation()
Set res = curiosity.RetrieveAllocationResult("SplendidUAT", "Create Oppertunity", "Default Profile_GoToUrl_PositiveName_PositiveAccountName_NegativeAmount_Save40:::Create Oppertunity_AccountName")
Print res.GetValueByColumn("NAME")
Use TestModeller to automatically generate automation code and data allocations to slot straight into your UFT automation framework.