
· 6 min read

Tracker-to-Aggregate, or T2A for short, is a pattern that has been used with great success to improve the performance of program indicators in DHIS2. Program indicators are expressions, based on data elements and attributes of tracked entities, that calculate values according to a formula. T2A can solve the problem of program indicators that are computationally expensive to calculate in real time. A common symptom of this recurring problem is an endless spinning circle when opening a dashboard that computes a program indicator over millions of tracked entity instances:

Program indicator dashboard timeout

The T2A pattern favours batch computation over real-time computation and, in a program indicator context, encourages dashboards to be built from aggregate data elements instead of program indicators, removing the need to re-evaluate the program indicators every time a dashboard is opened.

As announced recently in the March DHIS2 newsletter, we've developed a program indicator T2A tool, which we recommend to DHIS2 maintainers who have indicators that are complex or need to be calculated over large numbers of tracker events. The goal is to reduce the load of analytics operations on the DHIS2 server, since requests for pre-aggregated data are often less demanding than on-the-fly aggregation of tracker data.

The T2A tool is a Java application that periodically collects aggregated program indicator values from the DHIS2 server and pushes them back to the server as data value sets. More precisely, the Java batch job processes a matrix of program indicators, organisation units, and periods to produce data value sets. The matrix is expressed through three input arguments: a program indicator group, an organisation unit level, and a comma-delimited set of periods. For example, a group of 10 program indicators, an organisation unit level containing 50 organisation units, and 4 periods expand into 2,000 combinations to evaluate. In contrast to real-time program indicator calculations, the data value sets produced from this matrix contain the precomputed program indicator values, allowing you to quickly visualise the indicators in DHIS2.

In this blog post, we'll show you step-by-step how to configure DHIS2 for T2A and run the T2A tool.

Configuring DHIS2

Prior to starting the batch job, you need to have the following configured in DHIS2:

  1. All the relevant program indicators assigned to the same program indicator group (consult the DHIS2 documentation to learn how to create a program indicator group)
  2. A non-mandatory program indicator text attribute for holding the aggregate data element code (consult the DHIS2 documentation to learn how to create an attribute)
  3. The aggregate data elements, identifiable by codes, to which the relevant program indicators will be mapped
  4. The target program indicators mapped to aggregate data element codes so that the T2A tool can look up the precomputed program indicator results by their corresponding aggregate data element codes

A number of DHIS2 metadata packages, like the COVID-19 Electronic Immunization Registry metadata package, come with steps 1 and 2 preconfigured. In such cases, you can skip directly to step 3. This means creating a data element with its Domain type set to Aggregate:

Data element config

The aggregate data element’s Aggregation type should depend on the program indicator output it will map to. For example, a data element mapped to an indicator counting tracked entity instances (i.e., V{tei_count}) should have its Aggregation type set to Count.

Once the aggregate data element is configured, the corresponding program indicator is edited to have its custom text attribute (see step 2) set to the aggregate data element code as shown below:

Program indicator config

In the above example, the indicator is configured to have its precomputed result mapped to the aggregate data element CVC_EIR_AGG_PPL_1ST_DOSE.

Running T2A

The following example shows how to run the T2A program from the shell of a Unix-like system with Java 11 installed:

./dhis2-t2a.jar --dhis2.api.url=https://play.dhis2.org/2.37.2/api \
--dhis2.api.username=admin \
--dhis2.api.password=district \
--org.unit.level=3 \
--periods=2022Q1,2022Q2,2022Q3,2022Q4 \
--pi.group.id=Lesc1szBJGe

All arguments can also be expressed as OS environment variables or in a config file, as explained in the project’s documentation. At a minimum, the program requires as arguments the DHIS2 Web API URL along with the credentials of the DHIS2 user that the program will run as. Apart from these, the program indicator group ID, organisation unit level, and periods are also required. As noted earlier, these three arguments are expanded to form a matrix that the program iterates over.
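For instance, if the tool follows the common Spring-style convention of picking up an application.properties file placed next to the jar (the file name is an assumption here — the project documentation has the authoritative details), the run above could be expressed as:

dhis2.api.url=https://play.dhis2.org/2.37.2/api
dhis2.api.username=admin
dhis2.api.password=district
org.unit.level=3
periods=2022Q1,2022Q2,2022Q3,2022Q4
pi.group.id=Lesc1szBJGe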

By default, after the program has started, the job will run daily at midnight, but this can easily be changed by specifying a cron expression. For example, the following schedules the job to run daily at noon instead:

./dhis2-t2a.jar --dhis2.api.url=https://play.dhis2.org/2.37.2/api \
--dhis2.api.username=admin \
--dhis2.api.password=district \
--org.unit.level=3 \
--periods=2022Q1,2022Q2,2022Q3,2022Q4 \
--pi.group.id=Lesc1szBJGe \
--schedule.expression='0 0 12 * * ?'

You can even hit the URL http://localhost:8081/dhis2/t2a to manually kick off a job run. The application will immediately return an HTTP response but execute the T2A process in the background. For security reasons, it’s strongly recommended that the program sits behind a gateway restricting HTTP access. The HTTP listener address is customised as shown in the next example:

./dhis2-t2a.jar --dhis2.api.url=https://play.dhis2.org/2.37.2/api \
--dhis2.api.username=admin \
--dhis2.api.password=district \
--org.unit.level=3 \
--periods=2022Q1,2022Q2,2022Q3,2022Q4 \
--pi.group.id=Lesc1szBJGe \
--http.endpoint.uri=http://0.0.0.0:8080/

The processing can be distributed across multiple threads with the thread.pool.size argument should the job take too long to complete its run. This argument should be used with caution given that more threads lead to more load on the DHIS2 server:

./dhis2-t2a.jar --dhis2.api.url=https://play.dhis2.org/2.37.2/api \
--dhis2.api.username=admin \
--dhis2.api.password=district \
--org.unit.level=3 \
--periods=2022Q1,2022Q2,2022Q3,2022Q4 \
--pi.group.id=Lesc1szBJGe \
--thread.pool.size=3

DHIS2 precomputes the program indicators during event analytics. By default, the event analytics job is triggered from the T2A batch job; however, this can be disabled, since it may be redundant to run the event analytics job when it has already been run recently in a different context:

./dhis2-t2a.jar --dhis2.api.url=https://play.dhis2.org/2.37.2/api \
--dhis2.api.username=admin \
--dhis2.api.password=district \
--org.unit.level=3 \
--periods=2022Q1,2022Q2,2022Q3,2022Q4 \
--pi.group.id=Lesc1szBJGe \
--run.event.analytics=false

Once analytics has completed, T2A fetches all the precomputed indicators in the program indicator group and pushes them as data value sets to the DHIS2 server. There are several modes in which this processing can happen, depending on how the tool is configured. For instance, when the argument org.unit.batch.size is set to its default value of 1, T2A processes each organisation unit individually for every program indicator.

Bumping org.unit.batch.size to 2 will reduce the network chattiness between T2A and DHIS2 at the expense of adding more load on the DHIS2 server.
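Reusing the invocation from earlier, the batch size might be raised like so:

./dhis2-t2a.jar --dhis2.api.url=https://play.dhis2.org/2.37.2/api \
--dhis2.api.username=admin \
--dhis2.api.password=district \
--org.unit.level=3 \
--periods=2022Q1,2022Q2,2022Q3,2022Q4 \
--pi.group.id=Lesc1szBJGe \
--org.unit.batch.size=2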

Going one step further, the periods can be batched alongside the organisation units by setting the argument split.periods to false.
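Building on the previous invocation, period splitting might be switched off like so:

./dhis2-t2a.jar --dhis2.api.url=https://play.dhis2.org/2.37.2/api \
--dhis2.api.username=admin \
--dhis2.api.password=district \
--org.unit.level=3 \
--periods=2022Q1,2022Q2,2022Q3,2022Q4 \
--pi.group.id=Lesc1szBJGe \
--org.unit.batch.size=2 \
--split.periods=false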

In this post, we've walked you through the new T2A tool, which we invite you to use to speed up page load times when viewing program indicators while keeping the load on the DHIS2 server sustainable.

The second release candidate of T2A has recently been published and is available for download from the project’s GitHub release page. As always, feedback is more than welcome at DHIS2’s community of practice.

· 9 min read

DHIS2 is a platform that can receive and host data from different sources, and it can also share data with other systems and reporting mechanisms. Integrating with DHIS2, or building any integration for that matter, requires manual or automated testing of the integration itself. The growth of container technology, and in particular Docker, has reduced the pain of automating the testing of integrations. By automating, I mean self-contained integration test suites that run out-of-the-box and require no manual setup of their external runtime dependencies (Docker Engine is assumed to be installed on the machine running the tests).

DHIS2 releases are already published as Docker images to Docker Hub (see how to get a Docker container up and running in our Getting Started Guide). This post demonstrates how a project integrating with DHIS2, such as connecting DHIS2 with another data collection tool, can have its tests automated with Docker. The code examples shown are specific to Java 11 and JUnit 5 but can be adapted to many other programming languages and test frameworks. The complete code example is available on GitHub for those who want to take a deep dive into the code.

Application Under Test

We begin with a brief description of the Java application under test: a bare-bones solution for sharing the aggregate data of a national DHIS2 system with a regional DHIS2 server. In concrete terms, the code synchronises, in one direction, the data value sets between two DHIS2 instances configured with different organisation units:

...
...

public final class IntegrationApp
{
    public static void main( String[] args )
    {
        String sourceDhis2ApiUrl = args[0];
        String sourceDhis2ApiUsername = args[1];
        String sourceDhis2ApiPassword = args[2];
        String sourceOrgUnitId = args[3];

        String targetDhis2ApiUrl = args[4];
        String targetDhis2ApiUsername = args[5];
        String targetDhis2ApiPassword = args[6];
        String targetOrgUnitId = args[7];

        String dataSetId = args[8];
        String period = args[9];

        ...
    }
}

The entry point of IntegrationApp expects arguments identifying the source and target DHIS2 servers, the user accounts, and the organisation units. Besides these inputs, it expects the data set UID and period of the data value sets that IntegrationApp will pull down from the source server.

Given these arguments, the application leverages the convenient HTTP client library Unirest to fetch the JSON data value sets from the specified source DHIS2 instance and push them to the destination:

...
...

public final class IntegrationApp
{
    public static void main( String[] args )
    {
        ...
        ...

        // pull data value sets from source DHIS2 instance
        HttpResponse<JsonNode> dataValueSets = Unirest.get(
            sourceDhis2ApiUrl + "/dataValueSets?dataSet={dataSetId}&period={period}&orgUnit={orgUnitId}" )
            .routeParam( "dataSetId", dataSetId )
            .routeParam( "period", period )
            .routeParam( "orgUnitId", sourceOrgUnitId )
            .basicAuth( sourceDhis2ApiUsername, sourceDhis2ApiPassword ).asJson();

        // replace source org unit ID with target org unit ID
        dataValueSets.getBody().getObject().put( "orgUnit", targetOrgUnitId );
        for ( Object dataValue : dataValueSets.getBody().getObject().getJSONArray( "dataValues" ) )
        {
            ((JSONObject) dataValue).put( "orgUnit", targetOrgUnitId );
        }

        // push data value sets to destination DHIS2 instance
        Unirest.post( targetDhis2ApiUrl + "/dataValueSets" )
            .contentType( ContentType.APPLICATION_JSON.toString() )
            .body( dataValueSets.getBody() )
            .basicAuth( targetDhis2ApiUsername, targetDhis2ApiPassword ).asString();
    }
}

Note that, before uploading the data value sets, the application swaps out the source organisation unit UIDs with the target ones. The downstream DHIS2 has distinct organisation unit UIDs so it can’t recognise the UIDs from the upstream server.

Integration Test

The following sections describe the JUnit integration test covering the application's happy path. To exercise IntegrationApp, the test case stands up the source and target DHIS2 instances before proceeding to seed them with test data. The DHIS2 servers, along with their PostgreSQL databases, are spun up and wired together with the help of Testcontainers, a delightful polyglot library that allows you to create referenceable Docker containers from within your test case.

Container Set Up

With Testcontainers, the DHIS2 web app and PostgreSQL Docker containers are created for both the source and target:

...
...

@Testcontainers
public class IntegrationAppTestCase
{
    @Container
    public static final PostgreSQLContainer<?> SOURCE_POSTGRESQL_CONTAINER = newPostgreSqlContainer();

    @Container
    public static final GenericContainer<?> SOURCE_DHIS2_CONTAINER = newDhis2Container( SOURCE_POSTGRESQL_CONTAINER );

    @Container
    public static final PostgreSQLContainer<?> TARGET_POSTGRESQL_CONTAINER = newPostgreSqlContainer();

    @Container
    public static final GenericContainer<?> TARGET_DHIS2_CONTAINER = newDhis2Container( TARGET_POSTGRESQL_CONTAINER );

    ...
    ...
}

Jumping to the newPostgreSqlContainer method reveals the following:

private static PostgreSQLContainer<?> newPostgreSqlContainer()
{
    return new PostgreSQLContainer<>( DockerImageName.parse( "postgis/postgis:12-3.2-alpine" )
        .asCompatibleSubstituteFor( "postgres" ) )
        .withDatabaseName( "dhis2" )
        .withNetworkAliases( "db" )
        .withUsername( "dhis" )
        .withPassword( "dhis" )
        .withNetwork( Network.newNetwork() );
}

newPostgreSqlContainer launches a PostgreSQL container based on the postgis/postgis:12-3.2-alpine image. The container is created on a new network in order to prevent network alias collisions with the second PostgreSQL container. Similar to newPostgreSqlContainer, newDhis2Container creates a DHIS2 container from the dhis2/core:2.36.7 image:

private static GenericContainer<?> newDhis2Container( PostgreSQLContainer<?> postgreSqlContainer )
{
    return new GenericContainer<>( DockerImageName.parse( "dhis2/core:2.36.7" ) )
        .dependsOn( postgreSqlContainer )
        .withClasspathResourceMapping( "dhis.conf", "/DHIS2_home/dhis.conf", BindMode.READ_WRITE )
        .withNetwork( postgreSqlContainer.getNetwork() )
        .withExposedPorts( 8080 )
        .waitingFor( new HttpWaitStrategy().forStatusCode( 200 ) )
        .withEnv( "WAIT_FOR_DB_CONTAINER", "db" + ":" + 5432 + " -t 0" );
}

Here's a rundown of the DHIS2 container configuration:

  • the container connects to the same network as the given PostgreSQLContainer. This permits the containers to talk to one another.
  • the image-specific environment parameter WAIT_FOR_DB_CONTAINER is set so that the DHIS2 container waits until the database port 5432 is reachable before it starts: the database needs to be in a ready state before DHIS2 can initialise.
  • waitingFor blocks the test runner from executing any further until the DHIS2 server is able to accept HTTP requests.
  • the DHIS2 config is sourced from the host dhis.conf, located in the Java test classpath:

    connection.dialect = org.hibernate.dialect.PostgreSQLDialect
    connection.driver_class = org.postgresql.Driver
    connection.url = jdbc:postgresql://db:5432/dhis2
    connection.username = dhis
    connection.password = dhis

    As shown above, the host's dhis.conf addresses the database container by its network alias. Keep in mind that the database port no. 5432 is not reachable from the outside world, but only reachable from within the DHIS2 container, because the Docker network is isolated from the host's network.

Data Set Up

The next step, as part of the test setup, is seeding the DHIS2 instances using the nifty web service testing library REST Assured. REST Assured sends HTTP requests to the DHIS2 web API base URLs, defined as follows:

@BeforeAll
public static void beforeAll()
    throws IOException
{
    sourceDhis2ApiUrl = String.format( "http://localhost:%s/api", SOURCE_DHIS2_CONTAINER.getFirstMappedPort() );
    targetDhis2ApiUrl = String.format( "http://localhost:%s/api", TARGET_DHIS2_CONTAINER.getFirstMappedPort() );

    ...
    ...
}

sourceDhis2ApiUrl and targetDhis2ApiUrl point to the DHIS2 API URLs of the DHIS2 containers. It's worth highlighting that the HTTP port numbers of the DHIS2 servers are obtained with GenericContainer#getFirstMappedPort(). These URLs serve as the base paths for the REST Assured request templates seen next:

@BeforeAll
public static void beforeAll()
    throws IOException
{
    ...
    ...

    sourceRequestSpec = new RequestSpecBuilder().setBaseUri( sourceDhis2ApiUrl ).build()
        .contentType( ContentType.JSON ).auth().preemptive().basic( DHIS2_API_USERNAME, DHIS2_API_PASSWORD );

    targetRequestSpec = new RequestSpecBuilder().setBaseUri( targetDhis2ApiUrl ).build()
        .contentType( ContentType.JSON ).auth().preemptive().basic( DHIS2_API_USERNAME, DHIS2_API_PASSWORD );

    ...
    ...
}

sourceRequestSpec is the request template REST Assured uses to build the API requests for the source DHIS2 container. In the same fashion, requests for the target DHIS2 container are based on targetRequestSpec.

In the subsequent code, we can observe the request templates being passed around to seed the DHIS2 servers:

@BeforeAll
public static void beforeAll()
    throws IOException
{
    ...
    ...

    sourceOrgUnitId = createOrgUnit( sourceRequestSpec );
    targetOrgUnitId = createOrgUnit( targetRequestSpec );

    createOrgUnitLevel( sourceRequestSpec );
    createOrgUnitLevel( targetRequestSpec );

    addOrgUnitToUser( sourceOrgUnitId, ADMIN_USER_ID, sourceRequestSpec );
    addOrgUnitToUser( targetOrgUnitId, ADMIN_USER_ID, targetRequestSpec );

    importMetaData( sourceRequestSpec );
    importMetaData( targetRequestSpec );

    addOrgUnitToDataSet( sourceOrgUnitId, MALARIA_STOCK_DATA_SET_ID, sourceRequestSpec );
    addOrgUnitToDataSet( targetOrgUnitId, MALARIA_STOCK_DATA_SET_ID, targetRequestSpec );

    ...
}

Apart from creating test organisation units and assigning permissions, the @BeforeAll hook imports the Malaria Aggregate metadata package into both instances with the importMetaData method.
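importMetaData itself is not shown here, but a minimal sketch of it — assuming the package JSON is bundled as a test classpath resource named metadata.json, which is a made-up name — could look roughly like this:

private static void importMetaData( RequestSpecification requestSpec )
    throws IOException
{
    // read the metadata package JSON from the test classpath (resource name is an assumption)
    String metaData = new String( Objects.requireNonNull(
        IntegrationAppTestCase.class.getClassLoader().getResourceAsStream( "metadata.json" ) ).readAllBytes(),
        StandardCharsets.UTF_8 );

    // POST the package to the DHIS2 metadata import endpoint of the given instance
    given( requestSpec ).body( metaData )
        .when().post( "/metadata" )
        .then().statusCode( 200 );
}

The actual implementation in the example project on GitHub may differ in the details.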

Let's drill into the createOrgUnit method for a general idea of how the request template is handled:

private static String createOrgUnit( RequestSpecification requestSpec )
{
    Map<String, ? extends Serializable> orgUnit = Map.of( "name", "Acme",
        "shortName", "Acme",
        "openingDate", new Date().getTime() );

    return given( requestSpec ).body( orgUnit )
        .when().post( "/organisationUnits" )
        .then().statusCode( 201 )
        .extract().path( "response.uid" );
}

createOrgUnit creates an organisation unit named Acme, asserts that the HTTP response status code is 201, and returns the organisation unit's UID to the caller. beforeAll calls this method twice: once for the source DHIS2 and once for the target DHIS2. The returned UIDs are used as parameters for creating other DHIS2 resources and for running IntegrationApp.
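The remaining seeding helpers follow the same request-template pattern. As an illustration only, addOrgUnitToUser could be sketched around DHIS2's generic collection endpoint (POST /api/{objects}/{uid}/{collection}/{itemUid}); the status code returned for collection additions has varied between DHIS2 versions, so this sketch accepts either 200 or 204, and the real method in the example project may assert differently:

private static void addOrgUnitToUser( String orgUnitId, String userId, RequestSpecification requestSpec )
{
    // add the organisation unit to the user's organisationUnits collection
    given( requestSpec )
        .when().post( "/users/{userId}/organisationUnits/{orgUnitId}", userId, orgUnitId )
        .then().statusCode( anyOf( is( 200 ), is( 204 ) ) );
}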

The final step in beforeAll is populating the source with the data value sets:

@BeforeAll
public static void beforeAll()
    throws IOException
{
    ...
    ...

    createDataValueSets( sourceOrgUnitId, MALARIA_STOCK_DATA_SET_ID, sourceRequestSpec );
}

createDataValueSets seeds the source instance with data value sets capturing the Malaria stock data. The data set itself is defined in the imported metadata package. Stepping into the method we find:

private static void createDataValueSets( String orgUnitId, String dataSetId, RequestSpecification requestSpec )
{
    List<Map<String, String>> dataValues = List.of(
        Map.of( "dataElement", "CBKXL15dSwQ",
            "value", String.valueOf( ThreadLocalRandom.current().nextInt( 0, Integer.MAX_VALUE ) ) ),
        Map.of( "dataElement", "BdRI37FNDJs",
            "value", String.valueOf( ThreadLocalRandom.current().nextInt( 0, Integer.MAX_VALUE ) ) ),
        Map.of( "dataElement", "RRA1O37nLn0",
            "value", String.valueOf( ThreadLocalRandom.current().nextInt( 0, Integer.MAX_VALUE ) ) ),
        Map.of( "dataElement", "CPBuuIiDnn8",
            "value", String.valueOf( ThreadLocalRandom.current().nextInt( 0, Integer.MAX_VALUE ) ) ),
        Map.of( "dataElement", "HOEMlLX5SMC",
            "value", String.valueOf( ThreadLocalRandom.current().nextInt( 0, Integer.MAX_VALUE ) ) ),
        Map.of( "dataElement", "f7z0IhHVWBT",
            "value", String.valueOf( ThreadLocalRandom.current().nextInt( 0, Integer.MAX_VALUE ) ) ) );

    Map<String, Object> dataValueSet = Map.of( "dataSet", dataSetId,
        "completeDate", "2022-02-03",
        "period", "202201",
        "orgUnit", orgUnitId,
        "dataValues", dataValues );

    given( requestSpec ).body( dataValueSet )
        .when().post( "/dataValueSets" )
        .then().statusCode( 200 );
}

A list of data values is created where each data value is assigned a random integer and a hard-coded UID of a data element defined in the metadata package. The data values are collected into a data value set and POSTed to the source server.

Test Method

Last but not least is the test itself:

@Test
public void test()
{
    IntegrationApp.main(
        new String[]{ sourceDhis2ApiUrl,
            DHIS2_API_USERNAME,
            DHIS2_API_PASSWORD,
            sourceOrgUnitId,
            targetDhis2ApiUrl,
            DHIS2_API_USERNAME,
            DHIS2_API_PASSWORD,
            targetOrgUnitId,
            MALARIA_STOCK_DATA_SET_ID,
            "202201"
        } );

    given( targetRequestSpec )
        .get( "/dataValueSets?dataSet={dataSetId}&period={period}&orgUnit={orgUnitId}",
            MALARIA_STOCK_DATA_SET_ID, "202201", targetOrgUnitId )
        .then().statusCode( 200 )
        .body( "dataValues.size()", equalTo( 6 ) );
}

The entry point of IntegrationApp is invoked with the expected list of parameters described earlier. The test postcondition is expressed as a REST Assured statement, asserting that (1) the target organisation unit's malaria stock data value set can be successfully fetched for the 202201 period and (2) the data value set has 6 data values, equal to the number of data values POSTed to the source server.

Do you have comments about this approach to integration testing? We love hearing your thoughts over at the Community of Practice discussion board.

· 11 min read

The DHIS2 App Platform now supports PWA capabilities in apps made with the platform! The Dashboard App will be the first core app to take advantage of these features to enable offline capability, and it will be used as an example in this article to describe the details of these features.

This article will give a brief overview of the new features available and some examples that illustrate how they can be used. A future article will go into detail about the technical decisions behind these features and their designs.

· One min read

Every year, DigitalOcean and other partners sponsor Hacktoberfest to encourage open-source contributions. Contributors who make 4 or more useful pull-requests will be eligible to receive a free Hacktoberfest t-shirt. We also encourage you to consider the environmentally-conscious option of planting trees instead 🌳🎉

If you contribute (by opening a pull request which gets approved) to any open-source DHIS2 repository during the month of October, your contribution will count towards the 4 pull-request minimum required to claim your reward. Get hacking!

· 6 min read

As of mid-July 2020, the Chrome (and Chromium) stable release channel has started to disable cross-site cookies by default. Mozilla Firefox has pushed this change to their beta channel and will likely release it to the stable channel soon. This change affects any DHIS2 application running on a different domain than the DHIS2 server instance, including applications running on localhost in development. It does not affect cross-site API requests which use Basic or OAuth authentication headers, as those do not rely on cookies for authentication.

· 14 min read

We've recently released @dhis2/ui version 5. It unifies ui-core, ui-widgets and ui-forms to simplify the user experience and allow for some architectural changes. In this post we'll go through the most important changes to try and help you with the upgrading process. To view a complete list of all the changes see the changelog.