There are several options for integrating Haplo applications within the wider institutional infrastructure.
As part of the implementation project, the Haplo team will discuss the required integrations to determine the best methods, using one or more of the strategies below.
The recommended approach is to use the user sync mechanism to send batch files on a schedule. Each file contains all active users, and Haplo works out which accounts to create and which to block as users enter and leave the feed.
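As an illustration, generating a user sync batch file might look like the sketch below. The exact file format and column set are agreed with the Haplo team during the implementation project; the CSV shape and field names here are assumptions, not the actual sync format.

```python
import csv
import io

def build_user_sync_file(active_users):
    """Serialise the complete set of currently active users.

    Hypothetical CSV shape: the real format and columns are agreed
    during the implementation project.
    """
    fields = ["username", "name_first", "name_last", "email", "category"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for user in active_users:
        writer.writerow(user)
    return buf.getvalue()

active_users = [
    {"username": "jsmith", "name_first": "Jane", "name_last": "Smith",
     "email": "jane.smith@example.ac.uk", "category": "Academic Researcher"},
]
batch = build_user_sync_file(active_users)
```

Because each file contains every active user, the sending system does not need to track deltas: a user simply disappearing from the feed is enough for their account to be blocked.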
Alternatively, the Users REST API and the generic Data Import REST API can be used together to maintain active user accounts and their associated data: the Users API maintains the underlying user accounts, and the Data Import API keeps the associated data in sync, covering the same information as the user sync feed.
In all cases, username information must be provided so that users can log on using their institution credentials.
Typically the user feed will only contain the broad category of each user, for example Academic Researcher, Professional Services, or Doctoral Researcher. More specific user roles are maintained inside the Haplo application by privileged users using the administrative user interface.
The authority to assign specific roles can be delegated; for example, Committee Representatives can maintain the list of the members of their committee.
Updating other systems from Haplo
There is often a requirement to update other institutional systems as information changes in the Haplo application, for example, updating information in the Student Records System.
Automatic feeds from Haplo
The recommended approach is to use the global observation message queue, and filter the data change messages to identify the relevant data.
You should implement a single consumer of the message queue, and forward relevant changes to the other systems.
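The single-consumer pattern can be sketched as follows. The message shape (a dict with a "type" key) and the type names are assumptions for illustration only, not the actual observation message schema.

```python
def consume(messages, handlers):
    """Dispatch each data-change message to at most one handler.

    `messages` is an iterable of decoded queue messages; `handlers`
    maps a message type to a callable that forwards the change to the
    relevant downstream system. Messages with no handler are ignored.
    """
    forwarded = 0
    for message in messages:
        handler = handlers.get(message.get("type"))
        if handler is not None:
            handler(message)
            forwarded += 1
    return forwarded

# Example: forward only project status changes; ignore everything else.
sent = []
handlers = {"project:status": sent.append}
count = consume(
    [{"type": "project:status", "ref": "80qwe"}, {"type": "user:login"}],
    handlers,
)
```

Keeping the filtering and forwarding in one consumer gives a single place to add new downstream systems and to monitor the integration.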
Manual update queue
Where changes are low volume and it is not cost effective to implement an automatic integration, the Haplo application can maintain a list of changes for an administrator to apply manually in the institution’s system.
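Conceptually, the manual update queue is just a list of outstanding changes with an applied flag. A minimal sketch is below; the field names are illustrative, and the real queue is maintained inside the Haplo application itself.

```python
from dataclasses import dataclass

@dataclass
class PendingChange:
    # Illustrative fields only; the actual queue lives inside the
    # Haplo application's administrative user interface.
    target_system: str
    description: str
    applied: bool = False

queue = [
    PendingChange("Student Records System",
                  "Thesis title changed for student 12345"),
    PendingChange("Student Records System",
                  "Supervisor updated for student 67890", applied=True),
]
# The administrator works through the changes not yet applied.
outstanding = [c for c in queue if not c.applied]
```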
Updating Haplo from other systems
To sync data from other systems, for example Research Projects and their status, use the Data Import REST APIs.
Information about users should generally be maintained using a strategy from the User access section above, but this can be augmented with the Data Import REST APIs as long as only one method is used to update each item of information.
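Syncing project records via the Data Import REST APIs might look like the sketch below. The record shape and field names are hypothetical; the real structure is defined by the application's data import configuration.

```python
import json

def project_import_record(project_code, title, status):
    """Build one record for a Data Import REST API batch.

    Hypothetical field names: the actual record structure is defined
    per-application during the implementation project.
    """
    return {
        "identifier": project_code,
        "fields": {"title": title, "status": status},
    }

batch = [
    project_import_record("PROJ-042", "Battery materials study", "Active"),
]
payload = json.dumps({"records": batch})
```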
Reporting and Business Intelligence
Reporting dashboards for day to day use of the application are configured during the implementation project.
For ad-hoc reporting, users can download data from these dashboards as Excel spreadsheets, or use an OData feed to connect to Business Intelligence tools such as Microsoft Power BI.
To mirror data from Haplo applications in a data warehouse, institutions can:
- use the global observation message queue to observe data as it changes,
- synchronise using the OData feed,
- implement a custom connector using the Haplo Reporting REST API.
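For the OData option, mirroring the feed into a warehouse reduces to following the feed's paging links. In the sketch below, `fetch` stands in for whatever authenticated HTTP client you use; the `@odata.nextLink` convention is standard OData, but the feed URL and credentials are application-specific assumptions.

```python
def fetch_all_records(fetch, url):
    """Collect every record from a paged OData feed.

    `fetch` is any callable that takes a URL and returns the decoded
    JSON response; pass a thin wrapper around your HTTP client that
    adds the feed's credentials.
    """
    records = []
    while url:
        page = fetch(url)
        records.extend(page.get("value", []))
        url = page.get("@odata.nextLink")  # absent on the final page
    return records

# Example with a stubbed fetcher returning two pages.
pages = {
    "https://example.haplo.app/feed":
        {"value": [{"ref": "1"}],
         "@odata.nextLink": "https://example.haplo.app/feed?page=2"},
    "https://example.haplo.app/feed?page=2":
        {"value": [{"ref": "2"}]},
}
records = fetch_all_records(pages.get, "https://example.haplo.app/feed")
```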
To record all official notifications, use the global observation message queue and record the notification text as workflows progress.
In addition, the Email relay can be used to archive a copy of all email sent from the application, as well as ensuring deliverability of notifications.
Data can be imported at any time using the batch import mechanism.
During the implementation project, Haplo will work with you to import any historical data, such as that needed for PhD Manager’s data calculations and full project record.
After go-live, test environments are provided so that data models can be mapped and imports tested before data is imported into the live application. Please contact the Haplo support desk for assistance.
Custom integration methods
If the standard “out of the box” integrations described above are not a good fit for your systems, custom APIs and integrations can be provided. This will incur additional costs for development and ongoing support.