Guidewire PolicyCenter – Rating Testing

One of the most important components of a Guidewire PolicyCenter (PC) implementation is the integration with the rating engine. This is typically either the Guidewire (GW) rating engine or an external rating engine such as Oracle Insbridge, Softrater, etc. When testing this key component, validation should not be limited to rate table comparisons and premium calculations; it should also ensure that data flows correctly between the core system and the rating engine.

Most teams face three major challenges during rating implementation:

Lack of communication between teams

Very often, a lack of communication between teams increases defect turnaround time. Defect triaging becomes challenging because it is difficult to determine where to start debugging. For example, a premium mismatch may be caused by a miscalculation in the external rating engine or by a plugin issue: either incorrect input was passed from PC to the external rating engine, or the results returned by the rating engine were mapped back to PC incorrectly. (A minimal triage sketch follows this list of challenges.)

Lack of automation for regression testing

When automated regression testing is not in place, any change in rating logic during user acceptance testing (UAT) or late in the stabilization phase significantly increases regression testing effort and becomes a major risk to the program.

Lack of proper documentation

Zero or minimal documentation is another significant challenge teams face on legacy replacement projects. Actuarial teams typically complete premium verification during the development phase, and the test scope is handed over to the QA team for the first time only during the stabilization phase. This reliance on actuarial teams (who are likely busy with other projects) can add to the QA learning curve and adversely affect test progress and the overall program timeline.
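To make triaging concrete, the sketch below shows one way a QA team might narrow a premium mismatch down to an input-mapping issue, a result-mapping issue, or an engine-side rating issue. The payload shapes and field names are hypothetical, not a real PolicyCenter or Insbridge schema.

```python
# Hypothetical triage helper: given the rating request PC sent and the
# response the rating engine returned, narrow a premium mismatch down to
# an input-mapping issue, an engine-side issue, or a result-mapping issue.
# Field names and payload shapes are illustrative only.

def triage_premium_mismatch(pc_policy, engine_request, engine_response, pc_premium):
    issues = []

    # Step 1: did PC pass the right inputs to the rating engine?
    for field, expected in pc_policy.items():
        sent = engine_request.get(field)
        if sent != expected:
            issues.append(f"input mapping: '{field}' sent as {sent!r}, expected {expected!r}")

    # Step 2: did the premium PC stored match what the engine returned?
    engine_premium = engine_response.get("totalPremium")
    if engine_premium != pc_premium:
        issues.append(f"result mapping: engine returned {engine_premium}, PC stored {pc_premium}")

    # If inputs and result mapping both check out, the mismatch is likely
    # inside the rating engine itself (rate tables or algorithm).
    return issues or ["inputs and result mapping OK: suspect engine-side rates/algorithm"]


if __name__ == "__main__":
    pc_policy = {"limit": "100/300", "territory": "TX-07", "driverAge": 24}
    engine_request = {"limit": "100/300", "territory": "TX-07", "driverAge": 42}  # digits transposed
    engine_response = {"totalPremium": 1184.00}
    for issue in triage_premium_mismatch(pc_policy, engine_request, engine_response, 1184.00):
        print(issue)
```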

Cynosure follows proven best practices for QA rating validation

In an agile world, teams often develop rate books, rate tables, and rating algorithms continuously during the development phase, even before the rates are filed with and approved by the state insurance department. An important advantage of this approach is that robust algorithms and rate routines are built early, making the overall implementation stable. Rate books are typically finalized during the stabilization phase, when some changes to rate factors are expected. Any updated rate factors can be exported from the system and easily compared against the changed requirements.
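As a minimal sketch of that export-and-compare step, assuming factors are exported to a two-column CSV of factor_name and value (both file names below are placeholders):

```python
# Illustrative sketch: compare rate factors exported from the rating engine
# against the expected factors from the updated requirements.
import csv

def load_factors(path):
    # Assumed CSV layout: header row with columns factor_name, value.
    with open(path, newline="") as f:
        return {row["factor_name"]: float(row["value"]) for row in csv.DictReader(f)}

exported = load_factors("exported_rate_factors.csv")   # placeholder file name
expected = load_factors("expected_rate_factors.csv")   # placeholder file name

for name, want in expected.items():
    got = exported.get(name)
    if got is None:
        print(f"MISSING in export: {name}")
    elif abs(got - want) > 1e-9:
        print(f"MISMATCH {name}: exported {got}, expected {want}")
```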

The following best practices have helped teams easily mitigate the challenges listed above:

When defects are identified, a test team with a broad understanding of the system verifies data integrity between systems rather than limiting its checks to premium comparisons alone. This allows the testing team to share details on the cause of a premium mismatch, which helps the development team debug and fix issues quickly.
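One way to go beyond a total-premium check is to compare the per-coverage cost breakdown in PC against the rating engine's worksheet, so the defect report can state exactly which coverage diverged. The breakdown structure and amounts below are illustrative:

```python
# A minimal sketch of a per-coverage cost comparison between PC and the
# rating engine, rather than a single total-premium check. Coverage names
# and amounts are hypothetical.
pc_costs = {"BodilyInjury": 412.00, "PropertyDamage": 188.50, "Collision": 301.25}
engine_costs = {"BodilyInjury": 412.00, "PropertyDamage": 196.50, "Collision": 301.25}

for coverage in sorted(set(pc_costs) | set(engine_costs)):
    pc_amt, eng_amt = pc_costs.get(coverage), engine_costs.get(coverage)
    if pc_amt != eng_amt:
        # This detail goes into the defect report for the dev team.
        print(f"{coverage}: PC={pc_amt} engine={eng_amt}  <-- mismatch")
```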

The Cynosure Test Automation Framework (CTAF) is built using open source tools to test ratings

Cynosure has developed a test framework using open source tools to automate rating validation. Key business and data combinations are automated. For example, liability coverage with different limit structures (such as a combined single limit vs. split limits) is validated through automation, while a rarely used additional coverage, such as an additional electronics endorsement, is not automated. Another example is covering different types of discounts, such as the Good Student Discount or the Affinity Group Discount, depending on the possible combinations.
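CTAF's internals are not described here, but a limit combination like the one above maps naturally onto a parameterized test. The sketch below uses pytest; rate_policy() is a stand-in for whatever call drives PC and returns a premium, and the expected values are placeholders, not real rates:

```python
# A sketch of how key limit combinations might be automated with pytest.
import pytest

def rate_policy(structure, limits):
    # Stand-in for the framework call that quotes a policy in PC and
    # returns its premium; a canned table here so the sketch runs on its own.
    canned = {"CSL": 1250.00, "Split": 1198.00}
    return canned[structure]

@pytest.mark.parametrize("structure,limits,expected", [
    ("CSL",   {"combined": 300_000},                             1250.00),
    ("Split", {"per_person": 100_000, "per_accident": 300_000},  1198.00),
])
def test_liability_premium(structure, limits, expected):
    assert rate_policy(structure, limits) == pytest.approx(expected, abs=0.01)
```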

One of the best practices a team can follow is to map the coverage names in the requirement documents to the cost attribute names in the GW rating engine tables (or the corresponding tables in the external rating engine). This helps the team export data and make quick comparisons. In addition, we leverage Guidewire’s built-in ‘Impact Testing’ tool, which helps actuarial teams test rapidly when rates change. Capturing data and comparing results against previously validated premiums (until the rates are revised) is an alternative approach that teams often follow.
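A sketch of that naming map and the quick comparison it enables; the coverage names, attribute names, and premiums below are illustrative, not actual PC identifiers:

```python
# Sketch of keeping one source of truth for coverage naming: the
# requirement-document name maps to the cost attribute name used in the
# rate tables, so exports from either side can be joined and compared.
COVERAGE_MAP = {
    "Bodily Injury Liability": "PABodilyInjuryCov_Premium",
    "Property Damage Liability": "PAPropertyDamageCov_Premium",
    "Comprehensive": "PAComprehensiveCov_Premium",
}

requirement_premiums = {"Bodily Injury Liability": 412.00, "Comprehensive": 96.00}
rate_table_export = {"PABodilyInjuryCov_Premium": 412.00, "PAComprehensiveCov_Premium": 94.00}

for req_name, expected in requirement_premiums.items():
    attr = COVERAGE_MAP[req_name]
    actual = rate_table_export.get(attr)
    status = "OK" if actual == expected else f"MISMATCH (got {actual})"
    print(f"{req_name} -> {attr}: {status}")
```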

Reusable Tools, Templates and Collators

The Cynosure QA team brings tools and templates for rating validation that can be customized to the client’s requirements (such as meeting state regulations). Rules for minimum premium adjustment, premium override, penny-to-penny matching, or premium rounding can also affect the testing effort and are factored in accordingly. If legacy policies are being migrated, separate premium comparisons are performed using migrated policies. Rate capping (if applicable) is another important piece of functionality that must be tested thoroughly.
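Encoding such rules once keeps expected values consistent across test data. A minimal sketch, assuming half-up penny rounding and a $100.00 minimum premium (both assumptions; real rules come from the client’s rating manual):

```python
# Sketch of encoding client-specific premium rules so expected test values
# can be derived consistently. Rounding mode and minimum premium amount
# are assumptions, not a specific client's actual rules.
from decimal import Decimal, ROUND_HALF_UP

MINIMUM_PREMIUM = Decimal("100.00")  # assumed state-specific minimum

def apply_premium_rules(raw_premium: Decimal) -> Decimal:
    # Round to the penny, then enforce the minimum premium.
    rounded = raw_premium.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    return max(rounded, MINIMUM_PREMIUM)

assert apply_premium_rules(Decimal("98.1449")) == Decimal("100.00")   # minimum applies
assert apply_premium_rules(Decimal("1234.567")) == Decimal("1234.57") # penny rounding
```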
