Solved

Why has the number of false positives in screening results increased after a recent rules or configuration update?

  • January 8, 2026
  • 1 reply
  • 2 views

Users often notice a sudden spike in false positives following a screening rules update or a change to provider configurations. This typically happens when the matching algorithms or fuzzy-match thresholds are adjusted to broader sensitivity settings. In Fenergo, updates to screening provider parameters or matching logic — such as changes to alias weighting, match confidence, or name field inclusion — can expand the range of potential matches returned for each entity, increasing the likelihood of false positives.
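To illustrate the effect described above, here is a minimal sketch of how broadening a fuzzy-match threshold expands the candidate set. It uses Python's standard-library `difflib.SequenceMatcher` as a stand-in for the provider's matching algorithm, and the watchlist names are invented for the example — this is not the Fenergo or provider API.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist entries, for illustration only.
watchlist = ["John Smith", "Jon Smyth", "Joan Smythe", "James Smithson"]

def screen(name, threshold):
    """Return watchlist entries whose similarity to `name` meets the threshold."""
    return [entry for entry in watchlist
            if SequenceMatcher(None, name.lower(), entry.lower()).ratio() >= threshold]

# A strict threshold returns only close matches...
print(screen("John Smith", 0.90))
# ...while a broader one expands the candidate set, raising false-positive risk.
print(screen("John Smith", 0.60))
```

Dropping the threshold from 0.90 to 0.60 grows the hit list from one candidate to three, which is exactly the pattern users see after a sensitivity-widening configuration change.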

Best answer by jawadkhan

False positives increase when screening rule sets or fuzzy-match thresholds are adjusted to broader sensitivity settings. Recent updates to provider configurations or matching algorithms can expand the number of hits per entity.

To diagnose:

  • Verify the matching algorithm version in your screening provider configuration.
  • Review any changes to fuzzy-match parameters (e.g., Levenshtein distance, alias weighting).
  • Check whether new name fields or alternate identifiers were added in your Policy or Data Model.
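As a concrete view of the Levenshtein-distance parameter mentioned above, the sketch below shows how widening the allowed edit distance pulls in more near-miss names. The distance function is a textbook dynamic-programming implementation and the names are hypothetical; your provider's tokenization and scoring will differ.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical candidate names, for illustration only.
names = ["Ali Hassan", "Aly Hasan", "Ally Hassn", "Alan Harris"]
for max_edits in (1, 2, 3):
    hits = [n for n in names if levenshtein("ali hassan", n.lower()) <= max_edits]
    print(max_edits, hits)
```

Each extra edit of tolerance admits another variant spelling, so a configuration change that raises the permitted distance will mechanically increase hit volume even though no underlying data changed.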

To mitigate:

  • Adjust match score thresholds or confidence levels.
  • Apply exclusion lists for low-risk entities.
  • Validate configuration changes in UAT before promoting to Production.
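The first two mitigation steps can be sketched together: raise the minimum match score and suppress hits against a reviewed exclusion list. The hit records and field names below are assumptions for illustration, not the shape of Fenergo's actual screening payload.

```python
# Hypothetical hit records, roughly as a screening provider might return them.
hits = [
    {"entity": "Acme Ltd",    "match": "ACME LLC",     "score": 0.97},
    {"entity": "Acme Ltd",    "match": "Acma Trading", "score": 0.71},
    {"entity": "Jane Doe",    "match": "Jane Doe",     "score": 1.00},
    {"entity": "City Bakery", "match": "City Bakers",  "score": 0.88},
]

# Low-risk entities already cleared by prior review (hypothetical).
EXCLUSIONS = {"City Bakery"}

def triage(hits, min_score=0.85):
    """Keep only hits at or above the confidence threshold and not excluded."""
    return [h for h in hits
            if h["score"] >= min_score and h["entity"] not in EXCLUSIONS]

print(triage(hits))                        # drops the 0.71 hit and the excluded entity
print(len(triage(hits, min_score=0.70)))  # a looser threshold readmits more hits
```

Note the trade-off: a higher `min_score` cuts false positives but can also suppress true matches, which is why the third step (validating in UAT) matters before any threshold change reaches Production.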

Best Practice: Deploy rule changes in controlled environments and track false-positive ratios before go-live.
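Tracking the false-positive ratio before go-live, as recommended above, only needs alert dispositions from the test environment. A minimal sketch (the disposition labels are assumptions):

```python
def false_positive_ratio(dispositions):
    """Share of alerts dispositioned as false positives; 0.0 when there are no alerts."""
    if not dispositions:
        return 0.0
    fps = sum(1 for d in dispositions if d == "false_positive")
    return fps / len(dispositions)

# Hypothetical dispositions from a UAT run before and after a rule change.
before = ["true_match", "false_positive", "false_positive", "true_match"]
after  = ["false_positive"] * 7 + ["true_match"] * 3

print(round(false_positive_ratio(before), 2))  # 0.5
print(round(false_positive_ratio(after), 2))   # 0.7
```

A jump in this ratio between the pre-change and post-change UAT runs is the signal to revisit thresholds before promoting the rule set.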

1 reply

  • Community Manager
  • Answer
  • January 15, 2026