Quality - 0.0.1

Coverage: statement 86.30%, branch 76.04%

Run complex data quality rules using simple SQL in a batch or streaming Spark application at scale.

Write rules using simple SQL, or create reusable functions via SQL Lambdas.

Your rules are just versioned data: store them wherever is convenient and use them by simply defining a column.
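As a minimal sketch of the idea in plain Spark (this is not the library's own API; the table layout and rule names are invented for illustration), rules can live in an ordinary versioned table and become columns via `expr`:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder().master("local[*]").appName("rules-as-data").getOrCreate()
import spark.implicits._

// Rules are plain versioned data: a tiny in-memory table here, but they
// could equally live in a Delta table, a JDBC store, or a file.
val rules = Seq(
  (1, "not_null_id",  "id IS NOT NULL"),
  (1, "amount_range", "amount BETWEEN 0 AND 10000")
).toDF("version", "name", "sql")

val data = Seq((1L, 250.0), (2L, -5.0)).toDF("id", "amount")

// "Using" a rule is just defining a column from its SQL text.
val checked = rules.collect().foldLeft(data) { (df, rule) =>
  df.withColumn(rule.getAs[String]("name"), expr(rule.getAs[String]("sql")))
}
checked.show()
```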

  • 🆕 - Simplified aggExpr - specify the types once and it handles decimal precision issues
  • 🆕 - Higher Order Functions - pass lambdas to lambdas, partially apply them, return them, and use them in Spark SQL functions (see the sketch below)
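Spark SQL's built-in higher-order functions already take lambdas directly, which is the style these extensions build on. A minimal example using only standard Spark, with no library-specific functions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("hof").getOrCreate()

// transform and aggregate are standard Spark SQL higher-order functions;
// the SQL Lambdas described above add reuse, partial application, and
// passing lambdas to lambdas on top of this style.
spark.sql(
  """SELECT
    |  transform(array(1, 2, 3), x -> x * 2)             AS doubled,
    |  aggregate(array(1, 2, 3), 0, (acc, x) -> acc + x) AS total
    |""".stripMargin
).show()
```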

Rules are evaluated lazily during Spark actions, such as writing rows, with results saved in a single predictable and extensible column.
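A sketch of what that looks like in plain Spark (the column name `quality` and its nested field names are illustrative assumptions, not the library's actual output schema):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{expr, struct}

val spark = SparkSession.builder().master("local[*]").appName("lazy-rules").getOrCreate()
import spark.implicits._

val df = Seq((1L, 250.0), (2L, -5.0)).toDF("id", "amount")

// One predictable, extensible column holding all rule results.
// withColumn only extends the query plan; no rule is evaluated yet.
val withQuality = df.withColumn(
  "quality",
  struct(
    expr("id IS NOT NULL").as("not_null_id"),
    expr("amount BETWEEN 0 AND 10000").as("amount_range")
  )
)

// The rules run lazily, only when an action such as a write fires.
withQuality.write.mode("overwrite").parquet("/tmp/quality-demo")
```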

Enhanced Spark Functionality

Lookup functions are distributed across the Spark cluster and held in memory, so no shuffling is required; this suits cases where the shuffle introduced by joins would be too expensive:

  • Support for massive Bloom filters while retaining a low false-positive probability (FPP): several billion items at an FPP of 0.001 would not fit into a normal 2 GB byte array, as the sizing sketch after these bullets shows
  • Map lookup expressions for exact lookups and contains tests; using broadcast variables under the hood, they are a great fit for small reference data sets
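The 2 GB figure follows from the standard Bloom filter sizing formula, m = -n ln(p) / (ln 2)^2 bits for n items at false-positive probability p. A quick check in pure Scala (no Spark needed):

```scala
// Optimal Bloom filter size in bytes: m = -n * ln(p) / (ln 2)^2 bits.
def bloomBytes(items: Long, fpp: Double): Double =
  -items * math.log(fpp) / math.pow(math.log(2), 2) / 8

val needed = bloomBytes(3000000000L, 0.001)  // ~5.4e9 bytes for 3 billion items
val arrayLimit = Int.MaxValue.toDouble       // a JVM byte array tops out near 2 GB

println(f"needed: ${needed / 1e9}%.1f GB, single byte-array limit: ${arrayLimit / 1e9}%.1f GB")
// needed: 5.4 GB, single byte-array limit: 2.1 GB
```

So three billion items at 0.001 FPP need roughly 5.4 GB of bits, well past what a single JVM byte array can hold; hence the need for Bloom filters that span multiple backing arrays.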

  • Lambda Functions - user-provided, reusable SQL functions over late-bound columns

  • Fast PRNGs exposing RandomSource, allowing pluggable and stable generation across the cluster

  • Aggregate functions over Maps, expandable with simple SQL Lambdas (a plain-Spark illustration follows this list)

  • Row ID expressions including guaranteed unique row IDs (based on MAC address guarantees)
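For the map aggregates, standard Spark SQL can already fold over a map's values with a lambda; a minimal illustration of that baseline (the library's expandable aggregates extend this pattern and are not shown here):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("map-agg").getOrCreate()

// Fold over a map's values with a SQL lambda using only Spark built-ins.
spark.sql(
  """SELECT aggregate(
    |         map_values(map('a', 1, 'b', 2, 'c', 3)),
    |         0,
    |         (acc, v) -> acc + v
    |       ) AS total
    |""".stripMargin
).show()
// +-----+
// |total|
// +-----+
// |    6|
// +-----+
```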

Plus a collection of handy functions to integrate it all.

