Industry: Insurance
Solution Space: Big Data / Cloud (AWS) / Hadoop / Spark
Our customer in the life insurance space provides insurance products coupled to mobile airtime payments. This generates a significant volume of data that must be processed and archived.
Our big data engineers automated the full pipeline: instantiating an AWS EC2 cluster, launching Hortonworks Hadoop on the EC2 cluster (not EMR), transporting data from the client's data centre to AWS, processing the data with Spark, storing the results on S3, and finally shutting the AWS cluster down.
Industry: Banking
Solution Space: Big Data / Hadoop / Spark
Our customer in the banking space creates thousands of metrics as part of its batch customer credit-score process, thousands of which are derived from credit bureau data. Our big data engineers migrated these credit bureau metric generation processes from the mainframe to Spark on Hadoop. We reduced the processing time for 1.35 billion metrics from batch jobs spread across many days on the mainframe to a single Apache Spark job covering all customers that completes in 23 minutes.
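The migrated job follows a simple pattern: group the bureau records by customer, then compute every metric for that customer in one pass, so the whole run parallelises across customers. A plain-Python stand-in for the Spark job is sketched below; the metric definitions are illustrative, not the bank's actual credit-score logic.

```python
# Illustrative bureau-style metrics, not the bank's actual definitions.
# Each metric is a function over one customer's bureau records.
METRICS = {
    "enquiries_total": lambda recs: sum(r["enquiries"] for r in recs),
    "max_arrears": lambda recs: max(r["arrears"] for r in recs),
    "account_count": lambda recs: len(recs),
}

def score_metrics(bureau_records_by_customer):
    """Compute every metric for every customer.

    On Spark the equivalent is a groupBy(customer) followed by the same
    per-customer aggregations, distributed across the cluster; here the
    grouping is pre-done in a plain dict for illustration.
    """
    return {
        customer: {name: fn(records) for name, fn in METRICS.items()}
        for customer, records in bureau_records_by_customer.items()
    }
```

Because each customer's metrics are independent, adding executors scales the job almost linearly, which is what collapses multi-day mainframe batches into a single short run.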
Industry: Health Insurance
Solution Space: Machine Learning
Our customer in the health insurance space has pharmaceutical claims processing logic spread across multiple systems, embedded in application code rather than in rule management systems. How can the consistency of these rules be ensured and anomalies spotted? Our machine learning engineers built XGBoost-based models to reverse engineer the customer's pharmaceutical claim processing rules, enabling the identification of anomalous claim results for human investigation.
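The anomaly-surfacing step of this approach can be sketched as: where a confidently trained model disagrees with the actual adjudication outcome, either a rule was applied inconsistently or the claim is genuinely unusual, and a human should look. The sketch below shows only that comparison step in plain Python; the `predict` callable and the threshold are placeholders for the trained XGBoost model and its tuned confidence cut-off.

```python
def flag_anomalies(claims, predict, threshold=0.9):
    """Flag claims where a confident model prediction disagrees with
    the actual adjudication outcome.

    claims    -- iterable of (claim_features, actual_outcome) pairs
    predict   -- callable returning (predicted_outcome, confidence);
                 in the real system this wraps the trained XGBoost model
    threshold -- minimum confidence before a disagreement is flagged
                 (illustrative value, not the production cut-off)
    """
    anomalies = []
    for features, actual in claims:
        predicted, confidence = predict(features)
        # A confident disagreement suggests an inconsistent rule
        # implementation or an unusual claim worth human review.
        if predicted != actual and confidence >= threshold:
            anomalies.append((features, actual, predicted, confidence))
    return anomalies
```

Low-confidence disagreements are deliberately not flagged: they indicate the model has not learned that region of the rule space, not that the claim system misbehaved.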
Industry: Life Insurance
Solution Space: Big Data / Hadoop
We manage the on-premises Hortonworks Hadoop cluster for one of South Africa's largest insurance/asset management organisations: Hortonworks HDP 3.1 with Active Directory integration and comprehensive data access management.
Industry: Banking
Solution Space: Big Data / Stream processing / Kafka / Camel
Our customer in the banking space has industry-leading capabilities in pre-approved products for customers and batch processes to market these products to them. Can products be marketed to customers based upon their actions? We implemented real-time stream processing based on Kafka, with data enrichment, decisioning and actioning handled by a mix of Hortonworks stack components and Apache Camel. Now our customer's clients are offered relevant banking products based on their needs at a moment in time.
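The per-event flow is enrich, then decide, then action: join the raw event with the customer's profile, evaluate the pre-approval rules against it, and emit an offer if one matches. A minimal sketch of the first two stages follows, in plain Python rather than the Kafka/Camel pipeline; the event shapes, profile lookup, and product rule are all illustrative assumptions, not the customer's actual logic.

```python
def enrich(event, customer_profiles):
    """Enrichment: join the raw event with customer profile data.
    In the real pipeline events arrive via Kafka and the lookup runs
    inside the stream processor; here it is a plain dict lookup.
    """
    enriched = dict(event)
    enriched["profile"] = customer_profiles.get(event["customer_id"], {})
    return enriched

def decide(enriched_event):
    """Decisioning: toy rule offering a pre-approved product when the
    event signals a matching need. Event types and product names are
    illustrative placeholders.
    """
    profile = enriched_event["profile"]
    if (enriched_event["type"] == "travel_purchase"
            and "travel_insurance" in profile.get("pre_approved", [])):
        return {"customer_id": enriched_event["customer_id"],
                "offer": "travel_insurance"}
    return None  # no relevant pre-approved product for this action
```

The actioning stage (not shown) would route any non-None decision to an outbound channel, which is where a routing framework like Apache Camel earns its place.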
Industry: Life Insurance
Solution Space: Big Data / Feature Store / Hadoop / Spark
Our customer in the insurance and asset management space wanted to predict attrition based upon 500+ complex features calculated daily across all customers, more than 3 billion metrics in total. The existing data warehouse infrastructure could not support this level of computation without impacting existing operational processes. Our big data engineers migrated the code to Spark on Hadoop, and these metrics are now calculated in 35 minutes.