Data Visualisation and Communication

Updated: 2020-09-17 | Categories: Executive Curriculum Adv Electives, Data Culture Electives, Data Science Curriculum, Introductory, Data Visualisation, Dr Eugene Dubossarsky, All Academy Courses

This course prepares data analytics professionals to communicate analytical results to business audiences in a business context, while remaining mindful of the audience's skills, incentives, priorities and psychology. It also equips analysts [...]

Intro to R (+ data visualisation)

Updated: 2020-09-18 | Categories: Level 1, Data Culture Electives, Impact, Data Science Curriculum, R, Data Visualisation, Data Engineering Curriculum, Dr Eugene Dubossarsky, AI Engineering Curriculum, All Academy Courses

This R training course will introduce you to the R programming language, teaching you to create functions and customise code so you can manipulate data and begin to use R self-sufficiently in your work. R is among the world's most popular data-mining and statistics packages. It's also free and easy to use, with a range of intuitive graphical interfaces.

Intro to Python for Data Analysis

Updated: 2020-02-14 | Categories: Level 1, Data Culture Electives, Data Science Curriculum, Python, Data Engineering Curriculum, Dr Eugene Dubossarsky, AI Engineering Curriculum, All Academy Courses

Python is a high-level, general-purpose language used by a thriving community of millions. Data-science teams often use it in their production environments and analysis pipelines, and it's the tool of choice for elite data-mining competition winners and for much deep-learning innovation. This course provides a foundation for using Python in exploratory data analysis and visualisation, and as a stepping stone to machine learning.
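The kind of exploratory analysis this course builds towards can be sketched in a few lines. This is an illustrative example only (the dataset and field names are hypothetical, and a course exercise would typically use a library such as pandas rather than the standard library alone):

```python
import statistics
from collections import Counter

# Hypothetical order records; in practice these would be loaded from a
# CSV file or database rather than defined inline.
orders = [
    {"region": "NSW", "amount": 120.0},
    {"region": "VIC", "amount": 95.5},
    {"region": "NSW", "amount": 180.0},
    {"region": "QLD", "amount": 60.0},
    {"region": "VIC", "amount": 130.0},
]

# Basic exploratory summaries: central tendency and a group count.
amounts = [o["amount"] for o in orders]
print("mean:", statistics.mean(amounts))
print("median:", statistics.median(amounts))
print("orders per region:", Counter(o["region"] for o in orders))
```

The same pattern — load, summarise, count by group — is the starting point for most exploratory work, whatever library ends up doing the heavy lifting.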

Data Transformation and Analysis Using Apache Spark

Updated: 2020-09-18 | Categories: Jeffrey Aven, Level 1, Apache Spark Training with Jeffrey Aven, Experienced Analytics Instructor + Big Data Author, Data Science Curriculum Electives, Data Governance Curriculum Electives, Apache Spark, Data Engineering Curriculum, All Academy Courses

With big data expert and author Jeffrey Aven. Learn how to develop applications using Apache Spark. The first module in the "Big Data Development Using Apache Spark" series, this course provides a detailed overview of the Spark runtime and application architecture, processing patterns, functional programming using Python, fundamental API concepts and basic programming skills, with deep dives into additional constructs including broadcast variables, accumulators, and storage and lineage options. Attendees will gain an understanding of the Spark framework and runtime architecture, learn the fundamentals of programming for Spark, master basic transformations, actions and operations, and be prepared for advanced Spark topics including streaming and machine learning.
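The functional-programming model the course teaches can be previewed without a cluster. The sketch below is plain Python, not Spark: lazy generators stand in for transformations (which build a pipeline but do nothing), and consuming them stands in for actions (which trigger execution); the shared read-only dictionary mimics the role a broadcast variable plays in Spark. The data and names here are illustrative only.

```python
from functools import reduce

data = range(1, 11)

# Broadcast-style read-only lookup shared by every "task". In Spark this
# would be created with sc.broadcast({...}) and read via .value inside
# the closure; here a plain dict illustrates the idea.
labels = {0: "even", 1: "odd"}

# "Transformations": lazy, nothing executes yet.
transformed = (f"{n}:{labels[n % 2]}" for n in data if n % 2 == 1)

# "Actions": these force the pipeline to run.
collected = list(transformed)                               # like .collect()
total = reduce(lambda a, b: a + b, (n * n for n in data))   # like .reduce()

print(collected)
print(total)
```

The distinction matters in Spark because transformations are only recorded in the lineage graph; work is scheduled across the cluster only when an action demands a result.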

Stream and Event Processing using Apache Spark

Updated: 2020-09-18 | Categories: Jeffrey Aven, Apache Spark Training with Jeffrey Aven, Experienced Analytics Instructor + Big Data Author, Level 2, Data Science Curriculum Electives, Apache Spark, Data Engineering Curriculum, All Academy Courses

The second module in the "Big Data Development Using Apache Spark" series, this course provides the Spark streaming knowledge needed to develop real-time, event-driven processing applications using Apache Spark. It covers using Spark with NoSQL systems and with popular messaging platforms such as Apache Kafka and Amazon Kinesis. It examines the Spark streaming architecture in depth, and uses practical hands-on exercises to reinforce the use of transformations and output operations, as well as more advanced stream-processing patterns. With big data expert and author Jeffrey Aven.
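One core pattern the course covers — windowed aggregation over an event stream — can be sketched in plain Python. This is an illustrative stand-in with made-up events, not Spark code: it assigns each event to a tumbling window and counts per window and event type, which is the same grouping a windowed count in Spark Streaming performs across a cluster.

```python
from collections import defaultdict

# Hypothetical event stream: (timestamp_seconds, event_type) pairs.
events = [(1, "click"), (3, "click"), (7, "view"), (12, "click"),
          (14, "view"), (21, "click")]

WINDOW = 10  # tumbling-window length in seconds

# Assign each event to the window containing its timestamp, then count
# occurrences per (window, event_type).
counts = defaultdict(int)
for ts, kind in events:
    window_start = (ts // WINDOW) * WINDOW
    counts[(window_start, kind)] += 1

for (start, kind), n in sorted(counts.items()):
    print(f"[{start}-{start + WINDOW}) {kind}: {n}")
```

Real streaming engines add what this sketch omits: unbounded input, sliding (not just tumbling) windows, late-arriving events and fault-tolerant state.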

Advanced Analytics Using Apache Spark

Updated: 2020-09-18 | Categories: Jeffrey Aven, Apache Spark Training with Jeffrey Aven, Experienced Analytics Instructor + Big Data Author, Data Science Curriculum Electives, Apache Spark, Level 3, R Electives, AI Engineering Curriculum, All Academy Courses

With big data expert and author Jeffrey Aven. The third module in the "Big Data Development Using Apache Spark" series, this course provides the practical knowledge needed to perform statistical, machine-learning and graph-analysis operations at scale using Apache Spark. It enables data scientists and statisticians with experience in other frameworks to extend their knowledge to the Spark runtime and its APIs and libraries, which are designed to implement machine learning and statistical analysis in a distributed, scalable processing environment.
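What makes a statistic computable "at scale" is that it decomposes into per-partition partial results that a driver can combine. The sketch below is a minimal, plain-Python illustration of that pattern for the mean (the partition contents are made up); distributed engines such as Spark apply the same map-side/reduce-side decomposition to far richer statistics and model updates.

```python
# Each "partition" computes a local (sum, count) -- the map side.
partitions = [[2.0, 4.0, 6.0], [1.0, 3.0], [5.0]]
partials = [(sum(p), len(p)) for p in partitions]

# The driver combines the partials -- the reduce side.
total, count = (sum(col) for col in zip(*partials))
mean = total / count
print(mean)
```

The key property is that the partials are small and combine associatively, so no single machine ever needs to hold the full dataset.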

Fraud and Anomaly Detection

Updated: 2020-09-18 | Categories: Level 2, Data Science Curriculum Electives, Fraud and Security, R, Dr Eugene Dubossarsky, Financial Risk, All Academy Courses

This course presents statistical, computational and machine-learning techniques for the predictive detection of fraud and security breaches. The methods are shown in the context of practical use cases, and include the extraction of business rules and a framework for combining human, rule-based, predictive and outlier-detection approaches. Techniques covered include predictive tools that do not rely on explicit fraud labels; a range of outlier-detection and unsupervised learning methods, notably the powerful random-forest algorithm, which can be applied to both supervised and unsupervised problems; cluster analysis; visualisation; and fraud detection based on Benford's law. The course also covers the analysis and visualisation of social-network data. A basic knowledge of R and predictive analytics is advantageous.
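Benford's law is the most self-contained of the techniques listed: in many naturally occurring datasets the leading digit d appears with probability log10(1 + 1/d), so sharp deviations in, say, invoice amounts can flag fabricated figures. The sketch below (in Python for illustration; the course itself uses R) compares observed leading-digit frequencies against the Benford expectation, using powers of 2 as a stand-in dataset known to follow the law:

```python
import math
from collections import Counter

def benford_expected(d):
    # Benford's law: P(leading digit = d) = log10(1 + 1/d), d in 1..9
    return math.log10(1 + 1 / d)

def leading_digit_frequencies(values):
    digits = [int(str(v)[0]) for v in values]
    counts = Counter(digits)
    n = len(values)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

# Stand-in "transaction amounts": powers of 2 follow Benford's law closely.
# Real screening would use observed invoice or claim amounts instead.
sample = [2 ** k for k in range(1, 1001)]
observed = leading_digit_frequencies(sample)

for d in range(1, 10):
    deviation = abs(observed[d] - benford_expected(d))
    print(d, round(observed[d], 3), round(benford_expected(d), 3),
          round(deviation, 3))
```

In practice the observed and expected distributions would be compared with a formal test (e.g. chi-square), and flagged accounts investigated rather than automatically labelled fraudulent.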

Stars, Flakes, Vaults and the Sins of Denormalisation

Updated: 2020-09-18 | Categories: Data Governance Level 2, Innovation & Tech (CTO) Curriculum Electives, Data Governance Curriculum Electives, Innovation & Tech (CTO) Level 2, Stephen Brobst, Data Engineering Curriculum, Data Management, AI Engineering Curriculum, Data Engineering Level 1, AI Engineering Level 1, All Academy Courses

Providing both performance and flexibility is often seen as a pair of contradictory goals in designing large-scale data implementations. This talk discusses techniques for denormalisation and provides a framework for understanding the performance and flexibility implications of various design options. We will examine a variety of logical and physical design approaches and evaluate the trade-offs between them. Specific recommendations are made for guiding the translation from a normalised logical data model to an engineered-for-performance physical data model. The roles of dimensional modelling and various physical design approaches are discussed in detail, as are best practices in the use of surrogate keys. The focus is on understanding the benefit (or not) of denormalisation approaches commonly taken in analytic database designs.
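The core trade-off can be made concrete with a toy star schema. The sketch below (illustrative data and column names, not from the talk) shows a fact table referencing dimensions via surrogate keys, and a denormalised view that copies dimension attributes onto each fact row — faster to scan and aggregate without joins, at the cost of redundancy and harder updates:

```python
# Normalised source: dimensions keyed by surrogate keys, plus a fact table.
dim_product = {1: {"name": "widget", "category": "hardware"},
               2: {"name": "gizmo", "category": "electronics"}}
dim_store = {10: {"city": "Sydney"}, 11: {"city": "Melbourne"}}

fact_sales = [
    {"product_key": 1, "store_key": 10, "amount": 100.0},
    {"product_key": 2, "store_key": 10, "amount": 250.0},
    {"product_key": 1, "store_key": 11, "amount": 75.0},
]

# Denormalised view: dimension attributes copied onto each fact row.
flat = [
    {**row,
     "product": dim_product[row["product_key"]]["name"],
     "category": dim_product[row["product_key"]]["category"],
     "city": dim_store[row["store_key"]]["city"]}
    for row in fact_sales
]

# Aggregation over the flat view needs no joins at query time.
sydney_total = sum(r["amount"] for r in flat if r["city"] == "Sydney")
print(sydney_total)
```

The redundancy is visible: "widget"/"hardware" is now stored on two rows, so a change to the product dimension must be propagated everywhere — exactly the flexibility cost the talk weighs against the query-time benefit.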
