Fraud and Anomaly Detection

2020-09-18 | Categories: Level 2, Data Science Curriculum Electives, Fraud and Security, R, Dr Eugene Dubossarsky, Financial Risk, All Academy Courses

This course presents statistical, computational and machine-learning techniques for the predictive detection of fraud and security breaches. These methods are shown in the context of practical use cases, and include the extraction of business rules and a framework for the interoperation of human, rule-based, predictive and outlier-detection methods. The techniques covered include predictive tools that do not rely on explicit fraud labels, as well as a range of outlier-detection approaches: unsupervised learning methods (notably the powerful random-forest algorithm, which can be used in both supervised and unsupervised applications), cluster analysis, visualisation, and fraud detection based on Benford's law. The course also covers the analysis and visualisation of social-network data. A basic knowledge of R and predictive analytics is advantageous.
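As a flavour of the kind of technique covered, here is a minimal, illustrative sketch (not course material) of a Benford's-law first-digit check in plain Python; the example amounts are invented:

```python
from collections import Counter
from math import log10

def benford_expected(d):
    """Expected first-digit probability under Benford's law: log10(1 + 1/d)."""
    return log10(1 + 1 / d)

def first_digit_distribution(amounts):
    """Observed first-digit frequencies for a list of positive amounts."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts]
    counts = Counter(digits)
    total = len(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Hypothetical ledger amounts; in practice one compares observed vs expected
# frequencies (e.g. with a chi-squared statistic) and flags large deviations.
amounts = [123.45, 1890.0, 20.5, 345.0, 167.2, 98.1, 1543.9, 11.0]
observed = first_digit_distribution(amounts)
expected = {d: benford_expected(d) for d in range(1, 10)}
```

Real fraud screening would apply this to thousands of transactions, since Benford's law only holds in aggregate.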

Stars, Flakes, Vaults and the Sins of Denormalisation

2020-09-18 | Categories: Data Governance Level 2, Innovation & Tech (CTO) Curriculum Electives, Data Governance Curriculum Electives, Innovation & Tech (CTO) Level 2, Stephen Brobst, Data Engineering Curriculum, Data Management, AI Engineering Curriculum, Data Engineering Level 1, AI Engineering Level 1, All Academy Courses

Performance and flexibility are often seen as contradictory goals in designing large-scale data implementations. In this talk we will discuss techniques for denormalisation and provide a framework for understanding the performance and flexibility implications of various design options. We will examine a variety of logical and physical design approaches and evaluate the trade-offs between them. Specific recommendations are made for guiding the translation from a normalised logical data model to an engineered-for-performance physical data model. The role of dimensional modelling and various physical design approaches is discussed in detail, along with best practices in the use of surrogate keys. The focus is on understanding the benefit (or lack thereof) of denormalisation approaches commonly taken in analytic database designs.
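To make the denormalisation trade-off concrete, here is a toy, illustrative sketch (not from the talk) of a star-schema fact table joined to its dimensions via integer surrogate keys, using plain Python dicts in place of database tables; all table and column names are hypothetical:

```python
# Hypothetical mini star schema: dimension tables keyed by surrogate key.
dim_product = {1: {"name": "widget", "category": "hardware"},
               2: {"name": "gizmo",  "category": "hardware"}}
dim_date = {10: {"day": "2020-09-18", "quarter": "Q3"}}

# Normalised fact table: narrow rows referencing dimensions by surrogate key.
fact_sales = [
    {"product_sk": 1, "date_sk": 10, "amount": 99.0},
    {"product_sk": 2, "date_sk": 10, "amount": 45.0},
]

def denormalise(facts, products, dates):
    """Pre-join facts to their dimensions: wider rows, more storage,
    but queries no longer pay the join cost at read time."""
    return [{**f,
             "product": products[f["product_sk"]]["name"],
             "quarter": dates[f["date_sk"]]["quarter"]}
            for f in facts]

wide = denormalise(fact_sales, dim_product, dim_date)
```

The flexibility cost is the other side of the trade: once `quarter` is copied into every fact row, a change to the date dimension must be propagated everywhere.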

Best Practices in Enterprise Information Management

2019-10-24 | Categories: Data Culture Level 1, Data Culture Curriculum, Innovation & Tech (CTO) Curriculum Electives, Data Governance Curriculum, Fraud and Security, Stephen Brobst, Executive Curriculum, Data Engineering Curriculum, Data Governance Level 1, Data Management, Executive Level 2, Big Data, Data Engineering Level 1, All Academy Courses, Innovation & Tech (CTO) Level 3

The effective management of enterprise information for analytics deployment requires best practices in the areas of people, processes, and technology. In this talk we will share both successful and unsuccessful practices in these areas. The scope of this workshop will involve five key areas of enterprise information management: (1) metadata management, (2) data quality management, (3) data security and privacy, (4) master data management, and (5) data integration.

Agile Insights

2019-10-25 | Categories: AI Engineering Curriculum Electives, Data Culture Electives, Data Governance Curriculum, Introductory, Executive Curriculum, Innovation & Tech (CTO) Curriculum, Alexander Heidl, All Academy Courses

This course presents a process and methods for agile analytics delivery. Agile Insights reflects the capabilities any organisation needs to develop insights from data and validate their potential business value. The content describes the process, how it is executed, and how it can be deployed as a standard process inside an organisation. The course will also share best practices, highlight potential tripwires to watch out for, and outline the roles and resources required.

Deep Learning and AI

2020-09-18 | Categories: Keras, Tensorflow, Level 2, Data Science Curriculum, Python, Data Engineering Curriculum, Dr Eugene Dubossarsky, All Academy Courses

This course is an introduction to the highly celebrated area of neural networks, popularised as "deep learning" and "AI". The course will cover the key concepts underlying neural-network technology, as well as the unique capabilities of a number of advanced deep-learning architectures, including convolutional neural networks for image recognition, recurrent neural networks for time-series and text modelling, and newer artificial-intelligence techniques including generative adversarial networks and reinforcement learning. Practical exercises will present these methods in some of the most popular deep-learning packages available in Python, including Keras and TensorFlow. Trainees are expected to be familiar with the basics of machine learning from the Fundamentals course, as well as with the Python language.
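The core building block behind all of these architectures is the dense layer. As a minimal illustration (a stdlib-only sketch, not the Keras/TensorFlow code used in the course), here is a forward pass through a tiny two-layer network; the weights are arbitrary made-up values:

```python
from math import exp

def relu(x):
    """Rectified linear unit, the most common hidden-layer activation."""
    return max(0.0, x)

def sigmoid(x):
    """Squashes a score into (0, 1); common for binary outputs."""
    return 1.0 / (1.0 + exp(-x))

def dense(inputs, weights, biases, activation):
    """One fully connected layer: y_j = activation(sum_i x_i * w_ji + b_j).
    `weights` is one row of input weights per output neuron."""
    return [activation(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Forward pass on a single input vector through hidden and output layers.
x = [1.0, -2.0]
hidden = dense(x, weights=[[0.5, 0.25], [-0.5, 0.1]], biases=[0.0, 0.0],
               activation=relu)
output = dense(hidden, weights=[[1.0, 1.0]], biases=[0.0],
               activation=sigmoid)
```

Frameworks like Keras add the missing half, backpropagation, which adjusts the weights from data rather than hard-coding them.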

Text and Language Analytics

2020-03-16 | Categories: AI Engineering Curriculum Electives, Level 2, Data Science Curriculum Electives, R, R Electives, Dr Eugene Dubossarsky, All Academy Courses

Text analytics is a crucial skill set in nearly all contexts where data science has an impact, whether that be customer analytics, fraud detection, automation or fintech. In this course, you will learn a toolbox of skills and techniques, starting from effective data preparation and stretching right through to advanced modelling with deep-learning and neural-network approaches such as word2vec.
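At the data-preparation end of that toolbox sits tokenisation and the bag-of-words representation. A minimal, illustrative sketch in stdlib Python (not course material; the example documents are invented):

```python
import re
from collections import Counter

def tokenise(text):
    """Lowercase word tokens; a minimal stand-in for real NLP preprocessing
    (which would also handle stemming, stop words, punctuation, etc.)."""
    return re.findall(r"[a-z']+", text.lower())

def bag_of_words(docs):
    """One term-frequency Counter per document."""
    return [Counter(tokenise(d)) for d in docs]

docs = ["The claim was flagged as fraud.",
        "No fraud found in this claim, the claim is clean."]
bags = bag_of_words(docs)
```

Approaches like word2vec replace these sparse counts with dense vectors learned from context, which is where the deep-learning part of the course picks up.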

Forecasting and Trend Analysis

2020-09-23 | Categories: AI Engineering Curriculum Electives, Data Science Curriculum Electives, R, Dr Eugene Dubossarsky, All Academy Courses

This course is an intuitive introduction to forecasting and analysis of time-series data. We will review a range of standard forecasting methods, including ARIMA and exponential smoothing, along with standard means of measuring forecast error and benchmarking with naive forecasts, and standard pre-processing/de-trending methods such as differencing and missing value imputation. Other topics will include trend/seasonality/noise decomposition, autocorrelation, visualisation of time series, and forecasting with uncertainty.
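Three of the ideas above — differencing, exponential smoothing, and benchmarking against a naive forecast — can be sketched in a few lines. This is an illustrative stdlib-only toy (the course itself uses R), with a made-up series:

```python
def difference(series, lag=1):
    """First-order differencing: removes a linear trend by modelling changes."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    running average that weights recent observations more heavily."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def mae(actual, forecast):
    """Mean absolute error, a standard measure of forecast accuracy."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

series = [10.0, 12.0, 11.0, 13.0, 12.0]
diffed = difference(series)
forecast = ses_forecast(series)
# Naive benchmark: predict each value with the previous one.
naive_error = mae(series[1:], series[:-1])
```

A forecasting method only earns its keep if its error beats `naive_error`; ARIMA and the other methods in the course generalise exactly these ingredients.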

Advanced Python 1

2019-10-18 | Categories: Level 2, Data Science Curriculum, Python, Dr Eugene Dubossarsky, AI Engineering Curriculum, All Academy Courses

This class builds on the introductory Python class. It covers advanced use and customisation of Jupyter Notebook, including configuring multiple environments and kernels. The NumPy package is introduced for working with arrays and matrices, and deeper coverage of pandas data-analysis and manipulation methods is provided, including working with time-series data. Data exploration and advanced visualisations are taught using the Plotly and Seaborn libraries.

Advanced Python 2

2019-10-18 | Categories: Data Science Curriculum Electives, Python, Level 3, Dr Eugene Dubossarsky, AI Engineering Curriculum, All Academy Courses

This class builds on the introductory Python class. It covers advanced use and customisation of Jupyter Notebook, including configuring multiple environments and kernels. The NumPy package is introduced for working with arrays and matrices, and deeper coverage of pandas data-analysis and manipulation methods is provided, including working with time-series data. Data exploration and advanced visualisations are taught using the Plotly and Seaborn libraries.

Advanced R 1

2020-07-10 | Categories: Level 2, Data Science Curriculum, tidyverse, Shiny, R, Dr Eugene Dubossarsky, AI Engineering Curriculum, All Academy Courses

This class builds on “Intro to R (+data visualisation)” by providing students with powerful, modern R tools including pipes, the tidyverse, and many other packages that make coding for data analysis easier, more intuitive and more readable. The course will also provide a deeper view of functional programming in R, which also allows cleaner and more powerful coding, as well as R Markdown, R Notebooks, and the shiny package for interactive documentation, browser-based dashboards and GUIs for R code.
