R Training – the R Academy of eoda
The R Academy of eoda is a modular course program for the statistical language R, with regular events and training sessions. Our course instructors have more than ten years of experience in data analysis.
The course concept is designed to train you to become an R expert. Depending on your needs and interests, you can choose from a variety of course modules. There is no strictly hierarchical structure; the modules can be combined individually.
Our R training courses at universities, graduate centers, and companies are regularly evaluated and consistently rated very well.
First steps in R
- Structure of R, CRAN mirrors, different environments/editors for R, usage of the internal help functions, internet-based help sources
- The basic concept and philosophy of R
- Programming language, object orientation in R, functions
- Types of variables
- Vectors, data frames, lists, …
- Data import
- .txt, .csv, .xls and .sav files, internet sources, …
- Data management
- Assigning variable attributes, creating variables, conditional transformations, selecting/filtering cases and variables
- Basic data analysis
- First descriptive statistics, e.g. means, deviations and other parameters, simple tables and graphics
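The first steps above can be sketched in a few lines of base R. This is a minimal illustrative example (the data frame and its values are made up for the illustration):

```r
# A few first steps in R: create a data frame, inspect its structure,
# and compute simple descriptive statistics.
measurements <- data.frame(
  group = c("A", "A", "B", "B"),   # a categorical variable
  value = c(4.1, 3.9, 5.2, 5.0)    # a numeric variable
)

str(measurements)                          # inspect the object
mean_value <- mean(measurements$value)     # overall mean
sd_value   <- sd(measurements$value)       # standard deviation
by_group   <- tapply(measurements$value,   # group-wise means
                     measurements$group, mean)
```

`tapply()` is one of several ways to compute grouped statistics; `aggregate()` would work as well.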
Data Mining denotes a set of methods for extracting knowledge from datasets without presumptions about the data structure. Statistical and mathematical techniques are applied to the data to expose inherent patterns. Generally, the methods do not require a high level of measurement (categorical, ordinal, or metric scale), yet they can reveal complex non-linear relationships in the data. Typical applications of Data Mining methods are forecasting models, market basket analysis, target group analysis, and more.
Methods which are part of the course:
- Regression and Classification Trees
- Random Forest
- Artificial Neural Networks
- Support Vector Machines
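As a flavor of the first method on the list, here is a minimal sketch of fitting a classification tree with the rpart package (a "recommended" package shipped with most R installations) on the built-in iris data set; the data set choice is ours, not part of the course description:

```r
# Illustrative sketch: a classification tree on the iris data.
library(rpart)

tree <- rpart(Species ~ ., data = iris, method = "class")  # fit the tree
pred <- predict(tree, iris, type = "class")                # predict classes
accuracy <- mean(pred == iris$Species)                     # in-sample accuracy
```

Random forests, neural networks, and support vector machines follow the same fit/predict pattern via packages such as randomForest, nnet, and e1071.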
With the help of hypothesis testing, the course investigates whether there are differences or relationships between variables and whether these are random or systematic. Depending on the data format, different testing methods are used. This course presents the main methods.
- Factor Analysis
- Cluster Analysis
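Two of the techniques listed can be sketched with base R alone; the simulated data and the choice of the iris data set are ours, for illustration only:

```r
# Illustrative sketch: a two-sample t-test and a simple cluster analysis.
set.seed(42)
x <- rnorm(30, mean = 0)   # simulated group 1
y <- rnorm(30, mean = 1)   # simulated group 2

test    <- t.test(x, y)    # are the group means systematically different?
p_value <- test$p.value

km <- kmeans(iris[, 1:4], centers = 3)  # k-means cluster analysis
```

Factor analysis is available in base R as well, e.g. via `factanal()`.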
Introduction to time series methods
Foundations, seasonality, creating time series objects
• Visualization of time series
Trend, seasonal and random effects; calculation of seasonally adjusted values
• Test methods
Stationarity and autocorrelation
• Exponential smoothing
Modeling with Holt-Winters, ETS and STL
• ARIMA models
Achieving stationarity through differencing; definition of AR and MA terms; modeling of seasonal and non-seasonal models; outlier treatment
• Introduction to event history analysis
Basics of creating survival objects
• Kaplan-Meier model
Cumulative hazard curves, log-rank test
• Cox regression
Modeling, model checking, interpretation of the coefficients
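The decomposition and exponential-smoothing topics above can be previewed with base R; the AirPassengers data set is our choice of example, not part of the course description:

```r
# Illustrative sketch: decompose a built-in time series and fit a
# Holt-Winters exponential smoothing model.
data("AirPassengers")

parts <- decompose(AirPassengers)     # trend, seasonal and random components
hw    <- HoltWinters(AirPassengers)   # exponential smoothing model
fc    <- predict(hw, n.ahead = 12)    # forecast one year ahead
```

ARIMA models follow the same workflow via `arima()`; survival objects come from the survival package.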
An overview of R Graphics
- Functions for producing standard plots
- ggplot2 and lattice
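A standard base-R plot looks like this minimal sketch (the mtcars data set is our choice for illustration); ggplot2 and lattice express the same idea with a different grammar:

```r
# Illustrative sketch: a standard scatter plot with a fitted trend line.
h <- hist(mtcars$mpg, breaks = 10, plot = FALSE)  # histogram bins, no drawing

plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight", ylab = "Miles per gallon",
     main = "Fuel economy vs. weight")
abline(lm(mpg ~ wt, data = mtcars), col = "red")  # add regression line
```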
Interactive graphics are a flexible and efficient way to analyze data and present analysis results. Interactive graphic applications offer queries, selections, highlighting, and the modification of graphics parameters. In the R environment, various concepts make it possible to create interactive graphics and applications directly from R (e.g. iplots or shiny). The course presents an overview of the creation of interactive graphics with R and provides the tools to independently implement interactive visualizations in R.
As a discipline of Data Mining, Text Mining comprises algorithm-based analysis methods for the detection of structures and information in texts using statistical and linguistic tools. One application is Web Mining, which can identify trends and customer requirements on websites and social media platforms. Text Mining is also used to forecast price trends and stock prices on the basis of news reports.
• Overview of Text Mining
• Import of unstructured data, Web Scraping
• Structuring of texts (pruning, tokenization, sentence splitting, normalization, stemming, N-grams)
• Simple content analysis and association analysis
• Classification of documents with different methods (Support Vector Machines, Generalized Linear Models, Maximum Entropy, Supervised Latent Dirichlet Allocation, Boosting, Bootstrap Aggregating, Random Forests, Neural Networks, Regression Trees)
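The structuring steps (normalization, tokenization, simple content analysis) can be sketched with base R alone; the two example sentences are made up for the illustration:

```r
# Illustrative sketch: normalization, tokenization and a simple
# term-frequency table with base R.
docs <- c("R makes text mining simple",
          "Text mining with R is simple")

tokens <- unlist(strsplit(tolower(docs), "\\s+"))  # normalize + tokenize
freq   <- sort(table(tokens), decreasing = TRUE)   # term frequencies
```

Dedicated packages such as tm build the same pipeline on corpus and document-term-matrix objects.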
Statistical control of incoming and outgoing goods in production generates the key figures needed to rate the quality of goods and products. Processing quality controls systematically requires methodological knowledge of statistics as well as the right software. The open-source statistical language R is an interesting alternative.
The course conveys the basic knowledge of R needed to manage previously processed statistical data. The concepts of statistical testing are first introduced theoretically before they are applied practically in R. Furthermore, AQL standard values according to ISO 2859 and DIN ISO 3951 are discussed, and their modes of operation and application are presented with practical examples. The application of the methods in R covers the most important functions in the area of statistical testing and the development of quality control plans. Essential contents from the area of inferential statistics include:
- How can the optimal size of a random sample be determined?
- How can a decision for a specific testing method be made?
- How can operating figures be interpreted?
- Which degree of safety does the result of the random sample contain?
- How can the risks of suppliers and customers be balanced?
- Which discrepancies are acceptable?
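The questions above can be made concrete with a small sketch of an attribute sampling plan in base R. The plan parameters (sample size 80, acceptance number 2) are hypothetical values chosen for the illustration, not standard values from ISO 2859:

```r
# Illustrative sketch: the operating characteristic of a simple
# attribute sampling plan. With sample size n and acceptance number c,
# the probability of accepting a lot with defect proportion p is
# P(X <= c) for X ~ Binomial(n, p).
n_sample <- 80   # hypothetical sample size
acc      <- 2    # hypothetical acceptance number

p_accept <- function(p) pbinom(acc, size = n_sample, prob = p)

p_accept(0.01)   # acceptance probability for a good lot
p_accept(0.10)   # acceptance probability for a bad lot
```

Plotting `p_accept` over a grid of defect proportions yields the operating characteristic curve, which shows the supplier's and customer's risks directly.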
- Loops and conditionals
- "apply" functions
- Writing own functions
- The S3 class system
- Integration of other programming languages and operating systems
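Three of the topics above (an "apply" function, writing your own function, and the S3 class system) fit into one minimal sketch; the `course` class and its fields are made up for the illustration:

```r
# Illustrative sketch: apply-style iteration, a user-defined function,
# and a minimal S3 class with its own print method.
squares <- sapply(1:5, function(x) x^2)   # "apply" instead of a loop

make_course <- function(title, hours) {
  obj <- list(title = title, hours = hours)
  class(obj) <- "course"                  # tag the object with an S3 class
  obj
}

print.course <- function(x, ...) {        # method picked up by dispatch
  cat(x$title, "-", x$hours, "hours\n")
}

crs <- make_course("First steps in R", 8)
print(crs)                                # dispatches to print.course
</imports>
```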
The combination of extensive statistics libraries and well-founded programming concepts makes R a powerful programming language for all tasks related to Data Mining, Predictive Analytics, and more.
This advanced course is designed to deepen the participants' programming knowledge. Its goal is to enable participants to program faster, more broadly, and at a higher level of quality in R, ensuring high-quality programming solutions.
The following topics will be treated in the course:
- Calling, evaluation, parsing
- Exceptions / Error Handling
tryCatch, debug, browser, traceback
- Performance Optimization
profiling, memory management, data.table, parallel processing (ff, foreach, plyr)
- Class Structures in R
class systems (S3 and S4), reference classes
- Package Development
package file structure, documentation, testing, namespaces
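The error-handling topic can be previewed with a minimal base-R sketch; the `safe_log` helper is made up for the illustration:

```r
# Illustrative sketch: structured error handling with tryCatch.
safe_log <- function(x) {
  tryCatch(
    {
      if (!is.numeric(x)) stop("x must be numeric")
      log(x)
    },
    error   = function(e) NA_real_,   # fall back to NA on error
    warning = function(w) NA_real_    # e.g. log of a negative number
  )
}

safe_log(exp(1))   # regular result
safe_log("text")   # handled error
safe_log(-1)       # handled warning (log of a negative number)
```

`debug()`, `browser()` and `traceback()` complement this for interactive inspection.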
Various initiatives have developed concepts to cope with Big Data. For example, different parsers and packages have been developed to facilitate the handling of Big Data in R. The course gives an introduction to the following aspects:
- Connection to data sources such as databases or file systems like Hadoop
- Linking to cloud environments like Windows Azure or Amazon Web Services
- Chunking – partitioning of data into subsets
- Parallelization of jobs for calculation
- Overview of different parser concepts (Revolution Analytics, Oracle R Enterprise, Renjin, …)
- Visualization of Big Data
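The chunking idea from the list above can be sketched in plain R: process the data piece by piece, keep only small partial results, and combine them at the end. The data and chunk size here are arbitrary illustration values:

```r
# Illustrative sketch of chunking: compute a mean over data that is
# processed in pieces instead of all at once.
set.seed(1)
x      <- runif(1e5)
chunks <- split(x, ceiling(seq_along(x) / 1e4))   # ten chunks of 10,000

# Map each chunk to a small summary, then combine the summaries.
partial <- lapply(chunks, function(ch) c(sum = sum(ch), n = length(ch)))
totals  <- Reduce(`+`, partial)
chunked_mean <- totals[["sum"]] / totals[["n"]]
```

The same pattern scales to chunks read one at a time from disk or a database, so the full data never has to fit in memory.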
Data in distributed systems require different analysis methods than non-distributed data. The principle of MapReduce is to divide a problem into small tasks, each of which can be solved on a small part of the data. A typical example, for data stored in a Hadoop system, is counting words in text files. Conventional techniques work through the whole text en bloc, which can be very time-consuming. MapReduce distributes the text across nodes in small blocks, and the reduce step reunites the partial results. Even complex search, comparison, and analysis operations can be parallelized this way and therefore calculated faster. The course teaches the development of scripts for MapReduce jobs with concrete examples.
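The word-count example can be mimicked in plain R with the base functions `Map()` and `Reduce()`; the text blocks are made up, and this is of course only the single-machine idea, not a Hadoop job:

```r
# Illustrative sketch of the MapReduce idea: a map step that counts
# words per text block and a reduce step that merges the partial counts.
blocks <- c("big data needs big tools",
            "r handles big data")

map_step <- Map(function(block) {
  table(strsplit(block, "\\s+")[[1]])     # word counts per block
}, blocks)

reduce_step <- Reduce(function(a, b) {    # merge two count vectors
  words <- union(names(a), names(b))
  sapply(words, function(w) sum(a[w], b[w], na.rm = TRUE))
}, map_step)
```

On a real cluster, packages such as rmr2 send the map and reduce functions to the nodes, but the structure of the job is the same.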
The analysis of statistical data generates reports with various elements such as text, data, formulas, tables, and graphics. Interfaces between R and LaTeX/HTML bring these contents together and create a clear output ready for presentation. In addition, R can update the reports dynamically on the basis of new data. With the approach known as Reproducible Research, report items are updated without any manual adjustments. After completing the course, participants will be able to create customized and automated reports.
Contents of the course:
• The user interface RStudio
• The packages "Sweave" and "knitr"
• Short introduction to LaTeX, Markdown and HTML
• Formatting the R output with chunk options
• Creating static report templates in various output formats such as PDF and HTML
• Dynamic reports and automated adjustments
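A dynamic report of the kind described boils down to a template in which R code is embedded in the text; a minimal hypothetical R Markdown sketch (rendered with knitr) might look like this:

```markdown
---
title: "Automated report"
output: pdf_document
---

Number of observations: `r nrow(mtcars)`

The mean fuel economy is `r round(mean(mtcars$mpg), 1)` miles per gallon.
```

When the underlying data changes, re-rendering the template updates every figure in the report without manual adjustments.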
The combination of theoretical introductions, specific cases, and practical exercises ensures the success of learning.
Onsite R Training
As an alternative to the R Academy, we offer our trainings on site. The in-house training can be individually assembled and aligned to your needs. On request, we also offer our trainings in English. Please contact us for an offer.
Tel. +49(0) 561 202724 40
Fax +49(0) 561 202724 30
© eoda 2014