R Training – the R Academy of eoda
The R Academy of eoda is a modular course program for the statistical programming language R, with regular events and training sessions. Our course instructors have more than ten years of experience in data analysis.
The course concept is designed to train you to become an R expert. Depending on your needs and interests, you can choose from a variety of course modules. The modules are not strictly hierarchical and can be combined individually.
Our R trainings at universities, graduate centers and companies are evaluated regularly and consistently receive very good ratings.
The advantages of the R Academy
 courses in small groups of up to eight participants
 high application orientation through experienced trainers with practical backgrounds
 sufficient exercise phases in which what has been learned can be applied directly
 high-quality course materials and standard data sets
 optional R support following the R Academy training
 R Academy Individual: customized training at your site

First steps in R
 Structure of R, CRAN mirrors, different environments/editors for R, usage of the internal help functions, internet-based help sources
 The basic concept and philosophy of R
 Programming language, object orientation in R, functions
 Types of variables
 Vectors, data frames, lists, …
 Import Data
 .txt, .csv, .xls, .sav files, internet sources …
 Data management
 Assigning variable attributes, creating variables, conditional transformations, selecting/filtering cases and variables
 Basic data analysis
 First descriptive statistics, e.g. means, deviations and other parameters, simple tables and graphics
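The first steps above can be sketched in a few lines of base R. This is a minimal illustration with a small hand-made data frame (the values are hypothetical); importing a real file would use read.csv() and friends analogously.

```r
# a small example dataset standing in for an imported file
survey <- data.frame(
  age   = c(23, 35, 41, 29, 52, 35),
  group = c("A", "B", "A", "B", "A", "B")
)

# data management: create a new variable via a conditional transformation
survey$senior <- ifelse(survey$age >= 40, "yes", "no")

# selecting / filtering cases
group_a <- survey[survey$group == "A", ]

# first descriptive statistics
mean(survey$age)      # arithmetic mean
sd(survey$age)        # standard deviation
table(survey$group)   # simple frequency table
```

Each of these building blocks (data frames, conditional transformations, subsetting, descriptive functions) is covered in more depth in the course.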
Data Mining refers to a set of methods for extracting knowledge from datasets without prior assumptions about the data structure. Statistical and mathematical techniques are applied to the data to expose inherent patterns. In general, the methods do not require a high level of measurement (categorical, ordinal or metric scale), yet they are capable of revealing complex nonlinear relations in the data. Typical applications of Data Mining methods are forecast models, market basket analysis, target group analysis and more.
Methods which are part of the course:
 Regression and Classification Trees
 Random Forest
 Artificial Neural Networks
 Support Vector Machines
 k-Means Clustering
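As a minimal sketch of one of the listed methods, k-means clustering can be run on the built-in iris dataset with base R alone (the tree-based and neural-network methods require additional packages such as rpart or randomForest):

```r
set.seed(42)  # k-means starts from random centers, so fix the seed
km <- kmeans(iris[, 1:4], centers = 3, nstart = 10)

km$size                           # sizes of the three clusters
table(km$cluster, iris$Species)   # compare found clusters to known species
```

The cross-table shows how well the unsupervised clusters recover the known species labels, a typical first validation step.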

 Regression Analysis
Model, interpretation, possible problems
 Factor Analysis
Starting point, suitability, number of factors, number of extracting dimensions
 Cluster Analysis
Starting point and theory, different distance measures, interpretation, visualization
 Confirmatory factor analysis
 Multidimensional Scaling
 Shapley Value Regression
 Discriminant Analysis
 Bootstrapping
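A minimal sketch of one of the methods above, a cluster analysis with a chosen distance measure, needs only base R:

```r
# hierarchical cluster analysis on the built-in mtcars data
d   <- dist(scale(mtcars), method = "euclidean")  # distance matrix on scaled data
hc  <- hclust(d, method = "ward.D2")              # agglomerative clustering
grp <- cutree(hc, k = 3)                          # cut the tree into 3 clusters

table(grp)     # cluster sizes
# plot(hc)     # dendrogram for visual interpretation
```

Swapping the `method` arguments of dist() and hclust() is exactly the "different distance measures" topic of the course.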
Introduction to time series methods
Foundations, seasonality, creating time series objects
• visualization of time series
• decomposition
Trend, seasonal and random effects; calculation of seasonally adjusted values
• testing methods
Stationarity and autocorrelation
• exponential smoothing
Modeling with Holt-Winters, ETS and STL
• ARIMA models
Achieving stationarity through differencing; definition of AR and MA terms; modeling
• forecasting
Seasonal and non-seasonal models; outlier treatment
• introduction to event history analysis
Basics, creating survival objects
• Kaplan-Meier model
Cumulative hazard curves, log-rank test
• Cox regression
Modeling, model checking, interpretation of the coefficients
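The core of the time series workflow above can be sketched with the built-in AirPassengers series and base R: decomposition, Holt-Winters exponential smoothing and a forecast.

```r
data(AirPassengers)

dec <- decompose(AirPassengers)     # trend, seasonal and random effects
hw  <- HoltWinters(AirPassengers)   # Holt-Winters exponential smoothing
fc  <- predict(hw, n.ahead = 12)    # forecast the next 12 months

# seasonally adjusted values: remove the estimated seasonal component
adjusted <- AirPassengers - dec$seasonal
```

ARIMA modeling follows the same pattern with arima() after differencing the series to stationarity.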
Survival models are used to estimate the time span until a specific event occurs. Possible application areas are, for example, the prognosis of machine breakdowns or the onset of diseases. The usage of survival analyses is taught on the basis of practical examples. At the end of the course, every attendee should be able to apply the content for their own purposes. For best results, we recommend participating in time series analysis I first.
The following methods are part of the content:
 Introduction to the fundamental terms of survival analysis
Episodes & censoring, survivor functions, hazard rate
 Introduction to survival analysis in R
The survival package
 Kaplan-Meier Estimator
Basic concept, Visualization, tabulation, group comparison, significance test
 Cox Proportional Hazards Model
Requirements and assumptions, model specification, the function coxph(), the ties argument, interpretation of the results
 Time-varying variables & splitting of episodes
The function survSplit()
 Cox regression
Implementation in R, comparison of models, likelihood ratio test, information criteria (BIC/AIC), estimated values
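The course topics above map directly onto the survival package that ships with R. A minimal sketch on its bundled lung dataset:

```r
library(survival)

# Kaplan-Meier estimator, stratified by sex, with a log-rank group comparison
km  <- survfit(Surv(time, status) ~ sex, data = lung)
sdf <- survdiff(Surv(time, status) ~ sex, data = lung)

# Cox proportional hazards model
cox <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(cox)   # hazard ratios are exp(coef)
```

plot(km) would draw the survival curves; survSplit() handles the episode splitting mentioned above.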
The assessment of advertising material and its efficiency is still one of the major challenges in marketing. The course focuses on the analysis of information from web tracking.

Overview
 Graphic Packages
base, grid, ggplot2, lattice, plot
 High-Level Graphic Elements
Bar Charts, Point Charts, Pie Charts, Histograms, Density Plots, Scatterplots
 Low-Level Graphic Elements
Geoms, Stats, Coord, Facet, Opts
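A minimal sketch of these grammar-of-graphics building blocks, assuming the ggplot2 package is installed: a geom, a stat and a facet specification combined into one plot object.

```r
library(ggplot2)

p <- ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +                             # geom: scatterplot layer
  stat_smooth(method = "lm", se = FALSE) +   # stat: fitted linear trend
  facet_wrap(~ cyl)                          # facet: one panel per cylinder count
# print(p) renders the plot
```

The same chart could be built with base graphics (plot, abline) or lattice (xyplot); the course compares these approaches.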
Interactive graphics are a flexible and efficient way to analyze data and to present analysis results. Interactive graphic applications offer queries, selections, highlighting or the modification of graphics parameters. In the R environment, there are various concepts for creating interactive graphics and applications directly out of R (e.g. iplots or shiny). The course presents an overview of the creation of interactive graphics with R and provides the tools to independently implement interactive visualizations in R.
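As a minimal sketch of such an application, assuming the shiny package is installed: a slider interactively controls the number of histogram bins.

```r
library(shiny)

ui <- fluidPage(
  sliderInput("bins", "Number of bins:", min = 5, max = 50, value = 20),
  plotOutput("hist")
)

server <- function(input, output) {
  # re-renders automatically whenever the slider value changes
  output$hist <- renderPlot(hist(faithful$eruptions, breaks = input$bins))
}

# shinyApp(ui, server)   # launches the interactive application in a browser
```

The reactive server function is the core concept: outputs that reference inputs are recomputed on every user interaction.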
As a discipline of Data Mining, Text Mining comprises algorithm-based analysis methods for the detection of structures and information in texts by using statistical and linguistic analysis tools. One application example is web mining, which can identify trends and customer requirements on websites and social media platforms. Text Mining is also used to forecast price trends and stock prices on the basis of news reports.
The course focuses on the application of the packages tm, RTextTools and openNLP and covers the following aspects:
• Overview of Text Mining
• Import of unstructured data, Web Scraping
• Structuring of texts (pruning, tokenization, sentence splitting, normalization, stemming, n-grams)
• Simple content analysis and association analysis
• Classification of documents with different methods (Support Vector Machines, Generalized Linear Models, Maximum Entropy, supervised Latent Dirichlet Allocation, Boosting, Bootstrap Aggregating, Random Forests, Neural Networks, Regression Trees)
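The structuring steps above can be sketched in base R alone (the course itself uses the tm package for this, with a richer pipeline):

```r
docs <- c("R makes text mining easy", "Text mining with R is fun")

normalized <- tolower(docs)                    # normalization
tokens     <- strsplit(normalized, "[^a-z]+")  # simple tokenization

# simple content analysis: term frequencies over the whole corpus
freq <- sort(table(unlist(tokens)), decreasing = TRUE)
head(freq)
```

A term-document matrix, the starting point for the classification methods listed above, is this same counting done per document.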
The statistical control of incoming and outgoing goods in production generates key figures that are necessary to rate the quality of goods and products. Processing quality controls systematically requires methodical knowledge of statistics as well as the right software. The open source statistical language R represents an interesting alternative.
The course conveys basic knowledge of R that can be used to manage previously processed statistical data. Before the concepts of statistical testing are applied practically in R, they are introduced theoretically. Furthermore, AQL standard values according to ISO 2859 and DIN ISO 3951 are discussed, and their modes of operation and application are presented with reference to practical use cases. The application of the methods in R covers the most important functions in the area of statistical testing and the development of quality control plans. Essential contents from the area of inferential statistics include:
 How can the optimal size of a random sample be determined?
 How can a decision for a specific testing method be made?
 How can operating figures be interpreted?
 What degree of confidence does the result of the random sample provide?
 How can the risks of suppliers and customers be balanced?
 Which discrepancies are acceptable?
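Several of these questions come down to the operating characteristic of a sampling plan. A minimal base-R sketch (the plan n = 80, acceptance number 2 is a hypothetical example, not an ISO 2859 table value):

```r
n   <- 80   # sample size drawn from the lot
acc <- 2    # acceptance number: accept the lot if at most acc defectives found

# probability of accepting a lot with true defect proportion p
accept_prob <- function(p) pbinom(acc, size = n, prob = p)

accept_prob(0.01)   # low defect rate: high acceptance probability
accept_prob(0.10)   # high defect rate: low acceptance probability
# curve(accept_prob(x), 0, 0.15)   # the operating characteristic (OC) curve
```

The supplier's risk is 1 - accept_prob at the acceptable quality level; the customer's risk is accept_prob at the rejectable quality level.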
 loops and conditionals
 "apply" functions
 Writing own functions
 The S3 class system
 Parallelization
 Integration of other programming languages and operating systems
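A minimal sketch tying the first three topics together: writing an own function with a conditional, then applying it column-wise instead of writing an explicit loop.

```r
describe <- function(x) {
  if (!is.numeric(x)) stop("x must be numeric")   # conditional + error handling
  c(mean = mean(x), sd = sd(x))                   # named result vector
}

# "apply" family instead of a for loop: run describe() on selected columns
result <- sapply(mtcars[, c("mpg", "hp")], describe)
result
```

The same pattern generalizes to apply(), lapply() and vapply(); the parallel versions (e.g. parLapply) are covered under the parallelization topic.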
 Recoding of variables
 Data Aggregation
 Forming and analyzing subsets of data and groups
 group-wise data operations (split-apply-combine)
 merging and sorting data
 data transformation
 comparing data
 identifying and removing duplicates
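The core data management operations above can be sketched in base R:

```r
# split-apply-combine: mean mpg per number of cylinders
agg <- aggregate(mpg ~ cyl, data = mtcars, FUN = mean)

# merging two datasets on a key column
labels <- data.frame(cyl = c(4, 6, 8), label = c("small", "medium", "large"))
merged <- merge(agg, labels, by = "cyl")

# identifying and removing duplicates
x <- c(1, 2, 2, 3)
x[!duplicated(x)]
```

The course also covers packages that streamline these patterns beyond the base-R functions shown here.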
The course explains the process from a loose collection of functions to a publishable package.
 Package structure
 Release of packages
 Package documentation
 Namespaces and package dependencies
 testing
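A package as outlined above typically has the following minimal layout (the name mypackage is illustrative):

```
mypackage/
├── DESCRIPTION   # package name, version, author, dependencies
├── NAMESPACE     # exported functions and imports
├── R/            # the function definitions
├── man/          # documentation files (.Rd)
└── tests/        # automated tests
```

Base R's package.skeleton() can generate such a skeleton directly from a loose collection of functions in the workspace.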
The course teaches the key aspects of the use of R in a business environment.
 Update of Packages and R
 Working in a closed environment
 Testing
 Versioning and collaboration
 Documentation and package creation
 R in Server/ClientArchitecture
Various initiatives have developed different concepts to cope with Big Data. For example, different parsers and packages have been developed to facilitate the handling of Big Data in R. The course gives an introduction to the following aspects:
Data in distributed systems require different methods of analysis than non-distributed data do. The principle of MapReduce is to divide problems into small tasks which can be solved on a small part of the data. A typical example for data stored in a Hadoop system is the counting of words in text files. Conventional techniques work through the whole text en bloc, which can be very time-consuming. MapReduce distributes the text in small blocks across single nodes; the reduce step then merges the partial results. In this way even complex search, compare and analysis operations can be parallelized and therefore calculated faster.
The course conveys the development of scripts for MapReduce jobs with concrete examples.
 Connection to data sources such as databases or file systems like Hadoop
 Linking to cloud environments such as Windows Azure or Amazon Web Services
 Chunking – partitioning of data into subparts
 Parallelization of jobs for calculation
 Overview of different concepts (Revolution Analytics, Oracle R Enterprise, Renjin, …)
 Visualization of Big Data
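The MapReduce principle described above can be sketched in plain base R with the word-count example: the map step counts words per text chunk, the reduce step merges the partial counts. Real jobs would of course run distributed, e.g. on a Hadoop cluster.

```r
chunks <- list("big data needs big tools", "r handles big data")

# map: count words within one chunk
map_fn <- function(chunk) table(strsplit(chunk, " ")[[1]])

# reduce: merge two partial count vectors key by key
reduce_fn <- function(a, b) {
  keys <- union(names(a), names(b))
  sapply(keys, function(k) sum(a[k], b[k], na.rm = TRUE))
}

word_counts <- Reduce(reduce_fn, lapply(chunks, map_fn))
word_counts["big"]   # counted across both chunks
```

Because each map call touches only its own chunk, the lapply() step is exactly what a cluster parallelizes across nodes.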
The analysis of statistical data generates reports with various elements such as text, data, formulas, tables, and graphics. Interfaces between R and LaTeX/HTML can bring the various contents together in R and create a clear output that is ready for presentation. In addition, this allows R to customize the reports dynamically on the basis of new data. In the method known under the term Reproducible Research, the report items are updated without any manual adjustments. After completion of the course, the participants should be able to create customized and automated reports.
Contents of the course:
• The user interface RStudio
• The packages "Sweave" and "knitr"
• Short introduction to LaTeX, Markdown and HTML
• Formatting R output with chunk options
• Making static report templates in various output formats such as PDF and HTML
• Dynamic reports and automated adjustments
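A minimal R Markdown sketch of such a dynamic report (the data set `sales` is a hypothetical placeholder); knitr re-evaluates the chunks on every render, so text, numbers and graphics stay up to date without manual adjustments:

````markdown
---
title: "Monthly Report"
output: pdf_document
---

The current average amount is `r mean(sales$amount)`.

```{r trend-plot, echo=FALSE}
# the sales data frame is assumed to be loaded beforehand
plot(sales$month, sales$amount, type = "l")
```
````

Rendering the same template against new data produces an updated report, which is the core idea of Reproducible Research.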
The combination of theoretical introductions, specific cases and practical exercises ensures the success of learning.
Onsite R Training
As an alternative to the R Academy we offer our trainings on site. The in-house training can be individually assembled and aligned with your needs. On request we also offer our trainings in English. Please contact us for an offer.
Universitätsplatz 12
34127 Kassel
Tel. +49 561 20272440
Fax +49 561 20272430
© eoda 2015