Shiny App – Best Practice: Inline Documentation

If you are thinking “I already know that”, these posts might be just the thing for you, as this series is intended for users with pre-existing knowledge of R and Shiny: it assumes that you are already familiar with writing code in R, know the most widely used packages and have already built Shiny apps. With these posts, we provide a framework for organizing and optimizing the development process of your Shiny apps.

In this post we cover the importance of inline documentation and provide a hands-on example. Of course, all of this information can also be found in our free Shiny Development Guide!


Why and How?

Inline documentation explains the code of the app through inline comments. If the code follows well-thought-out coding guidelines, only a small portion of it needs to be covered this way. Inline documentation should be used for complex code chunks or for code that was implemented to fix a specific problem whose purpose is not obvious afterwards. This kind of documentation allows developers to understand why certain parts of the code are implemented the way they are and what the thought process behind them was. Inline documentation should not duplicate functionality that is already obvious from the code itself.

The following example assumes that the penguins data set lies in a SQL database:

# Good

# flipper_bill_relation has to be calculated inside an additional mutate
# because of dbplyr usage.
# Using avg_bill_length or avg_flipper_length inside the same summarize
# command they were created in to create another column would throw an error.
penguins_raw %>%
  group_by(Species) %>%
  summarize(
    avg_bill_length = mean(`Culmen Length (mm)`),
    avg_flipper_length = mean(`Flipper Length (mm)`)
  ) %>%
  mutate(
    flipper_bill_relation = avg_flipper_length / avg_bill_length - 1
  ) %>%
  # collect first before dropping rows with NAs because drop_na can not
  # be translated to SQL
  collect() %>%
  drop_na()
# Bad

# group data by Species and calculate the bill length and flipper length mean
# as well as the relation of bill length and flipper length mean. Drop rows
# containing NAs.
penguins_raw %>%
  group_by(Species) %>%
  summarize(
    avg_bill_length = mean(`Culmen Length (mm)`),
    avg_flipper_length = mean(`Flipper Length (mm)`)
  ) %>%
  mutate(
    flipper_bill_relation = avg_flipper_length / avg_bill_length - 1
  ) %>%
  collect() %>%
  drop_na()

10 Tidyverse functions that might save your day

In this blog post, we will present 10 Tidyverse functions that are often overlooked by beginners but have proven to be very useful in the right context. We will first describe a problem that we faced in practice in a similar form and then explain how the Tidyverse helps us to solve this problem.

For the preparation and analysis of data in R, the Tidyverse packages have become an industry standard in the last few years. At eoda, we use many features from the Tidyverse to increase the efficiency of our daily work.

As usual, we start by loading the necessary libraries:

library(tidyverse)

1. crossing

Problem:

For the first example, we consider a statistical application. Given two vectors of numerical means and standard deviations, we want to collect all combinations of the occurring values in a data frame.

Solution:

The crossing() function from the tidyr package serves exactly this purpose. It takes an arbitrary number of vectors as input and builds all possible combinations of the occurring values:

means <- c(-1, 0, 1)
standard_deviations <- c(0.1, 0.5, 1)

mean_sd_combinations <- crossing(means, standard_deviations)
mean_sd_combinations

means  standard_deviations
-1     0.1
-1     0.5
-1     1.0
0      0.1
0      0.5
0      1.0
1      0.1
1      0.5
1      1.0

Bonus:

crossing() can take not only vectors but also data frames as input. In this case all combinations of the rows are formed.

This is especially useful if one of the data frames provides “global” information (in the following example population_data), which is valid for all observations, and the second data frame provides “local” information, which differs between observations or groups (in the example group_data).

population_data <- tibble(
  global_feature_1 = "e",
  global_feature_2 = 5
)

population_data

global_feature_1  global_feature_2
e                 5
group_data <- tibble(
  group = 1:3,
  local_feature_1 = c(2, 5, 3),
  local_feature_2 = c(TRUE, FALSE, FALSE)
)

group_data

group  local_feature_1  local_feature_2
1      2                TRUE
2      5                FALSE
3      3                FALSE

As a result of crossing(), we get a single data frame in which each row contains both the global and the group-specific values:

crossing(population_data, group_data)

global_feature_1  global_feature_2  group  local_feature_1  local_feature_2
e                 5                 1      2                TRUE
e                 5                 2      5                FALSE
e                 5                 3      3                FALSE

2. rowwise

Problem:

We stay with the application example from Section 1. For each of the mean–standard deviation combinations, five random values (samples) are to be drawn from a normal distribution with the corresponding parameters and added to the data frame in a new column.

Consequently, we need to act at the row level here: each row of the data frame forms a related unit. The newly generated values of the first row are based solely on the remaining values of the first row.

Another peculiarity is that we add multiple entries per cell, not just a single one. In order for this to be compatible with the structure of a data frame, they must be combined into a list. Consequently, the new column is a list column – a column consisting of lists.

Solution:

One way is to use the map() family from the purrr package. The means and standard_deviations columns, to which the rnorm() function is applied, are referenced by the .x and .y placeholders:

random_samples_map <- mean_sd_combinations |> mutate(
  samples = map2(means, standard_deviations, ~ rnorm(n = 5, mean = .x, sd = .y))
)

random_samples_map |> head()
## # A tibble: 6 × 3
##   means standard_deviations samples
##   <dbl>               <dbl> <list>
## 1    -1                 0.1 <dbl [5]>
## 2    -1                 0.5 <dbl [5]>
## 3    -1                 1   <dbl [5]>
## 4     0                 0.1 <dbl [5]>
## 5     0                 0.5 <dbl [5]>
## 6     0                 1   <dbl [5]>

Each entry of the new samples column consists of a list of five values drawn from the corresponding normal distribution:

random_samples_map$samples[[1]]
## [1] -1.0416796 -0.9907691 -0.9249944 -0.8859866 -1.0676741

For many use cases, the rowwise() function from the dplyr package offers a more user-friendly alternative. The column names means and standard_deviations can be used directly in the call to rnorm(), without placeholders.

Since the new column consists of lists, the call to rnorm() must be made within list():

random_samples_map <- mean_sd_combinations |>
  dplyr::rowwise() |>
  mutate(samples = list(rnorm(n = 5, mean = means, sd = standard_deviations)))

random_samples_map$samples[[1]]
## [1] -0.9437694 -0.9311953 -1.0259749 -1.0115392 -1.0614477

Bonus:

When working with list columns, the dplyr function nest_by() can be very useful; unlike tidyr::nest(), it forms groups row by row.

As an example, we form a separate group for each cyl (cylinder) value from the mtcars dataset. All remaining mtcars columns are bundled into a new column consisting of data frames.

mtcars |> nest_by(cyl)
## # A tibble: 3 × 2
## # Rowwise:  cyl
##     cyl                data
##   <dbl> <list<tibble[,10]>>
## 1     4           [11 × 10]
## 2     6            [7 × 10]
## 3     8           [14 × 10]

From this, we can add a new column with linear models of mpg (miles per gallon) as a function of hp (horsepower).

In a last step, we extract from this the slope coefficients, one number per cylinder value. The result is a single data frame containing the original data, the model objects and the slope coefficients:

mtcars |>
  nest_by(cyl) |>
  mutate(model = list(lm(mpg ~ hp, data = data))) |>
  mutate(slope = coef(model)[2])
## # A tibble: 3 × 4
## # Rowwise:  cyl
##     cyl                data model     slope
##   <dbl> <list<tibble[,10]>> <list>    <dbl>
## 1     4           [11 × 10] <lm>   -0.113
## 2     6            [7 × 10] <lm>   -0.00761
## 3     8           [14 × 10] <lm>   -0.0142

3. pluck

Problem:

From the nested list l, we want to select the string "c" at the lowest level, i.e., the third value of element b in the first list element of a. In total, we have to extract a value from the fourth level of the list.

l <- list(a = list(c(1, 2, list(b = c("a", "b", "c")))))
l
## $a
## $a[[1]]
## $a[[1]][[1]]
## [1] 1
##
## $a[[1]][[2]]
## [1] 2
##
## $a[[1]]$b
## [1] "a" "b" "c"

Solution:

This is of course possible without additional packages, but still difficult to read:

l$a[[1]]$b[3]
## [1] "c"

pluck() from the purrr package, on the other hand, solves the task elegantly and is easy to understand. The name or index of each level of the list is simply passed sequentially as an argument to the function:

l |> purrr::pluck("a", 1, "b", 3)
## [1] "c"

4. rownames_to_column & rowid_to_column

Problem 1:

The row names of a dataset should be written to the first column. As an example, we choose the well-known mtcars dataset, in which the row names describe the car model. They should be added to a new model column:

mtcars |> head()

                    mpg  cyl  disp   hp  drat     wt   qsec  vs  am  gear  carb
Mazda RX4          21.0    6   160  110  3.90  2.620  16.46   0   1     4     4
Mazda RX4 Wag      21.0    6   160  110  3.90  2.875  17.02   0   1     4     4
Datsun 710         22.8    4   108   93  3.85  2.320  18.61   1   1     4     1
Hornet 4 Drive     21.4    6   258  110  3.08  3.215  19.44   1   0     3     1
Hornet Sportabout  18.7    8   360  175  3.15  3.440  17.02   0   0     3     2
Valiant            18.1    6   225  105  2.76  3.460  20.22   1   0     3     1

Solution:

The tibble package provides the function rownames_to_column(). Its var parameter takes a string with the desired new column name. The new column is automatically placed at the first position of the dataset.

mtcars_model <- mtcars |> tibble::rownames_to_column(var = "model")
mtcars_model |> head()

model               mpg  cyl  disp   hp  drat     wt   qsec  vs  am  gear  carb
Mazda RX4          21.0    6   160  110  3.90  2.620  16.46   0   1     4     4
Mazda RX4 Wag      21.0    6   160  110  3.90  2.875  17.02   0   1     4     4
Datsun 710         22.8    4   108   93  3.85  2.320  18.61   1   1     4     1
Hornet 4 Drive     21.4    6   258  110  3.08  3.215  19.44   1   0     3     1
Hornet Sportabout  18.7    8   360  175  3.15  3.440  17.02   0   0     3     2
Valiant            18.1    6   225  105  2.76  3.460  20.22   1   0     3     1

Problem 2:

The second step is to add an index column that uniquely identifies each observation by an ID. To do this, we simply number the rows and write the row numbers in the new column.

Solution:

An obvious solution creates a new column using mutate() in combination with nrow() or dplyr::row_number() and sets it to the first position using relocate():

mtcars_model |>
  # alternatively: mutate(index = row_number()) |>
  mutate(index = 1:nrow(mtcars)) |>
  relocate(index) |>
  head()

index  model               mpg  cyl  disp   hp  drat     wt   qsec  vs  am  gear  carb
1      Mazda RX4          21.0    6   160  110  3.90  2.620  16.46   0   1     4     4
2      Mazda RX4 Wag      21.0    6   160  110  3.90  2.875  17.02   0   1     4     4
3      Datsun 710         22.8    4   108   93  3.85  2.320  18.61   1   1     4     1
4      Hornet 4 Drive     21.4    6   258  110  3.08  3.215  19.44   1   0     3     1
5      Hornet Sportabout  18.7    8   360  175  3.15  3.440  17.02   0   0     3     2
6      Valiant            18.1    6   225  105  2.76  3.460  20.22   1   0     3     1

Again, the tibble package provides a more condensed solution. rowid_to_column() completes our task in one step. As before, the var argument can be used to specify the name of the new column:

mtcars_model |>
  tibble::rowid_to_column(var = "index") |>
  head()

index  model               mpg  cyl  disp   hp  drat     wt   qsec  vs  am  gear  carb
1      Mazda RX4          21.0    6   160  110  3.90  2.620  16.46   0   1     4     4
2      Mazda RX4 Wag      21.0    6   160  110  3.90  2.875  17.02   0   1     4     4
3      Datsun 710         22.8    4   108   93  3.85  2.320  18.61   1   1     4     1
4      Hornet 4 Drive     21.4    6   258  110  3.08  3.215  19.44   1   0     3     1
5      Hornet Sportabout  18.7    8   360  175  3.15  3.440  17.02   0   0     3     2
6      Valiant            18.1    6   225  105  2.76  3.460  20.22   1   0     3     1

5. parse_number

Problem:

In our daily work with data, we often encounter data sets that need to be cleaned up before they can be reused.

The following data set contains a column with products and another column with associated prices. However, the prices are included in a string without a fixed structure:

data_prices <- tibble(
  product = 1:3,
  costs = c("$10 -> expensive", "cheap: $2.50", "free, $0 !!")
)

data_prices

product  costs
1        $10 -> expensive
2        cheap: $2.50
3        free, $0 !!

The task now is to extract the numerical price for each product from these strings.

Solution:

A working, but often inconvenient, solution is to use regular expressions. In this example, we look for the first match of at least one digit, followed by an optional period and decimal places. A disadvantage of this approach is that the resulting column is still of type character:

one_or_more_digits <- "\\d+"
optional_dot <- "\\.?"
optional_digits <- "\\d*"

data_prices |> mutate(price = stringr::str_extract(
  string = costs,
  pattern = paste0(one_or_more_digits, optional_dot, optional_digits)
))
## # A tibble: 3 × 3
##   product costs            price
##     <int> <chr>            <chr>
## 1       1 $10 -> expensive 10
## 2       2 cheap: $2.50     2.50
## 3       3 free, $0 !!      0

However, there is a more convenient way: the readr package, which is usually used for data import, provides the helper function parse_number(). It scans a vector of strings for the first number and extracts it from its context. Possible decimal places are automatically taken into account.

In this case, the new price column is directly of type double:

data_prices |> mutate(price = parse_number(costs))

product  costs             price
1        $10 -> expensive  10.0
2        cheap: $2.50      2.5
3        free, $0 !!       0.0

6. fct_lump_*

Problem:

In this example, we work with the babynames dataset from the R package of the same name, which lists the most popular baby names in the US over several decades. The column n indicates the absolute frequency of the name within a year:

babynames::babynames |> head()

year  sex  name       n     prop
1880  F    Mary       7065  0.0723836
1880  F    Anna       2604  0.0266790
1880  F    Emma       2003  0.0205215
1880  F    Elizabeth  1939  0.0198658
1880  F    Minnie     1746  0.0178884
1880  F    Margaret   1578  0.0161672

We are interested in which letters girls’ names most frequently ended with in the year 2000:

names_2000 <- babynames::babynames |> filter(year == 2000)

last_letters_females <- names_2000 |>
  mutate(last_letter = stringr::str_sub(name, start = -1, end = -1)) |>
  filter(sex == "F") |>
  count(last_letter, wt = n, name = "num_babies", sort = TRUE)

last_letters_females |> head(10)

last_letter  num_babies
a            675963
e            318399
n            248450
y            246324
h            117324
l            56623
r            50769
i            42591
s            32603
t            9796

As expected, some letters appear in the last position much more often than others. For overview purposes, all letters with low frequency should be grouped into a common Other category.

Solution:

The forcats package helps us with this. The fct_lump_*() family aggregates rarer values of a factor (or here character) variable according to various criteria:

  • fct_lump_n() keeps the n most frequent values and merges all other values into a new category.
  • fct_lump_min() summarizes all values which occur less often than a given absolute frequency.
  • fct_lump_prop() summarizes all values which occur less often than a given relative frequency (proportion between 0 and 1).
  • fct_lump_lowfreq() automatically summarizes the rarest values so that the aggregated Other category still has the lowest frequency among the new categories.
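The three variants that are not used below can be sketched on a small toy vector (this example is our own illustration, not part of the babynames analysis):

```r
library(forcats)

# toy factor: "a" occurs 5 times, "b" 3 times, "c" twice, "d" once
x <- factor(rep(c("a", "b", "c", "d"), times = c(5, 3, 2, 1)))

# keep only values occurring at least 3 times; "c" and "d" become "Other"
levels(fct_lump_min(x, min = 3))

# keep only values with a share of at least 25 %; again only "a" and "b" survive
levels(fct_lump_prop(x, prop = 0.25))

# lump the rarest values while "Other" stays the smallest category
levels(fct_lump_lowfreq(x))
```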

In our example, we use fct_lump_n() and keep the most common last five letters:

last_letters_females_lumped <- last_letters_females |>
  mutate(last_letter = factor(last_letter) |> fct_lump_n(
    n = 5, w = num_babies, other_level = "Other"
  )) |>
  count(
    last_letter,
    wt = num_babies, name = "num_babies", sort = TRUE
  )

last_letters_females_lumped

last_letter  num_babies
a            675963
e            318399
n            248450
y            246324
Other        208650
h            117324

The parameter w (for weight) can optionally specify a column whose values are summed up to determine the frequency. This is useful if, as in the example above, each letter occurs in only one row and the corresponding frequencies have already been calculated. The parameter is not needed if the frequencies have not yet been calculated and each letter is instead duplicated n times in the last_letter column.
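A minimal sketch of the unweighted case (with toy data of our own, not the babynames set): when each observation is its own row, fct_lump_n() counts the rows directly and no w is required:

```r
library(dplyr)
library(forcats)

# one row per observation; no precomputed counts
letters_raw <- tibble(
  last_letter = c(rep("a", 4), rep("e", 3), "n", "y")
)

letters_raw |>
  mutate(last_letter = fct_lump_n(factor(last_letter), n = 2)) |>
  count(last_letter, sort = TRUE)
# "n" and "y" are lumped into "Other"
```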

7. fct_reorder + geom_col

Problem:

We also stick with the babynames dataset for this example and visualize the number of the six most common girls’ names in a bar chart using geom_col():

plot_color <- "#8bac37"

top_names_females <- names_2000 |>
  filter(sex == "F") |>
  slice_max(n, n = 6)

top_names_females |>
  ggplot(aes(n, name)) +
  geom_col(fill = plot_color) +
  labs(
    title = "The 6 most common baby names for girls in 2000",
    x = "Frequency", y = NULL
  ) +
  theme_light() +
  theme(plot.title = element_text(hjust = 0.5))

The names are not ordered along the y-axis according to their frequency!

Solution:

To achieve this, we reorder the name column according to its frequency (column n).

This case occurs so often in practice that I use geom_col() almost exclusively in combination with fct_reorder() from the forcats package:

top_names_females |>
  mutate(name = fct_reorder(name, n)) |>
  ggplot(aes(n, name)) +
  geom_col(fill = plot_color) +
  labs(
    title = "The 6 most common baby names for girls in 2000",
    x = "Frequency", y = NULL
  ) +
  theme_light() +
  theme(plot.title = element_text(hjust = 0.5))

Bonus:

The above procedure no longer works as easily if a separate bar chart is to be plotted in descending frequency for each value of an additional factor variable. As an example, we now additionally consider the most frequent boy names:

top_names <- names_2000 |>
  group_by(sex) |>
  slice_max(n, n = 6)

With fct_reorder(), the bars in each subplot are always ordered according to their frequency in the entire data set (and not just within each value of the sex variable).

The tidytext package, which is primarily used to analyze text data, comes to our rescue here.

The helper functions reorder_within() and scale_y_reordered() do exactly this job and sort the values of the factor variable within each subplot:

top_names |>
  mutate(name = tidytext::reorder_within(name, by = n, within = sex)) |>
  ggplot(aes(n, name)) +
  geom_col(fill = plot_color) +
  labs(
    title = "The 6 most common baby names for girls and boys in 2000",
    x = "Frequency", y = NULL
  ) +
  facet_wrap(facets = vars(sex), scales = "free_y") +
  tidytext::scale_y_reordered() +
  theme_light() +
  theme(plot.title = element_text(hjust = 0.5))

8. separate & separate_rows

Problem 1: 

The following dataset represents the results of different international soccer matches:

data_games <- tibble(
  country = c("Germany", "France", "Spain"),
  game = c("England - win", "Brazil - loss", "Portugal - tie")
)

data_games

country  game
Germany  England - win
France   Brazil - loss
Spain    Portugal - tie

However, the game column includes two different types of information: the opponent as well as the result.

Solution:

To make the data frame tidy, we split the game column into two columns using the separate() function from the tidyr package:

data_games |> separate(col = game, into = c("opponent", "result"))

country  opponent  result
Germany  England   win
France   Brazil    loss
Spain    Portugal  tie

Problem 2:

A similar problem occurs when a column contains two pieces of information of the same type in each row. The opponent column now includes only opposing teams, but several per row:

data_opponents <- tibble(
  country = c("Germany", "France", "Spain"),
  opponent = c("England, Switzerland", "Brazil, Denmark", "Portugal, Argentina")
)

data_opponents

country  opponent
Germany  England, Switzerland
France   Brazil, Denmark
Spain    Portugal, Argentina

In this case, the desired output does not contain more columns, but more rows, one for each opponent.

Solution:

separate_rows() splits each entry of the opponent column into multiple rows; the corresponding country values are duplicated accordingly:

data_opponents |> separate_rows(opponent)

country  opponent
Germany  England
Germany  Switzerland
France   Brazil
France   Denmark
Spain    Portugal
Spain    Argentina

9. str_flatten_comma

Problem:

A vector of strings is to be combined into a single string. All entries are separated by a comma; only the last two are to be connected by the word “and”.

animals <- c("cat", "dog", "mouse", "elephant")
animals
## [1] "cat"      "dog"      "mouse"    "elephant"

Solution:

Without the stringr package, two calls to paste() are required:

  1. First, all entries except the last one are joined by a comma to form a single string.
  2. Then the result from step 1 is concatenated with the last vector entry.

paste(animals[-length(animals)], collapse = ", ") |>
  paste(animals[length(animals)], sep = " and ")
## [1] "cat, dog, mouse and elephant"

The stringr package provides its own function str_flatten_comma() with the very useful last parameter:

str_flatten_comma(animals, last = " and ")
## [1] "cat, dog, mouse and elephant"

10. arrange + distinct

Problem:

The final example is inspired by work on a recent project at eoda. We have a dataset with two columns: the first column (group) contains an indicator of the group membership of each observation. Within each group, only a single row should be kept: the one with the highest numerical value in the second column (value):

set.seed(123)

data_group_value <- tibble(
  group = c(1, 3, 2, 1, 1, 2, 3, 1),
  value = sample(1:100, size = 8, replace = FALSE)
)

data_group_value

group  value
1      31
3      79
2      51
1      14
1      67
2      42
3      50
1      43

Solution:

One possible approach is to use group_by() and slice_max() together:

data_group_value |>
  group_by(group) |>
  slice_max(value, n = 1)

group  value
1      67
2      51
3      79

The disadvantage here is that for large datasets, a large number of groups may be formed, which reduces the efficiency of the calculation. In addition, this approach does not lead to the desired result for duplicates, since slice_max() selects all observations with the maximum value:

data_group_value_duplicates <- data_group_value |>
  mutate(
    value = case_when(
      group == 1 ~ 20L,
      TRUE ~ value
    )
  )

data_group_value_duplicates |>
  group_by(group) |>
  slice_max(value, n = 1)

group  value
1      20
1      20
1      20
1      20
2      51
3      79

So in this case an additional call to slice(1) would be required to really keep only a single row per group.
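As a side note, slice_max() also accepts a with_ties argument that keeps only the first of several tied rows per group; a minimal sketch with toy data (not the project data):

```r
library(dplyr)

# toy data with a tie in group 1
df <- tibble(
  group = c(1, 1, 2),
  value = c(20, 20, 51)
)

df |>
  group_by(group) |>
  slice_max(value, n = 1, with_ties = FALSE) |>
  ungroup()
# one row per group, despite the tie in group 1
```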

A more efficient solution relies on the dplyr combination of arrange() and distinct(). First, all rows are sorted by group and, within each group, in descending order of their value. The maximum value to be selected is therefore always at the first position within each group.

In the second step, a call to distinct() is sufficient, because this function always keeps the first occurrence in case of duplicates and removes all others:

data_group_value_duplicates |>
  arrange(group, desc(value)) |>
  distinct(group, .keep_all = TRUE)

group  value
1      20
2      51
3      79

Conclusion

In this article, we have illustrated the usefulness of selected Tidyverse functions by means of various examples. Some of these problems could have been solved by other means as well, but only with greater effort.
