Landscape optimization #60
I think it would make sense to have it in NLMR (to generate neutral landscapes that have a certain configuration/composition).
I worked on something similar for rasters, so maybe we wouldn't need the GenAlg dependency.
True, for shar, right? But should it go here or in another package?
It was in one of the very early versions of shar, yes. I ditched it at some point, but I'll try to find the old code. It would definitely need some modifications, but it might be nice as a starting point.
Good idea. I used it for one project about a year ago, but in the end I decided on using NLMR. I even wrote an R wrapper - https://gitlab.com/nowosad/landscapegenerator. I think it should be either (1) a new package or (2) a part of NLMR.
Okay, this is what I came up with. Currently, only landscape-level metrics are possible; I'm not sure if patch- or class-level metrics make any sense. It's possible to either provide a starting landscape or to start with a completely random landscape. In the latter case, the dimensions (nrow, ncol, resolution) and the number of classes must be provided. At the moment the configuration and composition are optimized, but not the diversity (in other words, the number of classes). The algorithm picks a random cell and assigns a "new" value to it. By "new" I mean a value that is already present in the landscape, but not the current value of the cell. After each assignment, it checks whether the landscape characteristics approached the target values; if yes, the change is kept. We could also think about introducing new classes, but this makes everything way more complicated conceptually. What would be the rule for "new" classes? How often should this happen? What would be the actual value? And so on... I will share the code in a second...
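A minimal sketch of the single proposal-and-acceptance step described above, purely for illustration (the full function follows below; propose_once and its arguments are placeholder names, not part of the shared code):

# Illustrative only: one iteration of the rule described above.
# `landscape` is a categorical RasterLayer, `metrics`/`targets` as in the full code below,
# `energy` is the current mean relative deviation (%) from the targets.
propose_once <- function(landscape, metrics, targets, energy) {
  candidate <- landscape
  cell <- sample(raster::ncell(candidate), size = 1)                  # pick a random cell
  present <- unique(raster::values(candidate))                        # values already in the landscape
  new_values <- present[present != candidate[cell]]                   # "new" = present, but not the current value
  candidate[cell] <- new_values[sample.int(length(new_values), size = 1)]
  result <- landscapemetrics::calculate_lsm(candidate, what = metrics,
                                            verbose = FALSE, full_name = TRUE)
  result <- result[order(match(metrics, result$function_name)), ]     # same order as `metrics`
  energy_new <- mean(abs(targets - result$value) / targets * 100)     # mean relative deviation (%)
  if (energy_new < energy) candidate else landscape                   # keep the change only if it helps
}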
library(raster)
#> Loading required package: sp
library(landscapemetrics)
library(NLMR)
# @param landscape Starting landscape. If NULL, a random landscape will be created.
# @param size Only needed if landscape is NULL. Vector with four integer values: (i) nrow, (ii) ncol, (iii) resolution, (iv) number of classes.
# @param metrics Landscape metrics used for the optimization. Only landscape-level metrics are allowed.
# @param targets Target values of the selected landscape metrics.
# @param energy_threshold If the relative deviation (energy) between the optimized value and the target value is smaller than the threshold, the algorithm stops.
# @param max_runs Maximum number of iterations.
# @param no_change Number of iterations without a decrease of the deviation (energy) after which the algorithm stops.
# @param progress Print progress report.
# @param ... Arguments passed on to landscapemetrics::calculate_lsm.
optimize_lsm <- function(landscape = NULL,
                         size = NULL,
                         metrics, targets,
                         energy_threshold = 0,
                         max_runs = 1000,
                         no_change = NULL,
                         progress = TRUE,
                         ...) {

  if (!all(metrics %in% landscapemetrics::list_lsm(level = "landscape",
                                                   simplify = TRUE, verbose = FALSE))) {
    stop("Only landscape level metrics allowed. To see a list, please use landscapemetrics::list_lsm(level = 'landscape', simplify = TRUE).",
         call. = FALSE)
  }

  if (length(metrics) != length(targets)) {
    warning("Length of metrics and targets not equal. Using the first target for all metrics.",
            call. = FALSE)
    targets <- rep(targets[[1]], times = length(metrics))
  }

  if (is.null(landscape)) {
    raster_values <- sample(x = 1:size[[4]],
                            size = size[[1]] * size[[2]],
                            replace = TRUE)
    landscape <- raster::raster(matrix(data = raster_values,
                                       nrow = size[[1]], ncol = size[[2]]))
    raster::extent(landscape) <- c(0, ncol(landscape) * size[[3]],
                                   0, nrow(landscape) * size[[3]])
  }

  # set no_change to 75% of max_runs
  if (is.null(no_change)) {
    no_change <- floor(max_runs * 0.75)
  }

  # counter of iterations without energy improvement
  counter <- 0

  # calculate landscape metrics of the starting landscape
  metrics_result <- calculate_lsm(landscape, what = metrics,
                                  verbose = FALSE,
                                  full_name = TRUE,
                                  ...)

  # same order as input
  metrics_result <- metrics_result[order(match(metrics, metrics_result$function_name)), ]

  # calculate energy (mean relative deviation from the targets in %)
  energy <- mean(abs(targets - metrics_result$value) / targets * 100)

  # random cell ids (as many as max_runs)
  random_id <- sample(1:raster::ncell(landscape), size = max_runs, replace = TRUE)

  # values already present in the landscape
  random_value <- unique(raster::values(landscape))

  # it would also be possible to always swap two raster cells. But that would only
  # change the configuration and not the composition.
  # changing the composition might be problematic because how would classes be
  # added or removed completely?

  # simulated annealing - not longer than max_runs
  for (i in seq_len(max_runs)) {

    random_lsm <- landscape

    # get current cell value
    current_value <- random_lsm[random_id[[i]]]

    # assign a value already present in the landscape, but not the current one
    new_value <- random_value[!random_value %in% current_value]
    random_lsm[random_id[[i]]] <- new_value[sample.int(length(new_value), size = 1)]

    # calculate landscape metrics after switching
    metrics_result <- calculate_lsm(random_lsm, what = metrics,
                                    verbose = FALSE,
                                    full_name = TRUE,
                                    ...)

    # same order as input
    metrics_result <- metrics_result[order(match(metrics, metrics_result$function_name)), ]

    # calculate energy of the changed landscape
    energy_random <- mean(abs(targets - metrics_result$value) / targets * 100)

    # lower deviation between target and landscape value -> keep change
    if (energy_random < energy) {
      # keep random landscape
      landscape <- random_lsm
      # keep energy_random as energy
      energy <- energy_random
    }
    # no improvement
    else {
      counter <- counter + 1
    }

    # print progress
    if (progress) {
      message("\r> Progress: ", i, "/", max_runs, " || energy = ", round(energy, 2),
              "% (threshold = ", energy_threshold, "%) \t\t\t",
              appendLF = FALSE)
    }

    # break if annealing is good enough
    if (energy <= energy_threshold || counter > no_change) {
      break
    }
  }

  if (progress) {
    message("")
  }

  # return the accepted landscape (not the last random proposal)
  return(landscape)
}
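# The example call below assumes that a categorical RasterLayer named `landscape`
# already exists in the session (its creation is not shown in this reprex); it
# could, for example, be a neutral landscape generated with NLMR.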
result_a <- optimize_lsm(landscape = landscape,
                         metrics = c("lsm_l_split", "lsm_l_core_mn"),
                         targets = c(1.5, 0.005),
                         energy_threshold = 5,
                         max_runs = 5000, no_change = 5000,
                         progress = FALSE)
lsm_l_split(result_a)
#> # A tibble: 1 x 6
#> layer level class id metric value
#> <int> <chr> <int> <int> <chr> <dbl>
#> 1 1 landscape NA NA split 1.60
lsm_l_core_mn(result_a)
#> # A tibble: 1 x 6
#> layer level class id metric value
#> <int> <chr> <int> <int> <chr> <dbl>
#> 1 1 landscape NA NA core_mn 0.00485
plot(result_a)

result_b <- optimize_lsm(size = c(30, 30, 20, 5),
                         metrics = c("lsm_l_split", "lsm_l_core_mn"),
                         targets = c(3.5, 0.005),
                         energy_threshold = 5,
                         max_runs = 5000, no_change = 5000,
                         progress = FALSE)
lsm_l_split(result_b)
#> # A tibble: 1 x 6
#> layer level class id metric value
#> <int> <chr> <int> <int> <chr> <dbl>
#> 1 1 landscape NA NA split 5.21
lsm_l_core_mn(result_b)
#> # A tibble: 1 x 6
#> layer level class id metric value
#> <int> <chr> <int> <int> <chr> <dbl>
#> 1 1 landscape NA NA core_mn 0.005
plot(result_b)

Created on 2019-04-26 by the reprex package (v0.2.1)
Would be nice to have the functionality of LG (http://www.lg.ethz.ch/).
Would also mean that we need landscapemetrics and NLMR as dependencies, as well as the GenAlg package.
Any opinions? Or another package for that?