Zonal statistics in R




r.stats.zonal - Calculates category or object oriented statistics (accumulator-based statistics).


raster, statistics, zonal statistics



r.stats.zonal --help

r.stats.zonal [-cr] base=name cover=name method=string output=name [--overwrite] [--help] [--verbose] [--quiet] [--ui]


-c
    Cover values extracted from the category labels of the cover map
-r
    Create reclass map with statistics as category labels
--overwrite
    Allow output files to overwrite existing files
--help
    Print usage summary
--verbose
    Verbose module output
--quiet
    Quiet module output
--ui
    Force launching GUI dialog


base=name [required]
Name of base raster map
cover=name [required]
Name of cover raster map
method=string [required]
Method of object-based statistic
Options: count, sum, min, max, range, average, avedev, variance, stddev, skewness, kurtosis, variance2, stddev2, skewness2, kurtosis2
count: Count of values in specified objects
sum: Sum of values in specified objects
min: Minimum of values in specified objects
max: Maximum of values in specified objects
range: Range of values (max - min) in specified objects
average: Average of values in specified objects
avedev: Average deviation of values in specified objects
variance: Variance of values in specified objects
stddev: Standard deviation of values in specified objects
skewness: Skewness of values in specified objects
kurtosis: Kurtosis of values in specified objects
variance2: (2-pass) Variance of values in specified objects
stddev2: (2-pass) Standard deviation of values in specified objects
skewness2: (2-pass) Skewness of values in specified objects
kurtosis2: (2-pass) Kurtosis of values in specified objects
output=name [required]
Resultant raster map


r.stats.zonal is a tool to analyse exploratory statistics of a floating-point "cover layer" according to how it intersects with objects in a "base layer". A variety of standard statistical measures are possible (this is called "zonal statistics" in some GIS packages).


r.stats.zonal is intended to be a partial replacement for r.statistics, with support for floating-point cover maps at the expense of not supporting quantiles. For quantiles, see r.stats.quantile.


In this example, the zipcodes raster map in the North Carolina sample dataset is used as the base layer to calculate zonal raster statistics from the elevation raster map:
g.region raster=zipcodes -p

# pixel count in zipcode areas
r.stats.zonal base=zipcodes cover=elevation method=count output=zipcodes_elev_count
r.colors zipcodes_elev_count color=gyr -g

# average elevation in zipcode areas
r.stats.zonal base=zipcodes cover=elevation method=average output=zipcodes_elev_avg
r.colors zipcodes_elev_avg color=elevation -g


r.quantile, r.stats.quantile, r.statistics


Glynn Clements



Available at: r.stats.zonal source code (history)



© GRASS Development Team, GRASS GIS dev Reference Manual

Sours: https://grass.osgeo.org/grass76/manuals/r.stats.zonal.html

Robust Zonal stats in R and QGIS

A robust zonal statistics workflow in QGIS can be implemented with PyQGIS. The following code was run with your layers, using a filter to select only features with ID_sub.pat between 19 and 27 (shown in your image).

After running the code in the QGIS Python Console, the following result was obtained:

where the first column contains the raster values, the second column the weight factor, and the third column the ID of the polygon feature. You can observe that for polygons 20, 21, 26 and 27 the factor is always one (as expected). In the other cases it is proportional to the intersection of the polygon feature with the grid representing the raster.

The results were corroborated with the Value Tool and QuickWKT QGIS plugins, and they were as expected.


Sours: https://gis.stackexchange.com/questions//robust-zonal-stats-in-r-and-qgis

Zonal statistics is a useful tool that allows you to compute arbitrary statistics from the raster pixels that intersect with, for example, a polygon from a vector layer. Most of the time, GIS software offers a default selection of statistics such as maximum, average, minimum, and so on. But what if we want to calculate a statistic that we have invented ourselves? In GIS software it is not impossible, but it requires a lot of work, e.g. in the Modeler. Luckily we have R, where you can do it in a single function :) Anyway, let's start with simple things. We load the raster library, which provides the extract function that can be used to extract the values of the raster pixels that fall within a given vector feature:
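A minimal sketch of this first step (this assumes the package is already installed, e.g. with install.packages("raster")):

```r
# load the raster package, which provides extract()
library(raster)
```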



We initialize the rgdal library, which we will use to load vector data:
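The corresponding call might look like:

```r
# load rgdal for reading vector data (shapefiles, etc.)
library(rgdal)
```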


We load a raster file and a file with a vector layer, e.g. from the free layers (in our case, DEM and a layer with districts from Mazowieckie Voivodeship):
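A sketch of the loading step; the file names below are placeholders for your own DEM and district layer, not the blog's actual data:

```r
# substitute your own file paths for the DEM raster and district polygons
dem <- raster("nmt.tif")
districts <- readOGR("powiaty.shp")

# display the loaded layers together
plot(dem)
plot(districts, add = TRUE)
```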




The loaded layers look like this:

Using the extract function from the raster library, we can extract the DEM cell values that are within the first two districts, for example:
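Assuming the dem and districts objects loaded earlier, the extraction for the first two districts might look like:

```r
# list of pixel value vectors, one element per district
h <- extract(dem, districts[1:2, ])
```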



The variable h is a list whose individual elements contain the elevation values for each district. We can inspect these values with:
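For example:

```r
h[[1]]  # elevation values inside the first district
h[[2]]  # elevation values inside the second district
```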




For example, if we want to count the average height value for a given district, we need to add the attribute fun = mean to the extraction function:
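A sketch of that call, reusing the objects from the previous steps:

```r
# one mean elevation per district
h_mean <- extract(dem, districts, fun = mean, na.rm = TRUE)
```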



When we call the variable h_mean in the console, we see that it is already an array of values:





We can pass any function we create as the fun attribute of the extract function. Recall that for each district we receive a vector of values to process. Let's define a function that computes a statistic returning the percentage of pixels with values greater than the mean elevation in the district. First, we create the function our_function, which takes a local variable x (our set of pixel values) and a logical argument na.rm, which is responsible for removing NA values:

our_function <- function(x, na.rm = TRUE) {



Let's add a variable to the function that contains the mean value:
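One way to write this step (the variable name avg is an assumption, since the original snippet was lost):

```r
avg <- mean(x, na.rm = na.rm)
```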

Let's count how many pixels there are in total:
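For instance (the name total is assumed):

```r
total <- length(x)
```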

and count how many are larger than avg:
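A possible line for this step (the name above is assumed):

```r
above <- length(x[x > avg])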



Let's count the percentage:
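For example:

```r
pct <- above / total * 100
```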



The function returns the percentage:
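Inside the function body, this reads:

```r
return(pct)
```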

We close the function definition:
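That is, we add the closing brace:

```r
}
```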

Our function should eventually look like this:

our_function <- function(x, na.rm = TRUE) {
  avg <- mean(x, na.rm = na.rm)
  total <- length(x)
  above <- length(x[x > avg])
  pct <- above / total * 100
  return(pct)
}















To get the percentage value, we add it to the fun attribute in the extract function:
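Assuming the dem and districts objects from the earlier steps, the call might look like:

```r
# percentage of pixels above the district mean, one value per district
h_pct <- extract(dem, districts, fun = our_function, na.rm = TRUE)
```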



The result is an array containing the percentage values from the function we defined:






We can modify the extract function depending on what data we want to obtain. We recommend having a look at the technical documentation of this function, or at its help page:
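For example:

```r
?extract
# or, equivalently:
help(extract, package = "raster")
```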

Sours: http://geoprofesja.pl/en/zonal-statistics-in-r/



exactextractr is an R package that quickly and accurately summarizes raster values over polygonal areas, commonly referred to as zonal statistics. Unlike most zonal statistics implementations, it handles grid cells that are partially covered by a polygon. Typical performance for real-world applications is orders of magnitude faster than the raster package.


Calculations are performed using the exactextract C++ tool. Additional background and a description of the method is available here. Full package reference documentation is available here.

Basic Usage

The package provides an exact_extract method that operates analogously to the extract method in the raster package. The snippet below demonstrates the use of this function to compute monthly mean precipitation for each municipality in Brazil.

library(raster)
library(sf)
library(exactextractr)

# Pull municipal boundaries for Brazil
brazil <- st_as_sf(getData('GADM', country='BRA', level=2))

# Pull gridded precipitation data
prec <- getData('worldclim', var='prec', res=10)

# Calculate vector of mean December precipitation amount for each municipality
brazil$mean_dec_prec <- exact_extract(prec[[12]], brazil, 'mean')

# Calculate data frame of min and max precipitation for all months
brazil <- cbind(brazil, exact_extract(prec, brazil, c('min', 'max')))

Summary Operations

exact_extract can summarize raster values using several named operations as well as arbitrary R functions. Where applicable, a named operation will provide better performance and reduced memory usage relative to an equivalent R function. Named operations are specified by providing a character vector with one or more operation names to the fun parameter of exact_extract.

The following summary operations are supported:

- count: Sum of all cell coverage fractions.
- majority (or mode): The raster value with the largest sum of coverage fractions.
- max: Maximum value of cells that intersect the polygon, ignoring coverage fractions.
- mean: Mean value of cells that intersect the polygon, weighted by the fraction of the cell that is covered.
- median: Median value of cells that intersect the polygon, weighted by the fraction of the cell that is covered.
- quantile: Arbitrary quantile value of cells that intersect the polygon, weighted by the fraction of the cell that is covered.
- min: Minimum value of cells that intersect the polygon, ignoring coverage fractions.
- minority: The raster value with the smallest sum of coverage fractions.
- sum: Sum of values of raster cells that intersect the polygon, with each raster value weighted by its coverage fraction.
- variety: The number of distinct raster values in cells wholly or partially covered by the polygon.
- variance: The population variance of cell values, weighted by the fraction of each cell that is covered by the polygon.
- stdev: The population standard deviation of cell values, weighted by the fraction of each cell that is covered by the polygon.
- coefficient_of_variation: The population coefficient of variation of cell values, weighted by the fraction of each cell that is covered by the polygon.

Two additional summary operations require the use of a second weighting raster, provided in the weights argument to exact_extract:

- weighted_mean: Mean defined value of cells that intersect the polygon, weighted by the product of the coverage fraction and the value of a second weighting raster.
- weighted_sum: Sum of defined values of raster cells that intersect the polygon, multiplied by the coverage fraction and the value of a second weighting raster.

Weighted usage is discussed in more detail below.

Undefined (NA) values are ignored in all of the named summary operations when they occur in the value raster. When they occur in the weighting raster, they cause the result of the summary operation to be NA.

Summary Functions

In addition to the summary operations described above, exact_extract can accept an R function to summarize the cells covered by the polygon. Because exact_extract takes into account the fraction of the cell that is covered by the polygon, the summary function must take two arguments: the value of the raster in each cell touched by the polygon, and the fraction of that cell area that is covered by the polygon. (This differs from raster::extract, where the summary function takes the vector of raster values as a single argument and effectively assumes that the coverage fraction is 1.0.)

An example of a built-in function with the appropriate signature is weighted.mean. Some examples of custom summary functions are:

# Number of cells covered by the polygon (raster values are ignored)
exact_extract(rast, poly, function(values, coverage_fraction) sum(coverage_fraction))

# Sum of defined raster values within the polygon, accounting for coverage fraction
exact_extract(rast, poly, function(values, coverage_fraction) sum(values * coverage_fraction, na.rm=TRUE))

# Number of distinct raster values within the polygon (coverage fractions are ignored)
exact_extract(rast, poly, function(values, coverage_fraction) length(unique(values)))

# Number of distinct raster values in cells more than 10% covered by the polygon
exact_extract(rast, poly, function(values, coverage_fraction) length(unique(values[coverage_fraction > 0.1])))

Weighted Usage

exact_extract allows for calculation of summary statistics based on multiple raster layers, such as a population-weighted temperature. The weighting raster must use the same coordinate system as the primary raster, and it must use a grid that is compatible with the primary raster. (The resolutions and extents of the rasters need not be the same, but the higher resolution must be an integer multiple of the lower resolution, and the cell boundaries of both rasters must coincide with cell boundaries in the higher-resolution grid.)

One application of this feature is the calculation of zonal statistics on raster data in geographic coordinates. The previous calculation of mean precipitation amount across Brazilian municipalities assumed that each raster cell covered the same area, which is not correct for rasters in geographic coordinates (latitude/longitude).

We can correct for varying cell areas by creating a weighting raster with the area of each cell in the primary raster, using the area function from the raster package.
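A sketch of that step, reusing the prec raster from the earlier snippets:

```r
# RasterLayer of cell areas (km^2), aligned with the primary raster grid
cell_areas <- area(prec)
```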

Weighted Summary Operations

Performing a weighted summary with the weighted_mean and weighted_sum operations is as simple as providing a weighting raster to the weights argument of exact_extract.

The area-weighted mean precipitation calculation can be expressed as:

brazil$mean_dec_prec_weighted <- exact_extract(prec[[12]], brazil, 'weighted_mean', weights = area(prec))

With the relatively small polygons used in this example, the error introduced by assuming constant cell area is negligible. However, for large polygons that span a wide range of latitudes, this may not be the case.

Weighted Summary Functions

A weighting raster can also be provided when an R summary function is used. When a weighting raster is provided, the summary function must accept a third argument containing the values of the weighting raster.

An equivalent to the usage above could be written as:

brazil$mean_dec_prec_weighted <- exact_extract(prec[[12]], brazil,
  function(values, coverage_frac, weights) {
    weighted.mean(values, coverage_frac * weights)
  }, weights = area(prec))

Or, to calculate the area-weighted mean precipitation for all months:

brazil <- cbind(brazil, exact_extract(prec, brazil,
  function(values, coverage_frac, weights) {
    weighted.mean(values, coverage_frac * weights)
  }, weights = area(prec), stack_apply = TRUE))

In this example, the stack_apply argument is set to TRUE so that the summary function will be applied to each layer of prec independently. (If stack_apply = FALSE, the summary function will be called with all values of prec in a 12-column data frame.)

Additional Usages

Multi-Raster Summary Functions

A multi-raster summary function can also be written to implement complex behavior that requires that multiple layers in a RasterStack be considered simultaneously.

Here, we compute an area-weighted average temperature by calling exact_extract with a RasterStack of minimum and maximum temperatures, and a weighting raster of cell areas.

tmin <- getData('worldclim', var='tmin', res=10)
tmax <- getData('worldclim', var='tmax', res=10)
temp <- stack(tmin[[12]], tmax[[12]])

brazil$tavg_dec <- exact_extract(temp, brazil,
  function(values, coverage_fraction, weights) {
    tavg <- 0.5 * (values$tmin12 + values$tmax12)
    weighted.mean(tavg, coverage_fraction * weights)
  }, weights = area(prec))

When exact_extract is called with a RasterStack of values or weights and stack_apply = FALSE (the default), the values or weights from each layer of the RasterStack will be provided to the summary function as a data frame.

In the example above, the summary function is provided with a data frame of values (containing the values for each layer in the stack), a vector of coverage fractions, and a vector of weights.

Multi-Valued Summary Functions

In some cases, it is desirable for a summary function to return multiple values for each input feature. A common application is to summarize the fraction of each polygon that is covered by a given class of a categorical raster. This can be accomplished by writing a summary function that returns a one-row data frame for each input feature. The data frames for each feature will then be combined into a single data frame.

In this example, the mean precipitation for each municipality is returned for each altitude category.

altitude <- getData('alt', country='BRA')

prec_for_altitude <- exact_extract(prec[[12]], brazil,
  function(prec, frac, alt) {
    # ignore cells with unknown altitude
    prec <- prec[!is.na(alt)]
    frac <- frac[!is.na(alt)]
    alt <- alt[!is.na(alt)]

    low <- !is.na(alt) & alt < 500    # altitude threshold lost in the source; 500 m assumed
    high <- !is.na(alt) & alt >= 500

    data.frame(prec_low_alt = weighted.mean(prec[low], frac[low]),
               prec_high_alt = weighted.mean(prec[high], frac[high]))
  }, weights = altitude)


Rasterization

exactextractr can rasterize polygons through the computation of the coverage fraction in each cell. The coverage_fraction function returns a RasterLayer with values from 0 to 1 indicating the fraction of each cell that is covered by the polygon. Because this function generates a RasterLayer for each feature in the input dataset, it can quickly consume a large amount of memory. Depending on the analysis being performed, it may be advisable to manually loop over the features in the input dataset and combine the generated rasters during each iteration.

Performance and Accuracy

An example benchmark using the example data is shown below; the mean execution time for exact_extract was substantially lower than for raster::extract. Timing was obtained from execution on an AWS instance.

microbenchmark(
  a <- exact_extract(prec[[12]], brazil, weighted.mean),
  b <- extract(prec[[12]], brazil, mean, na.rm = TRUE),
  times = 5)
# Unit: seconds
# expr                   min  lq  mean  median  uq  max  neval
# a <- exact_extract()                                   5
# b <- extract()                                         5

Results from exact_extract are more accurate than other methods because raster pixels that are partially covered by polygons are considered. The significance of partial coverage increases for polygons that are small or irregularly shaped. For the Brazilian municipalities used in the example, the error introduced by incorrectly handling partial coverage is less than 1% for 88% of municipalities and reaches a maximum of 9%.

Although exact_extract is fast, it may still be slower than the exactextract command-line tool. However, some efficiencies, such as the simultaneous processing of multiple layers in a RasterStack, are only possible in R.


Installation

Installation requires version 3.5 or greater of the GEOS geometry processing library. It is recommended to use the most recent released version for best performance. On Windows, GEOS will be downloaded automatically as part of package install. On MacOS, it can be installed using Homebrew (brew install geos). On Linux, it can be installed from system package repositories (libgeos-dev on Debian/Ubuntu, or geos-devel on CentOS/RedHat).

Sours: https://github.com/isciences/exactextractr


zonal.stats: zonal.stats


Polygon zonal statistics of a raster


zonal.stats(x, y, stats = c("min", "mean", "max"))



x
    Polygon object of class SpatialPolygonsDataFrame

y
    rasterLayer object of class raster

Value

A data.frame with nrow(x) rows and one column per function result


# NOT RUN {
library(raster)
library(sp)

# skewness function
skew <- function(x, na.rm = FALSE) {
  if (na.rm) x <- x[!is.na(x)]
  sum((x - mean(x))^3) / (length(x) * sd(x)^3)
}

# percent x >= p function
pct <- function(x, p = 0.30, na.rm = FALSE) {  # default for p lost in the source; 0.30 assumed
  if (length(x[x >= p]) < 1) return(0)
  if (length(x[x >= p]) == length(x)) return(1)
  else return(length(x[x >= p]) / length(x))
}

# create some example data
p <- raster(nrow = 10, ncol = 10)
p[] <- runif(ncell(p)) * 10
p <- rasterToPolygons(p, fun = function(x) { x > 9 })
r <- raster(nrow = 100, ncol = 100)  # dimensions lost in the source; 100 x 100 assumed
r[] <- runif(ncell(r))
plot(r)
plot(p, add = TRUE, lwd = 4)

# run zonal statistics using skew and pct functions
z.skew <- zonal.stats(x = p, y = r, stats = "skew")
z.pct <- zonal.stats(x = p, y = r, stats = "pct")
(z <- data.frame(ID = as.numeric(as.character(row.names(p@data))),
                 SKEW = z.skew, PCT = z.pct))
# }
Sours: https://www.rdocumentation.org/packages/spatialEco/versions//topics/zonal.stats