Wednesday 29 July 2015

Neocartography Can Be Beautiful, Cartography Can Be Fast



Nicolas Regnauld, Product Manager at 1Spatial, discusses the debate between neocartography and traditional cartography, and the evolution towards more intelligent automation tools for map creation.
Like the neogeography buzzword that emerged a few years ago, neocartography is a more recent term that also divides the discussion into two camps: traditional cartographers, who argue that ‘maps need humans to create them using psychology and aesthetics’, versus neocartographers, who counter that ‘you have to automate map generation in order to be responsive and accessible’.

The rise of neocartography has been recognized by the International Cartographic Association (ICA), which created a new Commission on Neocartography in July 2011. The creation of this commission at the ICA highlights the growth of this ‘non-expert group’ of mappers (Ed Parsons [1]) and should promote knowledge exchange between these two groups: non-expert mappers and experienced cartographers. This is a view supported by the chair of the commission, Steve Chilton, who said long before the commission started: “My contention is that cartographers need to embrace these neo-cartographers, and work with them in the way that they possibly didn’t with GIS providers/users, and to get out there and influence the way we look at the world”.[2]

One of the problems with neocartography, highlighted by R Weibel and G Dutton [3], is visualisation at different scales, which traditionally has relied on manual/human intervention to turn detailed, large-scale data into intelligently simplified maps that follow cartographic principles. Neocartography aims to produce maps from widely available data sources and tools, with little manual intervention or modification of the data. As a result, it has to rely on quite blunt techniques, such as excluding certain data layers at smaller scales, filtering vertices, and then simply allowing the resulting feature representations to overlap.
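To make the contrast concrete, here is a minimal sketch of those blunt techniques in Python, assuming the Shapely library; the layer names, zoom thresholds and tolerance formula are illustrative inventions, not taken from any particular renderer:

```python
# A minimal sketch of the "blunt" techniques described above, assuming
# Shapely: whole layers are dropped below a zoom threshold, and surviving
# geometries get Douglas-Peucker vertex filtering. Layer names and
# thresholds are hypothetical, chosen for illustration only.
from shapely.geometry import LineString

# Hypothetical minimum zoom level at which each layer is still drawn
LAYER_MIN_ZOOM = {"buildings": 14, "minor_roads": 12, "major_roads": 8}

def blunt_generalize(layers, zoom):
    """Drop whole layers by zoom, then filter vertices; any overlaps
    between the surviving features are simply left to happen."""
    tolerance = 2 ** (16 - zoom)  # coarser filtering as you zoom out (illustrative)
    out = {}
    for name, geoms in layers.items():
        if zoom < LAYER_MIN_ZOOM.get(name, 0):
            continue  # blunt exclusion: the whole layer disappears
        out[name] = [g.simplify(tolerance) for g in geoms]  # Douglas-Peucker
    return out

roads = {"major_roads": [LineString([(0, 0), (5, 0.1), (10, 0)])]}
print(blunt_generalize(roads, zoom=10))
```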

I have worked for many years on automating this process, commonly referred to as generalization: simplifying detailed spatial data so that maps can be made at smaller scales. First in academia, to prove the concept; then at Ordnance Survey GB, to develop bespoke generalization processes suitable for production. Now, at a commercial software company (1Spatial), the focus is on developing generic generalization solutions that can easily be deployed to a variety of organisations. These initiatives were driven by the need to provide National Mapping Agencies with faster and more cost-effective ways of deriving their maps from their large-scale data, without sacrificing cartographic quality. However, such techniques could also be valuable to the neocartographer: this time the software can help reduce the density of data in a sensible way, in order to produce better-quality maps without sacrificing speed and agility.

In the blog post ‘The pitfalls of crowdsourced cartography’, Alan McConchie (an author on the Mapping Mashups blog) wrote about the problem of ‘spatially varying map quality’. This describes how online maps can grow at different rates from area to area, and at different levels of accuracy and detail. McConchie goes on to say that ‘new techniques for rendering crowdsourced map data will need to be developed that can gracefully handle differences in the level of detail present in the database’.[4] Again, generalization tools are a plausible solution to this problem: when mapping an area that has been captured with varying levels of detail, the data can all be generalized to a common level of detail, according to the needs of the target map.
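As a rough illustration of what generalizing to a common level of detail could look like, here is a small Python sketch, again assuming Shapely geometries; the rule of thumb for deriving a tolerance from the target scale (a fraction of a millimetre on the printed map) is an assumption for illustration only:

```python
# A sketch of bringing data captured at varying levels of detail down to
# one common level, assuming Shapely geometries. The 0.3 mm-at-scale
# heuristic is an illustrative assumption, not from the article.
from shapely.geometry import LineString

def common_tolerance(target_scale, mm_on_map=0.3):
    """Ground distance corresponding to mm_on_map at the target scale."""
    return target_scale * mm_on_map / 1000.0  # metres

def harmonize(features, target_scale):
    tol = common_tolerance(target_scale)
    # Detailed areas lose vertices, coarse areas are left nearly
    # unchanged, so the whole map converges on one level of detail.
    return [f.simplify(tol) for f in features]

detailed = LineString([(0, 0), (1, 0.2), (2, 0), (3, 0.1), (4, 0)])
coarse = LineString([(0, 0), (4, 0)])
print([len(g.coords) for g in harmonize([detailed, coarse], 50000)])
```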

My exploration work on generalizing crowdsourced data using 1Spatial’s 1Generalise product is a good example of such new techniques for automating map production while still applying cartographic principles. Crowdsourced data is by its nature a detailed, large-scale dataset, because contributors capture what they see on the ground: buildings, roads, rivers etc. What is often missing is the intelligent aggregation, omission or exaggeration of these features into more abstract concepts that are useful at smaller scales. For example, as the scale gets smaller (or, in neocartography terms, when you zoom out), a cluster of several buildings could be shown as a couple of simplified rectangles, then as an urban block filling the enclosing roads, then amalgamated as part of a larger town area, and finally replaced by a town point (a small code sketch of this amalgamation step follows the figures below).

1SPATIAL – OSM DATA STYLED FOR 1:50 DISPLAY


1SPATIAL – GENERALIZED DATA FOR 1:50 DISPLAY
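To give a flavour of the amalgamation step, here is a hedged Python sketch using a buffer/union/negative-buffer (“morphological closing”) trick on Shapely polygons; the 15 m closing distance and toy building footprints are illustrative assumptions, and this is not how 1Generalise itself is implemented:

```python
# A minimal sketch of amalgamating a building cluster into an urban
# block: grow each building, merge the overlaps, then shrink back, so
# nearby buildings fuse into one block-like polygon.
from shapely.geometry import box
from shapely.ops import unary_union

buildings = [box(0, 0, 10, 10), box(12, 0, 22, 10), box(0, 13, 10, 23)]

def amalgamate(polys, closing=15.0):
    """Morphological closing: buffer out, union, buffer back in."""
    grown = unary_union([p.buffer(closing) for p in polys])
    return grown.buffer(-closing)

block = amalgamate(buildings)
print(round(block.area))  # one merged polygon instead of three buildings

# Zooming out further, the block could collapse to a point:
print(block.centroid)     # candidate position for a town point symbol
```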

Equally, value judgements may be needed when excluding roads from a network. Rather than suddenly excluding all minor roads at a certain scale, it is better to do this selectively and ask: how significant is this minor road? Is it an insignificant short dead-end? Or is it a short dead-end that leads to a hospital?
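One possible encoding of that kind of value judgement, sketched in plain Python; the attribute names (road_class, length_m, leads_to) and the thresholds are hypothetical, chosen only to illustrate selective omission:

```python
# A hedged sketch of selective road omission: instead of dropping every
# minor road at a given scale, keep a short dead-end if it serves an
# important destination. All names and numbers here are illustrative.
IMPORTANT_DESTINATIONS = {"hospital", "school", "airport"}

def keep_road(road, scale):
    if road["road_class"] != "minor":
        return True                      # major roads always survive here
    if scale < 25000:
        return True                      # large scales keep everything
    if road["is_dead_end"] and road["length_m"] < 200:
        # A short dead-end is usually insignificant -- unless it leads
        # somewhere that matters to the map reader.
        return road.get("leads_to") in IMPORTANT_DESTINATIONS
    return True

road = {"road_class": "minor", "is_dead_end": True,
        "length_m": 150, "leads_to": "hospital"}
print(keep_road(road, scale=50000))  # True: kept despite being a dead-end
```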

These sorts of decisions can be automated by rules-based generalization tools, because the subjective decisions made by cartographers can be encoded as rules. The effort that would otherwise be spent applying cartographic modifications on a per-map basis is instead invested in defining the set of rules that embody this knowledge, and the return on that investment is realised every time the map data is processed by the rules. Once configured, such a system can automatically produce high-quality cartography from raw data, in a repeatable way, every time the base data is refreshed.
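For illustration, a minimal sketch of this rules-based idea in Python: cartographic decisions are written once as (condition, action) pairs and replayed on every data refresh. This shows the principle only; it is not 1Generalise’s actual rule model, and the feature attributes are hypothetical:

```python
# Subjective cartographic decisions encoded once as rules, then applied
# automatically and repeatably to every refresh of the base data.
RULES = [
    (lambda f: f["type"] == "building" and f["area"] < 50,
     lambda f: None),                                    # omit tiny buildings
    (lambda f: f["type"] == "building",
     lambda f: {**f, "shape": "simplified_rectangle"}),  # simplify the rest
]

def apply_rules(features):
    out = []
    for f in features:
        for cond, action in RULES:
            if cond(f):
                f = action(f)  # first matching rule wins
                break
        if f is not None:
            out.append(f)
    return out

data = [{"type": "building", "area": 30}, {"type": "building", "area": 400}]
print(apply_rules(data))  # tiny building dropped, large one simplified
```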

In conclusion, whether the data set is large or small, you can rapidly and automatically generate clear and usable maps using neocartography techniques, but to achieve this you need tools that can encode and automate traditional cartographic skills. Automatic generalization tools will bring cartographers the speed that they need, and give neocartographers better control over the content of their maps to increase quality. So which will come first: the cartographer becoming a neocartographer, or the neocartographer becoming a better cartographer? The race is on, but tomorrow’s cartographer will make good-quality maps from a wide range of data sources in little time.

References

[1] Ed Parsons http://www.edparsons.com/2011/03/and-now-there-is-neocartography/ (2011)

[2] Steve Chilton http://googleearthdesign.blogspot.co.uk/2007/08/steve-chilton-interview.html (2007)

[3] R Weibel and G Dutton, Generalising spatial data and dealing with multiple representations, http://www.geos.ed.ac.uk/~gisteac/gis_book_abridged/files/ch10.pdf (1997)

[4] The pitfalls of crowdsourced cartography http://mappingmashups.net/2010/11/03/the-pitfalls-of-crowdsourced-cartography/ (2010)

