Friday, 31 July 2015

A Case Study in Environmental GIS: Light Pollution Mapping



Marcus Hinds, a geospatial consultant, shares the results of his work using remote sensing and environmental GIS methodologies to better understand, and help mitigate, light pollution in the Greater Toronto Area in Canada.

In his recent article “Take Pride in the Geospatial Profession”, Nathan Heazlewood urged us Geomatics practitioners to be proud of the geospatial profession. GIS and Geomatics are a large part of many environmental projects because, let’s face it, environmental projects have to occur in time and space. That space is always on or beneath the surface of the earth, and the people responsible for a project’s progress need to know its specifics: what is happening, where it’s happening, why it’s happening and who is doing it. Every event is linked to the project in some way.

I’m no stranger to environmental GIS projects. Many of them cross into other disciplines such as Energy, Finance, Engineering and, probably the most controversial of all, Politics.


Light Pollution Mapping in Toronto, Canada

One example of how interdisciplinary an environmental GIS project can become is one I recently worked on: the Light Pollution Initiative with the City of Toronto. The City was looking to reduce its lighting footprint and find ways of informing Greater Toronto Area (GTA) residents about the sources and effects of light pollution. Light pollution in this case is classified mainly as any form of up-lighting and/or over-lighting that emits unwanted light into the night sky, also known as sky glow. Sky glow has a number of harmful and non-harmful effects, but the best known has to be when light spreads to suburban and rural areas and drowns out the night sky and stars. Research published in Environmental Health Perspectives has shown that star gazing and night-sky observation are in rapid decline among the younger generation, simply because we can’t see the night sky in the majority of our cities. Another effect is that deciduous trees adapt more slowly to seasonal changes because of prolonged exposure to light. Wildlife such as turtles, birds, fish, reptiles and insects show decreased reproduction due to higher light levels in previously dark habitats. That’s without even mentioning the increased risk of smog in urban areas that accompanies periods of heavy light pollution. In humans, light pollution has been linked to sleep deprivation in the short term, and to melatonin deficiency, increased risk of breast and prostate cancer, obesity, and a higher probability of early-onset diabetes in the long term.


Because a project like this is sensitive to so many variables (the layout of the power grid, the culture of the city, socioeconomic classes, and the city’s urban design), it was a very multidisciplinary feat that required tactical thinking. The response needed to draw on principles from Urban Planning, Environmental Engineering, Architecture and Ecology. The fact that the end users were a broad, largely non-technical audience also had to be factored in. As I got to work, I quickly realized that this project is an onion: the more you look at it, the more layers you find. Before I knew it, I had to think about Illumination Engineering, Power Generation and Energy Efficiency, because the hundreds of megawatt-hours of electricity consumed by up-lighting and over-illumination add stress to already stressed grid infrastructure. I also had to think about the health care system, because any ailments stemming from light pollution add cases to it. I quickly saw how broad (and valuable) environmental GIS really is.

My original idea for the project’s response was to highlight light pollution hotspots throughout the greater city area and compare them to data coming out of the electricity provider. As suspected, the brightest areas on the maps were the most energy-intensive areas of the grid. The real challenge, though, was how to communicate light pollution at night when all the available base maps are daytime imagery. Blending the two came to mind, and that is what I did.

To find the light pollution hotspots, I took a Google base map and overlaid a geo-referenced satellite light pollution image of the city captured from NASA’s International Space Station (ISS). Areas of bright up-lighting and sky glow around the city were obvious to the naked eye, but I wanted to show more. I applied lighting standards from the Illuminating Engineering Society of North America (IESNA), which meant that IESNA’s recommended practice (RP) series was now involved.

I used RP-8 for street and roadway lighting, RP-6 for sports and recreational lighting, RP-33 for outdoor lighting schemes, RP-2 for mercantile areas and RP-3 for schools and educational facilities. Each standard prescribes lighting thresholds suggesting efficient and appropriate light levels for its application, and each also discusses the type and quality of light suitable for that application. All that remained was to find how far over the lighting threshold each point of sky glow on my map was, and use this to estimate energy-use figures.


I determined the areas brighter than the lighting threshold by blending the geo-referenced base map and the NASA light pollution image together in ImageJ, an open-source, Java-based image processing package, and passing the result through filters. The first filter I used was a Gaussian high-pass filter to sharpen the image and highlight areas of bright light contrasted against dark areas. Then I applied a Gaussian low-pass filter to smooth the image and emphasize the contrast between bright and dark pixels. Finally I added a nearest-neighbour filter to generalize individual points of up-lighting and spread the pixels showing sky glow evenly around each one. This method highlighted the individual points in the GTA contributing to up-lighting, but I still needed to find the amount of light generated by each point of up-lighting and how far each stood above the lighting threshold set out in IESNA’s standards.
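The three-stage filter chain described above can be sketched in Python with SciPy’s ndimage module. This is only an illustrative stand-in for the ImageJ workflow; the function name, sigma values and window size are my own assumptions, not the project’s actual parameters.

```python
import numpy as np
from scipy import ndimage

def highlight_uplighting(img, sigma_hp=3.0, sigma_lp=1.5, spread=5):
    """Sketch of the three-stage filter chain (parameters are illustrative).

    img: 2-D float array of night-light brightness scaled to 0..1.
    """
    # 1. Gaussian high-pass (unsharp mask): sharpen bright points
    #    contrasted against dark areas
    low = ndimage.gaussian_filter(img, sigma=sigma_hp)
    high_pass = np.clip(img + (img - low), 0.0, 1.0)

    # 2. Gaussian low-pass: smooth the image while keeping the
    #    contrast between bright and dark pixels
    smoothed = ndimage.gaussian_filter(high_pass, sigma=sigma_lp)

    # 3. Neighbourhood maximum: spread sky-glow pixels evenly around
    #    each point of up-lighting (a stand-in for the nearest-neighbour
    #    generalization step)
    generalized = ndimage.maximum_filter(smoothed, size=spread)
    return generalized

# Toy example: a single bright point of up-lighting in a dark scene
scene = np.zeros((32, 32))
scene[16, 16] = 1.0
result = highlight_uplighting(scene)
```

After the three passes, the single bright pixel has been sharpened, smoothed and spread into a small patch of "sky glow" around the source.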

Since ImageJ does not have the capacity to calculate exact thresholds, I had to find another open-source package that was easy to use and, like ImageJ, Java-based. My answer was Open Source Computer Vision, better known as OpenCV. I took the blended image output from ImageJ, loaded it into OpenCV and made some copies of it, then applied a process called simple thresholding in series. The first copy was grey-scaled in order to assign a value to each pixel; the second was used to classify pixel values; and the third was used to set a lower lighting threshold value. These three images were then overlaid on each other and made transparent so the detail on all three could be seen. This assigned a value to each pixel and made it possible to determine how much each pixel was over the defined threshold. This order of filtering was suggested by an OpenCV technician and delineated light pollution areas around the GTA with high precision. OpenCV is very well suited to environmental GIS work and is particularly strong at working with polygons in photo interpretation.
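The simple-thresholding sequence described above (grey-scale, classify, measure the excess over a lower threshold) can be sketched in a few lines of NumPy; OpenCV’s cv2.threshold with the THRESH_TOZERO flag performs a similar masking step. The function name, luminosity weights and threshold value here are illustrative assumptions, not values from the project.

```python
import numpy as np

def threshold_excess(rgb, threshold):
    """Grey-scale an image and measure how far each pixel sits above a
    lighting threshold (a NumPy sketch of simple thresholding).

    rgb: H x W x 3 uint8 image.
    threshold: 0-255 grey level, e.g. derived from an IESNA standard
    (the value used below is purely illustrative).
    """
    # 1. Grey-scale: assign a single value to each pixel
    #    (standard luminosity weights for R, G, B)
    grey = rgb @ np.array([0.299, 0.587, 0.114])

    # 2. Classify: pixels at or above the threshold are "over-lit"
    over_mask = grey >= threshold

    # 3. Excess: how much each over-lit pixel exceeds the threshold
    excess = np.where(over_mask, grey - threshold, 0.0)
    return grey, over_mask, excess

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 255)          # one very bright pixel
grey, mask, excess = threshold_excess(img, threshold=120)
```

Summing the excess values over a hotspot polygon would give a per-area figure for how far that site exceeds its prescribed lighting level.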


AERIAL IMAGE OF THE GREATER TORONTO AREA (GTA), SHOWING LIGHT POLLUTION HOTSPOTS IN WHITE LIGHT. MAJOR STREETS AND HIGHWAYS CAN CLEARLY BE IDENTIFIED.

In retrospect, I’ve seen a couple of photometric surveys of cities in my time, and I must say that the data created by this project is consistent with photometric surveys. The most intriguing part is that it all happened through remote sensing.

The outcome of this survey is ongoing, but there are a number of items in progress:

  • The City is releasing documents on the use of decorative lighting and its contribution to the skyline, noting that this form of lighting should not only be efficient and sustainable (comprising LEDs), but should also be turned off during migration periods for migratory birds. See page 60 of the Tall Building Design Guidelines (link:http://goo.gl/ddANm0)
  • Many condo developers in the downtown core now turn decorative lighting off at 11pm to comply with light pollution standards and the migratory bird guidelines set out by FLAP on the FLAP website (link:http://www.flap.org/)
  • Discussion and literature evaluating the efficiency of buildings that use glass as the main facade material have been in circulation for some time. Glass facades not only make a building more energy-intensive but also pose a hazard to birds, which become disorientated and collide with the building, suffering serious injuries and death. There have also been many cases of glass falling from these buildings in Canadian cities. Ted Kesik, a prominent building scientist based in Toronto, has estimated that condo prices and maintenance fees will skyrocket in the next decade simply because of the use of glass. See The Condo Conundrum
  • Discussion surrounding guidelines requiring full cut-off/fully shielded light fixtures for outdoor lighting in the GTA, as some parts of Ottawa have done. See the Report to the Planning & Environmental Committee submitted by the Deputy City Manager, Planning, Transit and Environment, City of Ottawa

References: Skyglow/Light Pollution – NASA

River Width GIS Data Created from 1,756 Landsat Images



BY CAITLIN DEMPSEY MORAIS




Researchers used 1,756 Landsat images to develop a GIS database of river widths for the entire North American continent. Previously, river widths were estimated from topographic maps and by calculating river discharge measured at certain points along the watershed. Using software developed in Exelis VIS IDL, hydrologists from the University of North Carolina were able to extract river width calculations from remotely sensed data.

Called the North American River Width Data Set (NARWidth), the GIS data is available for download in shapefile format, either by tile or as a bulk download. To avoid seasonal variations in river widths, images for each river were selected from a time of that river’s average flow.

The resulting dataset contains attribute data about the river’s width, the number of channels/braids, whether the segment is a river or a reservoir as well as lat/long coordinates. The resulting map of the river width data shows that the widest rivers are found along the Yukon in Alaska, the Mackenzie in Canada’s Northwest Territories, the Hudson in New York, the St. Lawrence along the border of New England and Quebec, and the Mississippi.

The availability of a more accurate river width database will allow for the analyses of flood hazards, studies of ecological diversity, and estimates of the volume of greenhouse gases released by rivers and reservoirs due to bacterial activity.

More: A Satellite View of River Width

MAP OF RIVER WIDTHS FOR NORTH AMERICA. THE DARKER THE BLUE, THE WIDER THE RIVER. SOURCE: RIVER WIDTH MAP JOSHUA STEVENS, NASA, USING DATA FROM ALLEN, G. H., & PAVELSKY, T.M. (2015).

References

Allen, G. H., Pavelsky T.M., (2015), Patterns of river width and surface area newly revealed by the satellite-derived North American River Width data set. Geophysical Research Letters. doi: 10.1002/2014GL062764

Pavelsky, T.M. and L. C. Smith , RivWidth: A Software Tool for the Calculation of River Widths from Remotely Sensed Imagery, IEEE Geoscience and Remote Sensing Letters, v. 5 no. 1, January 2008.

A3 Edge Digital Mapping System Upgrades Oblique Capabilities



VisionMap announced today the introduction of upgraded oblique capabilities to its A3 Edge Digital Mapping System. The A3 Edge camera, well known for its high capture productivity, now utilizes a proprietary Roll Stabilization Technology that increases its efficiency even further, particularly for oblique projects.

The A3 Edge camera collects images by means of two telescopes that “sweep” from side to side to create an extremely wide 106° field of view. Each sweep captures oblique and vertical images simultaneously. The new Roll Stabilization Technology shortens the time it takes to complete each sweep, allowing for even faster coverage of the entire area. Productivity is increased for orthophoto, and especially for oblique production.

The A3 Edge Digital Mapping Camera and another oblique camera both surveyed an area of 1,400 km² in Georgia, USA. The A3 Edge completed the survey in 3.5 hours at 16 cm GSD, collecting vertical and oblique imagery of the entire area, while the other camera took 7.1 hours to cover the area at 25 cm GSD.

VisionMap’s A3 LightSpeed automatically processed the imagery and generated a 16 cm orthophoto in 26 machine hours. The other camera’s imagery required 50 hours of machine processing to generate the 25 cm orthophoto. The big difference was in the amount of manual work: the other camera required an additional 100 hours of manual processing, whereas the A3 data from LightSpeed required less than 10 hours.

Overall, the highly productive A3 Edge system completed the project more than twice as fast.

About VisionMap
VisionMap is a leading provider of digital mapping systems. VisionMap offers the A3 family of large format digital mapping cameras designed to address specific needs within the geospatial mapping industry, together with “LightSpeed”, a matching fully automatic ground processing unit.


Thursday, 30 July 2015

Overview of Fuzzy Logic Site Selection in GIS



BY AMANDA BRINEY




Site selection, also called suitability analysis, is a type of GIS analysis used to determine the best site for something. Potential sites evaluated in suitability analysis can include businesses such as a store, or city facilities like a hospital or school. Site selection can also be used to determine ideal habitat for a specific plant or animal species. When performing site selection analysis in GIS, users must set various criteria so that candidate sites can be rated against them.

Fuzzy logic is one commonly used type of site selection. It assigns membership values to locations that range from 0 to 1 (ESRI): 0 indicates non-membership, an unsuitable site, while 1 indicates membership, a suitable site. Fuzzy logic site selection differs from other methods because it represents the possibility of an ideal site rather than a probability, and it is commonly used to find ideal habitat for plants and animals, or other sites that are not specifically chosen by a user or developer (ESRI).


How to Use Fuzzy Logic Site Selection
Like other site selection methods, fuzzy logic follows a standard workflow to ensure that all necessary steps are taken. It differs from other methods, however, in that it is much more complex and uses a continuum of values between 0 (completely false, or unsuitable) and 1 (completely true, or suitable) rather than a simple yes or no (ESRI). Fuzzy logic is capable of examining conditions that can be partly true and partly false at the same time.


The standard workflow for fuzzy logic is as follows:
  1. Define the problem and site selection criteria
  2. Collect criteria layers
  3. Assign fuzzy membership values
  4. Perform fuzzy overlay
  5. Verify and apply results

Defining the problem and selection criteria is the most important step in fuzzy logic site selection because it helps the user determine the type of data needed for the analysis. Fuzzy logic membership (discussed further below) is an important reclassification step. Reclassification is used to simplify the interpretation of raster data by changing a single input value into a new output value (ESRI). Fuzzy overlay allows the user to overlay the various reclassified layers to analyze the possibility of a specific occurrence. The results can then be verified and used to choose the best site.


Fuzzy Logic Membership
Fuzzy logic membership helps the user to determine the likelihood that a site is suitable or unsuitable. This step assigns values from 0 to 1, with 0 being not likely or unsuitable and 1 being most likely or suitable (ESRI). Thus, the higher the fuzzy membership value, the more ideal the site. When assigning fuzzy membership values it is important to understand the available membership types and choose the one that best fits the analysis criteria. These membership types are as follows:
  1. Linear – Fuzzy membership increases or decreases at a constant rate between the minimum and maximum values.
  2. Small – High fuzzy membership is assigned to small values.
  3. Large – High fuzzy membership is assigned to large values.
  4. MS Small – High fuzzy membership is assigned to values less than the mean.
  5. MS Large – High fuzzy membership is assigned to values more than the mean.
  6. Near – High fuzzy membership is assigned to mid-range values.
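Three of the membership types above can be sketched in NumPy. The sigmoid forms below follow the ones ArcGIS documents for its Fuzzy Small and Fuzzy Large functions; the elevation values, midpoint and spread parameters are purely illustrative.

```python
import numpy as np

def fuzzy_linear(x, lo, hi):
    """Linear membership: ramps from 0 at `lo` to 1 at `hi`
    at a constant rate."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def fuzzy_large(x, midpoint, spread):
    """Large membership: high membership for large values
    (0.5 exactly at the midpoint)."""
    return 1.0 / (1.0 + (x / midpoint) ** (-spread))

def fuzzy_small(x, midpoint, spread):
    """Small membership: high membership for small values
    (0.5 exactly at the midpoint)."""
    return 1.0 / (1.0 + (x / midpoint) ** spread)

# Illustrative elevation cells: low elevations should score near 1,
# high elevations near 0 (as in the deer-habitat example later on)
elev = np.array([100.0, 500.0, 1500.0])
low_is_good = fuzzy_small(elev, midpoint=500.0, spread=5.0)
```

Each function returns a reclassified 0–1 surface, which is exactly the input the fuzzy overlay step expects.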



GRAPHIC EXAMPLE OF THE MEMBERSHIP FUNCTION TALLNESS. SOURCE: RAINES, SAWATZKY, AND BONHAM-CARTER, ESRI.


Fuzzy Logic Overlay

Once the appropriate fuzzy membership values have been assigned to the data criteria, several reclassified surfaces with values from 0 to 1 are generated (ESRI). The next step in applying fuzzy logic is to overlay these surfaces. This step is similar to weighted site selection (a site selection type that allows users to rank raster cells and assign a relative importance value to each layer) because the different reclassified surfaces are compared to each other (ESRI). To complete this step, one of several fuzzy overlay types must be chosen. The fuzzy overlay types are as follows:
  1. And – This type is best used for finding the locations that meet all criteria.
  2. Or – This type is best used for finding the locations that meet any of the criteria.
  3. Product – This type is best used for finding the best locations with combined input fuzzy membership values (ESRI).
  4. Sum – This type is best used for finding all possibly suitable locations with combined input fuzzy membership values (ESRI).
  5. Gamma – This is a complex fuzzy overlay type that requires expert knowledge and a combination of various sub-models.
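The overlay types above can be written down compactly in NumPy. The formulas are the standard fuzzy-overlay definitions (And as the minimum, Or as the maximum, Sum as the "probabilistic or", Gamma as a weighted compromise between Sum and Product); the layer values and the gamma parameter below are chosen only for illustration.

```python
import numpy as np

def fuzzy_overlay(layers, method="and", gamma=0.9):
    """Combine reclassified 0-1 membership rasters (a sketch of the
    standard fuzzy overlay types; `gamma` is illustrative)."""
    stack = np.stack(layers)
    f_product = stack.prod(axis=0)            # always <= smallest input
    f_sum = 1.0 - (1.0 - stack).prod(axis=0)  # always >= largest input
    if method == "and":        # cell must satisfy ALL criteria
        return stack.min(axis=0)
    if method == "or":         # cell may satisfy ANY criterion
        return stack.max(axis=0)
    if method == "product":    # combined, mutually reinforcing evidence
        return f_product
    if method == "sum":        # all possibly suitable locations
        return f_sum
    if method == "gamma":      # compromise between product and sum
        return f_sum ** gamma * f_product ** (1.0 - gamma)
    raise ValueError(f"unknown overlay method: {method}")

# Two illustrative membership layers for the same two cells
a = np.array([0.8, 0.2])
b = np.array([0.6, 0.9])
suitable = fuzzy_overlay([a, b], "and")   # -> [0.6, 0.2]
```

Note how And penalizes the second cell (one weak criterion), while Or would reward it for its one strong criterion.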

PREDICTING THE POST APOCALYPTIC GEOGRAPHY OF ZOMBIES USING FUZZY LOGIC. SOURCE: THE UNDEAD LIVEN UP THE CLASSROOM, EDWARD GONZÁLEZ-TENNANT.


When to Use Fuzzy Logic
Because there are several different site selection methods, it is important to understand when to use a complex method like fuzzy logic. Fuzzy logic site selection is most commonly used in projects that have an element of uncertainty, or where the user cannot state specifically where a site would be, as they could with an ideal site found through weighted site selection (ESRI).

Fuzzy logic site selection is also ideal for analyzing data that does not have discrete polygons and boundaries, as a new sporting venue would, for instance (ESRI). Instead, fuzzy logic can be used to look at areas of deer habitat based on a factor such as elevation. Potential habitat types could be classified based on elevation levels. In this example, low elevations would be considered suitable habitat and given values close to 1, while high elevations would be unsuitable and have values closer to 0.

References

ESRI. (n.d.). “Using Raster Data for Site Selection.” ESRI Virtual Campus. Personal Notes. (Course Taken 2 April 2014).

Further Reading
Incorporating Expert Knowledge – New fuzzy logic tools in ArcGIS 10 By Gary L. Raines, Don L. Sawatzky, and Graeme F. Bonham-Carter


GIS Data and the Coastline Paradox



BY JOE AKINTOLA




Imagine that you’re taking a GIS class and your instructor tasks everyone with answering the question, “What is the length of the coastline of Maine?” Everyone downloads a different GIS data set to calculate the length, and everyone comes back with a completely different answer.

The phenomenon is known as the Coastline Paradox. It has raised interesting questions in the world of Geography about how differing resolutions of data covering the same geographic area can yield remarkably different measurements of length. Questions such as “How long is the coastline of Australia?” or “Did you know that the coast of the U.S. state of Maine is longer than the coast of California?” become harder to answer consistently and depend greatly on the resolution of the GIS data used to measure the coastlines in question.

So what is the Coastline Paradox? It is the paradox that the measured length of a coastline increases each time you measure it with a smaller unit of measurement, because the smaller unit captures extra features of the coast.

What does that mean? Imagine you were told to manually measure the length of a jagged feature on a map and you have no thread to measure with. What do you do? You get a ruler, right? The result you get, however, will depend on the length of the ruler you use: the shorter the ruler, the more precise your measurement, and vice versa. Using a few long straight lines to approximate the length of a curve produces a low estimate. That is why you would use a thread to get the most precise measurement possible.

The coastline is the most obvious example of this situation due to its fractal-like (jagged, recurring) properties. The phenomenon was first observed by Lewis Fry Richardson (1881–1953) and is sometimes referred to as the Richardson effect.
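The effect is easy to reproduce numerically. The sketch below uses a Koch curve as a stand-in for a coastline (a classic fractal, not real coastline data): measuring the same curve with a coarser "ruler" (fewer, longer straight segments) gives a noticeably shorter total length.

```python
import math

def koch_refine(points, depth):
    """Replace each segment of a polyline with the classic four-segment
    Koch 'bump', `depth` times, producing a coastline-like fractal."""
    if depth == 0:
        return points
    out = [points[0]]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        h = math.sqrt(3) / 6.0                       # apex height factor
        apex = (mx - (y2 - y1) * h, my + (x2 - x1) * h)
        out += [(x1 + dx, y1 + dy), apex,
                (x1 + 2 * dx, y1 + 2 * dy), (x2, y2)]
    return koch_refine(out, depth - 1)

def polyline_length(points):
    """Total length of straight segments between consecutive vertices."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# A depth-5 Koch 'coast' between (0,0) and (1,0)
coast = koch_refine([(0.0, 0.0), (1.0, 0.0)], 5)

# Fine ruler: every vertex. Coarse ruler: every 16th vertex, which
# happens to recover the depth-3 curve exactly.
fine_length = polyline_length(coast)        # (4/3)**5, about 4.21
coarse_length = polyline_length(coast[::16])  # (4/3)**3, about 2.37
```

Each refinement multiplies the true path length by 4/3, so the "same" coast measures almost twice as long with the finer ruler, which is exactly the paradox described above.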

Advances in technology and computing, and their adoption in geography for the visualization and management of geographically referenced information (including GPS data and remotely sensed imagery) through software such as geographic information systems (GIS) and remote sensing (RS) packages, have greatly increased our ability to map the world’s coastlines more precisely and, to a large extent, to deal with the paradox.

So the next time you are tasked with measuring the length of a coastline, river, or other lengthy feature, consider the resolution of the GIS data you need to use. As discussed in an earlier article, larger scale GIS data sets tend to show more detail than smaller scale data.



AN EXAMPLE OF THE COASTLINE PARADOX. IF THE COASTLINE OF GREAT BRITAIN IS MEASURED USING UNITS 100 KM (62 MI) LONG, THEN THE LENGTH OF THE COASTLINE IS APPROXIMATELY 2,800 KM (1,700 MI). WITH 50 KM (31 MI) UNITS, THE TOTAL LENGTH IS APPROXIMATELY 3,400 KM (2,100 MI), APPROXIMATELY 600 KM (370 MI) LONGER. SOURCE: WIKIPEDIA




References

Coastline paradox: http://en.wikipedia.org/wiki/Coastline_paradox

Fractal: http://en.wikipedia.org/wiki/Fractal

Mandelbrot, B. B. “How Long Is the Coast of Britain.” Ch. 5 in The Fractal Geometry of Nature. New York: W. H. Freeman, pp. 25-33, 1983.

Mapping Monday: The Coastline Paradox: http://blog.education.nationalgeographic.com/2013/01/28/mapping-monday-the-coastline-paradox

Coastline paradox: http://mathworld.wolfram.com/CoastlineParadox.html

The Coastline Paradox: How can one coastline be two different lengths?: http://www.richannel.org/the-coastline-paradox

What Is The Coastline Paradox?: http://www.youtube.com/watch?v=I_rw-AJqpCM

Yotta Navigates Bournemouth’s Drive to Digitalisation



A range of software and support services from Yotta is supporting the UK’s Bournemouth Borough Council in its drive towards digitalisation. In a coordinated upgrade programme, Bournemouth has moved both its Mayrise Waste and Highways software to a hosted solution. The Council has also introduced mobile working and integrated the back office systems with front office solutions, including its CRM and website. This programme of improvements is designed to improve customer service, increase operational efficiencies and realise cost savings.



“A review of existing IT infrastructure and current working practices identified a number of areas for potential improvement,” commented Jane A’Court, Business Development and Accounts Manager at Bournemouth Borough Council. “By moving our Mayrise software to the online service, we have been able to benefit from the latest software upgrades as well as Yotta’s excellent support service. In addition, the hosted software will also be able to integrate with our CRM function, giving residents 24/7 access to information and the ability to report issues and log requests.



“The introduction of mobile working for Highways Inspectors is already improving the operational efficiency of planned inspections and ad-hoc reports. Using Wi-Fi and 3G enabled devices with mapping, Inspectors can receive daily inspection routines, file completed reports and respond to requests from the public without having to return to the office.”



Bournemouth Borough Council is a long term user of Yotta’s Mayrise software to support the delivery of both domestic and commercial waste and highway services. The recent move to online versions of the Mayrise systems has enabled the Council to benefit from recent software upgrades as well as the improved reliability and security of the hosted solutions. The use of cloud based software has also facilitated the integration of the back office systems with frontline customer support software, improving customer service levels.



Highways Inspectors armed with Honeywell Dolphin devices are also benefiting from the recent programme of upgrades. Using a combination of both Wi-Fi and 3G, they automatically receive the daily inspection routines direct to their handhelds, together with reports received from members of the public and other Council staff. Completed inspection reports can be automatically filed to the back office system, where the results are logged and works orders raised. Council Pest Controllers have also benefited from mobile working with increased efficiencies enabling more jobs to be completed and therefore more income for the Council. Furthermore, the introduction of mobile working in general has realised a 15 percent reduction in office space required by staff now working in the field.


TeleCommunication Systems Adds 14 U.S. Patents Advancing Public Safety, Location-Based Services, GIS/Mapping, Wireless Data and Messaging




(PRNewswire) — TeleCommunication Systems, Inc. (TCS) (NASDAQ: TSYS), a world leader in secure and highly reliable wireless communication technology, today announced that the U.S. Patent and Trademark Office (USPTO) issued twelve new patents to TCS. TCS also purchased two U.S. patents, bringing the second quarter total to 14 additional U.S. patents, as well as nine foreign patents. In the quarter, TCS filed for 16 U.S. patents. Thus, as of June 30, the total number of patents issued worldwide in the portfolio was 418, with more than 300 patent applications pending.



News Facts:
The 14 recently added U.S. patents describe innovations in several key communications categories, including:
  • Public Safety: Prank calls to an emergency call center can be a serious threat to public safety and range in levels of severity. The severity of a prank can range from a young child's repeated attempt to hear a voice at the other end of the line to an intentional distraction to divert law enforcement away from ongoing criminal activity, such as drug dealing. All prank calls waste taxpayers' money, expend critical resources unnecessarily, and can potentially lead to loss of life or property in the course of emergency response. With already over-burdened resources, emergency call centers require a means to determine whether an incoming call is a valid call for help or a prank. The recently issued "Index of Suspicion Determination for Communications Request" patent (U.S. 8,983,047) describes techniques to assess and determine an index rating of the likelihood that an incoming call is a prank based upon criteria such as the location of the device relative to the event, the known call history of the device, and other historical data.
  • Location: The proliferation of smart phones has resulted in the creation of more than a million applications, ranging from navigation and mapping to social networking, banking, and music. When invoked, many of these apps place a request to the network for device location and device "presence," an indication that the device is on the wireless network and able to receive calls or data. Both location and presence services can be message intensive on wireless networks. The recently issued "Location Derived Presence Information" patent (U.S. 8,983,048) is the fourth issued patent in a family of TCS patents that resulted from a TCS carrier customer's request for innovative ways to reduce network traffic. The patent describes methods for providing both device presence and location information as a result of just one location request.
  • GIS/Mapping: Traditional mapping and navigation applications utilized on smartphones require multiple image tiles to render the map data for the application. Retrieving these map tiles from a central server used for a nationwide navigation service can create data bottlenecks and application lag. Many navigation routes will revisit route segments as the user's location changes during the course of traversing the map route, resulting in the application sending repeated and unnecessary requests for the same image tiles to update the map. The recently issued "Image Tile Server" patent (U.S. 9,026,549) overcomes this problem by using an intermediate image tile server that caches recently viewed image tiles. A cached copy of the image tile allows for faster retrieval and an improved user experience. 
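The caching idea summarized in the GIS/Mapping bullet can be sketched as a small least-recently-used tile cache. This is a generic illustration of tile caching, not the patented design; the class name, fetch callback and tile keys are all hypothetical.

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache sitting between a map client and a central
    tile server, so revisited route segments reuse cached tiles."""

    def __init__(self, fetch, capacity=256):
        self.fetch = fetch              # callback (z, x, y) -> tile bytes
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def get(self, z, x, y):
        key = (z, x, y)
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        tile = self.fetch(z, x, y)       # only hit the server on a miss
        self.cache[key] = tile
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return tile

# Revisiting a route segment reuses the cached tile instead of sending
# a repeated request to the central server
server_calls = []
cache = TileCache(lambda z, x, y: server_calls.append((z, x, y)) or b"tile")
route = [(12, 655, 1583), (12, 656, 1583), (12, 655, 1583)]  # revisits tile 1
for tile in route:
    cache.get(*tile)
```

Three tile lookups result in only two server requests, which is the bottleneck reduction the patent summary describes.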

The remaining 11 U.S. patents issued in the period are: Tiled Map Display on a Wireless Device (U.S. 9,041,744); System and Method for Positioning in Configured Environments (U.S. 7,511,662); System and Method for Positioning in Configured Environments (U.S. 7,916,074); Web Gateway Multi-Carrier Support (U.S. 9,002,951); End-to-End Logic Tracing of Complex Call Flows in a Distributed Call System (U.S. 9,042,522); System for Efficiently Handling Cryptographic Messages Containing Nonce Values in a Wireless Connectionless Environment (U.S. 9,071,438); Ingress/Egress Call Module (U.S. 9,008,612); Wireless Emergency Services Protocols Translator Between ANSI-41 and VOIP Emergency Services Protocols (U.S. 9,001,719); Transmitter Augmented Radar/Laser Detection Using Local Mobile Network Within a Wide Area Network (U.S. 9,002,347); First Responder Wireless Emergency Alerting with Automatic Callback and Location Triggering (U.S. 8,970,366); Authentication via Motion of Wireless Device Movement (U.S. 8,984,591).

TCS Chairman, President and CEO Maurice B. Tose said: "TCS continues to build expertise and intellectual property that covers a number of key areas important to wireless and the connected ecosystems of the future. This patent portfolio protects our products, provides opportunities for royalties, and assists us in defending claims against our products directly and under indemnification provisions of contracts with major customers."

About TeleCommunication Systems, Inc. 

TeleCommunication Systems, Inc. (TCS), headquartered in Annapolis, Maryland, is a world leader in secure and highly reliable wireless communications. Our patented solutions, global presence, operational support and engineering talent enable 9-1-1, commercial location-based services and deployable wireless infrastructure; cybersecurity; defense and aerospace components; and applications for mobile location-based services and messaging. Our principal customers are wireless network operators, defense and public safety government agencies, and Fortune 150 enterprises requiring high reliability and security. Learn more at www.telecomsys.com.

Except for the historical information contained herein, this news release contains forward-looking statements as defined within Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities and Exchange Act of 1934, as amended. These statements are subject to risks and uncertainties and are based upon TCS' current expectations and assumptions that if incorrect would cause actual results to differ materially from those anticipated. Risks include the possibility that no revenues will result from our monetization efforts, that issued patents will prove to be less valuable than assumed, and those detailed from time to time in the Company's SEC reports, including the report on Form 10-K for the year ended December 31, 2014 and on Form 10-Q for the quarter ended March 31, 2015.

Existing and prospective investors are cautioned not to place undue reliance on these forward-looking statements, which speak only as of the date hereof. The Company undertakes no obligation to update or revise the information in this press release, whether as a result of new information, future events or circumstances, or otherwise.




Company Contact:
Investor Relations:


TeleCommunication Systems, Inc.
Liolios Group, Inc.
Lora Wilson   
949-929-7234 
Scott Liolios  
949-574-3860

tcs@globalresultspr.com
info@liolios.com





Logo - http://photos.prnewswire.com/prnh/20120503/PH99996LOGO

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/telecommunication-systems-adds-14-us-patents-advancing-public-safety-location-based-services-gismapping-wireless-data-and-messaging-300121075.html

SOURCE TeleCommunication Systems, Inc.
Contact:
TeleCommunication Systems, Inc.
Web: http://www.telecomsys.com


From the Exhibit Floor: Esri User Conference 2015



By Susan Smith





A look at what is being demonstrated on the Exhibit Floor is a great way to see what is trending in the geospatial industry. Location, navigation, GIS positioning, sensors, geospatial intelligence, UAS, 3D and emergency response are just a few of the areas covered in the vast offerings seen throughout the week.


I met with TomTom’s Darren Cottage, vice president of Sales and Marketing, Geospatial and Traffic; Kenneth Clay, government sales manager; and John Cassidy, general manager, NA Sales & Marketing, Geospatial and Traffic, to discuss the company’s direction, including its work with various partners such as Esri, Maponics and CarahSoft.

Announced at the conference was the addition of TomTom’s navigable maps for 13 new countries. TomTom provides traffic content in 134 countries around the globe. TomTom also announced that its map and traffic information had been chosen by the University of Minnesota’s Accessibility Observatory as part of a new national accessibility data set. They will provide map and historical speed data to help analyze accessibility to jobs for driving and transit for metropolitan areas across the U.S.

The analysis of where people live and where jobs are is multimodal, according to Cassidy, and research is done leveraging TomTom’s strategy around the connected world.

TomTom is providing real-time GIS data for many applications, including emergency GIS, and it also does pedestrian mapping and indoor mapping.

TomTom products are designed for many consumer devices, but also for in-car navigation and geospatial applications such as emergency response.

Clay, who presented on indoor mapping, a new focus for TomTom, said that half a dozen cities are available for demonstrations of stadiums, big facilities, and retail establishments.

TomTom maps are used to manage where ATM machines are, and for their maintenance. They are also addressing multiple locations for an address, sewer hookup, delivery, and consumer needs. They are also using the advanced city models of startup SmartBetterCities tied into the Esri Story Map template.

TomTom is providing traffic analysis for the Pan American games in Toronto. They also supported the London Olympics and support autonomous vehicle technology.

PinPoint-GIS from Septentrio

Septentrio has been around since 2000, and Altus Positioning Systems, which it recently acquired, since 2008. Altus is a supplier of GNSS positioning and surveying systems for GIS that recently merged with Septentrio, a company based in Leuven, Belgium, and known for its work on Galileo as a subcontractor to the European Space Agency.

Septentrio is a spinoff of a university electronics program, IMEC, the Center for NanoTechnology Unit. Its research history has evolved into creating scientific receivers for timing and scintillation signal processing. Meanwhile Septentrio has expanded into the survey and GIS markets. Neil Vancans, vice president of Septentrio, said that a channel for surveying has been developed but there has been no channel for GIS; in Europe there is a channel for both.

Altus Positioning Systems provides simple, affordable high-precision receivers that can be used from any tablet via a browser, and can publish into ArcGIS.

Septentrio announced a new software suite called PinPoint-GIS which makes GIS data collection and visualization straightforward. Septentrio’s PinPoint-GIS provides several methods of data collection, based on a standard web browser hosted on the Altus APS-NR2 and a mobile app integrated with Esri’s ArcGIS or other GIS mapping systems.

SAP is known worldwide as a leading provider of business applications, ERP and CRM solutions. With SAP HANA it provides spatial information, of which Hinnerk Gildhoff, Development Manager, SAP HANA/Spatial, says 80% is geospatial. At the Esri conference the company announced new capabilities to turbocharge spatial intelligence by simplifying, accelerating and geo-enabling access to enterprise data.

“We aim to transform the big apps trend to real time apps, to take action where the data is,” said Gildhoff.

HANA is designed to help break down silos between enterprise and GIS systems and allow analytics on a single system. It is an end-to-end platform for running applications: it has engines for predictive analytics, can do unstructured data mining from Facebook and other social media, and provides geospatial capabilities.

HANA connects to ArcGIS through SBS10 to provide feature services support, using ArcGIS Server to publish feature services. New functionality includes a spatial engine, altitude measurement, time, M-value support, and transformation functions. The latest release also enhances in-memory spatial processing capabilities to deliver faster responses for millions of data points.

Gildhoff said all applications in SAP are going spatial using the HANA processor as a spatial engine. The SAP Work Manager mobile app has added Esri feature layer integration and offline mapping capabilities.

More on the SAP HANA announcement can be found here



Trimble FieldIQ

Chris Stern, Trimble’s director of Strategy and Corporate Development, spoke about how Trimble meets “industry specific challenges” through its core technologies and products. Esri, with whom Trimble has partnered for over 20 years, is organized more around vertical sectors, and the two organizations share many joint customers. Their services and solutions include point data collection, mass data collection, aerial and ground-based scanning, sensors, point clouds and imagery, and integrated industry-specific solutions.

Trimble is very focused on Big Data and the Internet of Things, bringing in large volumes of data through sensors, laser scanning, and optical instruments. The new versions of its UAS, the UX5 and UX5 HP, are fixed-wing unmanned aerial mapping systems, and the company also showcased its new multi-rotor copter that can hover. It is useful for electric transmission inspection, emergency response, and damage reconnaissance. The UX5 and UX5 HP offer complete aerial data collection systems with powerful technologies such as a robust design, a radically simplified workflow, reversed thrust and automatic failsafe procedures.

Trimble’s software eCognition takes content, extracts features and makes datasets. The company has an underlying set of software to access Esri. eCognition addresses the increased demand for 3D data.

“We’ve always been 3D, helping customers collect highly accurate X,Y and Z data and 3D models,” said Stern. “Now there is 3D in ArcGIS Pro and CityEngine. We have Trimble SketchUp 3D Design and the 3D Warehouse – the world’s largest online catalogue of 3D content.”

Stern talked about ArcGIS Earth and the fact that Trimble has 3D already. They have centimeter accuracy in the Trimble V10 imaging rover, with 12 integrated 60MP cameras taking in 360 degree views.

In the Trimble Business Center software, as part of a new feature set, the 360-degree image comes in, the user clicks on a point at the pixel level, and based on the original position, the software can provide distance measurements.

Trimble UX5

Another feature is that when imagery is brought in, a set of measurements can be taken specifically for UAS.

With the new Trimble R1 receiver, one of Trimble’s newly introduced line of BYOD GNSS products, users go to the field with an iOS or Android smartphone or tablet running Trimble Terraflex software, for fast, efficient geospatial data collection with submeter accuracy across a fleet of mixed devices. The Spectra Precision MobileMapper 300 takes advantage of RTX mobile positioning to achieve centimeter accuracy with Android devices. The goal of these products is to make access to accurate data less specialized.

Trimble also introduced at the conference the latest version of its smart water mapping and work management cloud software, Trimble Unity version 2.0. According to company materials, the version adds new capabilities to support complex water, wastewater and stormwater industry asset maintenance planning and work execution workflows. The new release supports Bring Your Own Device (BYOD) GNSS mapping receivers for smart devices and cloud-based single sign-on integration with Esri ArcGIS Online.

Utility customers can search and organize various utility assets, including meters, pipelines, valves and hydrants with Trimble Unity version 2.0 advanced asset maintenance capabilities. These assets can be grouped into prioritized collections of work that can be assigned to crews for completion. The new features enable utilities to reduce the time and cost associated with water asset repair and installation work.

Stern noted that the new Esri GeoCollector includes Esri software of course, and Esri has added Trimble’s R1 and some other Trimble technologies to their offerings.

Summing up, Stern said that Trimble’s core technologies include hardware, software, and positioning/sensors. “We always bring all that together to help customers solve problems across a variety of industries,” he said.

Expanded Portfolio of Geospatial Solutions from Trimble



HERE Intelligent Driving – Traffic probe data combined with sensor technology can make driving safer

The company HERE (Maps for Life), known as NAVTEQ until three years ago, is a Nokia business unit that brings together Nokia’s mapping and location businesses under one umbrella. HERE technology is based on cloud computing, where location data and services are stored on remote servers, so users can access the data on any device.

HERE provides new vector-based data for Esri’s StreetMap brand of mapping products. HERE captures location content that includes road networks, buildings, parks and traffic patterns. It licenses or sells that content, along with navigation services and location-based solutions, to other businesses.

HERE has maps in nearly 200 countries, offers voice guided navigation in 94 countries, provides live traffic information in 33 countries and has indoor maps available for about 49,000 unique buildings in 45 countries.



Reality Capture Solutions from Leica Geosystems

Outside the convention center on a trailer was the Leica Geosystems Pegasus: Stream which is said to “measure the invisible.” It is a reality capturing sensor platform for below and above ground mass feature digitization.

Startup Zone
Startup companies, which Esri calls “emerging partners,” were celebrated at a media event on Monday evening during the Map Gallery. Over 50 startups were exhibiting at the conference. Working with the Esri Startup team, TomTom built a premium content offering for large-volume geocoding and routing called “StreetMap Premium for Startups,” a steeply discounted product designed just for those inside the Esri Startup Program.

MetroTech is partnering with both OSI and Esri to aggregate real-time traffic data, apply analytics and publish information that users can use to make decisions. Senior vice president of sales and service delivery, Robert Bruckner, says that traffic is “stuck in the 90s’ technology,” and that MetroTech provides the next generation of traffic analytics.

SenseFly’s eBee mapping drones were exhibited on the main Exhibit Floor but were considered in the startup category. The various Styrofoam-like eBee models are very lightweight and come in various designs. The eMotion 3D mission planning feature takes elevation data into account when setting the altitude of waypoints and the resulting flight lines. The models are light enough not to damage other flying objects or electrical lines. Models are flown by computer and are battery operated, with imagery stored on an SD card. You can use eBee’s postflight Terra 3D software to process your flight’s photos. In just a few clicks you can transform this imagery into geo-referenced 2D orthomosaics, 3D point clouds, triangle models and Digital Elevation Models (DEMs).

DroneDeploy is a simple cloud based software that allows anyone to create on-demand aerial drone maps in a single click.

Kespry designs a commercial-grade drone system that autonomously collects and analyzes high resolution geospatial information. It is very fast, with fully interconnected software included. It comes with an iPad, drone, limited access to Kespry cloud, and a groundstation. It is 3D printed and made of milled aluminum.

Echosec

Another interesting startup is Echosec, a new location-based social media search platform owned by a Russian organization, designed to provide intelligence to public safety and security professionals, marketers, law enforcement, and governments using crowdsourced data. It can provide actionable information on terror attacks, and law enforcement can see where tweets and Facebook posts are coming from when trying to solve crimes.

MapJam really appeals to media and publishing as well as commercial business with its next generation location mapping platform to empower brands to create and distribute customized maps with contextualized information.

SmartBetterCities (mentioned above in the section on TomTom) offers easy to use 3D software built on ArcGIS for the creation and management of 3D cities. They call this product “CloudCities” and it allows you to configure dashboards online, drag in charts and building data. It can also host a library of scenes such as those from CityEngine and ArcGIS Pro.

FireWhat?

FireWhat? emergency disaster response for wildfires was mentioned in Monday’s plenary session during the segment on “Fire.” The application uses real-time GIS with expert sourced information specifically for fires.

Pufferfish Puffersphere

Pufferfish has created the Puffersphere, a globe that allows you to display digital content in a 3D way, on a globe, using 360 degree video. You can use the basic finger gestures of pinch and push to expand an area or retract it to zoom in or out of a desired geographic location. This is valuable for marketing and advertising, digital display and potentially many other uses where traditional flat screen media just isn’t enough.

Summary
Well-established vendors’ offerings continue to push the envelope, making the most of the cloud, actionable intelligence, open source and real-time data to address the most pressing challenges of environment, safety, health and security. Startups arrive on the scene with less legacy baggage, which allows them to negotiate the fast-paced technology scene with enviable ease. There is a buoyancy to their presentations and enthusiasm that the larger vendors seek and embrace, and that will, I’m sure, make its way into many future major product and application offerings.

APWA-Washington Hosts Regional Asset Management Workshop



Feature presentation by event sponsor, Cityworks




Cityworks joined industry leaders and experts from the greater Washington area, sponsoring and presenting at the first-ever APWA Washington State Chapter workshop on the fundamentals of asset management. The event was held at the Pierce County Sewer and Traffic Operations Facility on June 24, 2015.

The Asset Management Committee of APWA’s Washington State Chapter hosted the workshop, which featured a presentation from a former elected official about the value of asset management, followed by a look at the state of asset management in America. The full capacity gathering also had the opportunity to review asset management systems, view a Cityworks software demonstration, and participate in a discussion about the current status and the future direction of asset management as it relates to the APWA-Washington Chapter.

“The APWA-Washington Chapter is committed to emphasizing asset management in our industry and supporting members as they learn more about asset management and begin to implement it in their organizations,” said Toby Rickman, chair of the Asset Management Committee. “In order to establish an effective asset management system, organizations must study and embrace the principles and determine how to use them to improve their own processes. This workshop was a great way to start the discussion and learn more from fellow public works professionals.”

“We [Cityworks] were happy to have been a part of APWA’s first asset management committee event,” said Ryen Tarbet, asset management practice lead for Cityworks. “The APWA continues to demonstrate the kind of leadership and guidance public works officials rely on as they address the challenges and responsibilities associated with the amount of infrastructure its members own and operate. And Cityworks is right there to help them accomplish what could otherwise be an overwhelming task. Sponsoring and participating in events like these is right in line with our commitment to public asset management.”

About Cityworks

Since 1996, Cityworks® has been streamlining the way agencies manage public infrastructure and property by combining the asset geodatabase with the business logic of agencies that care for infrastructure and property. An authoritative resource and system of record, Cityworks elevates Esri®’s ArcGIS® Location Platform specifically to manage workflow, schedule resources and prioritize activities—saving time and improving operational efficiencies. Time-tested and proven technology, Cityworks is Empowering GIS® at more than 500 organizations around the world.

Photographs available on request.

For more information, contact:

Camille Ryser

Communications Specialist

Cityworks® – Azteca Systems

801.523.2751

Email Contact

www.cityworks.com

Copyright © 2015 Azteca Systems, Inc. All rights reserved.

Azteca, Azteca Systems Inc., Cityworks, Cityworks Wireless, MyCityworks.com, cityworks.com, and @cityworks.com are either registered trademarks or trademarks of Azteca Systems Inc. in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.


Timmons Group to Provide Performance Measures for the National Association of State Foresters



The National Association of State Foresters (NASF) recently selected Timmons Group to assist in the process of defining State Forestry Performance Measures. The project will work to define a concise set of national priority performance measures and is supported by a partnership with the USDA Forest Service State & Private Forestry and NASF. Performance measures will be tied to each state’s Forest Action Plan and reported in similar manner to demonstrate the value of trees and forests to the nation.

The National Association of State Foresters is comprised of the directors of state and territorial forestry agencies and the District of Columbia. NASF seeks to advance sustainable forestry, conservation, and protection of forestlands and their associated resources.

State Forestry partners and the USFS provide immeasurable value to the citizens of their states and territories. Forestry agencies provide a wide array of services ranging from the prevention and protection of assets and lives from wildfire to the conservation of key watersheds that provide source drinking water. Other services include urban and community forestry, protection from forest health issues, consultation to private landowners on proper land management and the creation of jobs. With increased pressures on funding, NASF, their partners and the USFS initiated this performance measures project to prioritize the most important “stories” to tell that illustrate the immense value these agencies provide to citizens and wildlife through the conservation of key habitats.

This project will involve a review of best practice performance measures, intense stakeholder engagement, and the development of key metrics that are achievable by all partners. This project will also define the technical delivery solution for displaying measure data to ensure full transparency into work accomplished towards these measures.

“We have a very long history of supporting both State and Federal forestry partners,” said Lowell Ballard, Director of Geospatial Solutions of Timmons Group. “We are very pleased to have the opportunity to help NASF, their partners and the USFS in this important project. All public entities, not just forestry, are under immense pressure to provide a better sense of priority and transparency into the work they perform. This project will emphasize “achievability” and ensure that measures developed, while not easy, will be something that all partners can accomplish.”

“NASF has worked over the past several years to create performance measures that will tell compelling stories about the value of forestry,” said Jay Farrell, Executive Director of NASF. “We are pleased to be working with the USFS State and Private Forestry and Timmons Group on this effort.”

About Timmons Group

Timmons Group is a leading provider of geospatial, information architecture and engineering services focusing on developing intuitive, enterprise geospatial web applications, highly-usable, cross-device mobile applications, and integrated, geospatially-enabled enterprise solutions. Timmons Group develops innovative solutions across multiple platforms, including web, web-based mobile / HTML5, and native mobile operating systems. Timmons Group is a privately held ENR 500 company and maintains a strong business partner relationship with Esri. Timmons Group has been a recipient of both the Esri Business Partner of the Year award and the Esri Foundation Partner of the Year award. For more information, visit www.timmonsgis.com.

Media contact:
Tim Klabunde
Director of Marketing
tim.klabunde@timmons.com

Where are all the GIS Jobs Going?



There have been several layoffs over the last few months due to declining oil and gas prices and trends toward more cost effective management practices. This includes many mergers in the Oil and Gas sector by operators and vendors alike.

This has left many GIS professionals looking for jobs. For those looking in the current market right now, it is imperative to consider market trends and where the field is going. As companies merge, layoffs are typical. This occurs mostly where there is department overlap and overhead cost consideration. GIS falls directly into this category.

This trend can be seen in the following mergers of Oil, Gas and Energy companies:

  • Over the last two years Kinder Morgan, Inc. (KMI) has worked on merging all Kinder Morgan companies under KMI into one of the largest conglomerates in North America, along with the $3 billion Hiland acquisition.
  • The merger of Palmetto Engineering of Dallas, Texas and CT&T, of North Little Rock, Arkansas last year.
  • Energy Transfer Company just completed its merger with Regency Energy Partners and has been trying to buy Williams, amid rumors of merger deals with Oneok and Targa Resources.
  • Williams is already planning to merge Williams Cos. with Williams Partners, LP and is also increasing its equity interest in Utica East Ohio Midstream LLC.
  • The recent acquisition of Eagle Mapping Information Systems, Inc. by G2 Partners last December.

This is just the start of these large mergers as more companies try to stave off debt and compete in international markets. Despite this growing trend, the Bureau of Labor Statistics projects 14% job growth, from 54,000 jobs in 2012 to 61,300 jobs by 2022, even with fewer companies offering jobs. This is due to easier access to geospatial data by the regular consumer through products such as Google Earth and ArcGIS Online. The data that fuels these applications behind the scenes will still need creators, administrators and analysts.

As the geospatial career trend moves to online web mapping, cloud hosting and 3D visualization, more people will have the ability to use geospatial data. OGSpace forecasts a decline in the average power user or typical GIS analyst position, as end users gain easier access to create, process, analyze and report on their own data relevant to their work. As the field grows and changes, it will require more specific knowledge, skills and abilities that many professionals have not kept pace with as the technology changed, whether through habit or budget restraints.

With the availability of free training, GIS offered in high school, and open source GIS platforms becoming more widespread, the newer workforce will be more adept with the advancing technology. This promotes a competitive environment, as positions vacated by retiring workers will need to be filled. While the older generation will still be able to compete for jobs, it will require more personal investment and re-education to meet evolving industry standards as fewer companies exist to apply to.


APP-6C, Collada support added in Luciad V2015.1 update



Luciad has released version 2015.1, its mid-year update to its suite of geospatial software components.

The V2015.1 release includes multiple enhancements and numerous fixes requested by Luciad users.

The enhancements aim to increase performance, facilitate the development of current applications and help users meet current and future project requirements.

Notable new features in V2015.1 include:


  • Command and control joint missions with the latest APP-6C NATO military symbology
  • Turn any shape into a tactical graphic
  • Integrate 3D Collada into any operational picture
  • Side-by-side multi-feature comparison

“We are pleased to offer a great set of new features to our customers and the geospatial community with V2015.1,” said Luciad Chief Technology Officer Frank Suykens.

“These new features illustrate our commitment to remaining the technology leader in the Defense & Security realm, while also offering technology adapted for many other areas,” said Luciad Chief Commercial Officer Christoph De Preter.

“The side-by-side multi-feature comparison, for instance, is used by our customers from the intelligence community. Likewise, the feature also allows environmental and agriculture customers to rapidly detect and analyze changes in our environment.”

Luciad customers with gold maintenance can receive the V2015.1 update at no additional cost.

More information on Luciad’s geospatial software components and the features included in V2015.1 is available at www.luciad.com or by contacting Email Contact.

About Luciad
Luciad’s software components are designed for the creation of applications that tackle a range of tasks, from top-level strategy to tactical detail and mission planning to operations debriefing. By connecting directly to data sources, Luciad’s software not only analyzes and visualizes what is happening now, but also helps predict what will happen next – allowing users to act quickly and safely. “Connect, visualize, analyze, act” is both our method and our motto. www.luciad.com


Τετάρτη 29 Ιουλίου 2015

Generalization in GIS



BY CAITLIN DEMPSEY MORAIS




Sometimes GIS data contains more detail or spatial information than is needed for the scale of the map being prepared. Generalization is the method used in GIS to reduce detail in data. For example, a small scale map of the United States does not need detailed coastlines, nor does a map of California need to show every road in the state.

Generalization can be achieved by removing detail, such as showing only major roads, or showing only the boundary of a state instead of all its counties. In GIS, generalization is also used to smooth out lines, removing small details such as the nooks and crannies of a coastline or the meanderings of a stream.

Since detail about a geographic feature is simplified during generalization, generalized data is less spatially accurate. Those using generalized data to calculate length, perimeter, or area will incur errors in the calculations.
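To see how generalization biases these calculations, consider a zigzag "coastline" replaced by a single straight segment. A short Python sketch (with made-up coordinates, purely for illustration) shows the computed length shrinking:

```python
import math

def polyline_length(points):
    """Sum of straight-line segment lengths along a polyline."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

detailed = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]  # zigzag "coastline"
generalized = [(0, 0), (4, 0)]                       # simplified to a chord

print(round(polyline_length(detailed), 3))     # 5.657
print(round(polyline_length(generalized), 3))  # 4.0
```

Every vertex removed can only shorten a line, so generalized data systematically underestimates lengths and perimeters; areas are distorted in a similar way.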


Generalization in ArcGIS
Depending on whether you are generalizing vector or raster data, there are different tools for generalizing GIS data using ArcGIS. A toolset in the Spatial Analyst toolbox provides several methods of generalization for raster data. The generalization tools in the toolset are grouped into three categories: aggregating zones of data (Nibble, Shrink, Expand, Region Group, and Thin), smoothing data edges (Boundary Clean and Majority Filter), and reducing the resolution of a raster (Aggregate). For vector data, ArcGIS has a Generalize tool in the Editing toolset which uses the Douglas-Peucker simplification algorithm to simplify lines. For additional generalization methods, the Generalization toolset found in the Cartography toolbox offers a range of tools for simplifying and reducing the resolution of vector data for cartographic purposes.


SMOOTHING A LINE USING THE GENERALIZE TOOL IN ARCGIS. IMAGE: ESRI.
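The Douglas-Peucker algorithm behind the Generalize tool is simple enough to sketch. Below is a minimal, self-contained Python implementation for illustration only (not Esri's code; the coordinates are hypothetical): it keeps a line's endpoints, finds the vertex farthest from the chord joining them, and recurses on the two halves whenever that vertex lies farther away than the tolerance.

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:  # degenerate chord: fall back to point distance
        return math.hypot(x - x1, y - y1)
    # Parallelogram area over base length gives the height
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping vertices farther than `tolerance`
    from the chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    # Find the interior vertex farthest from the endpoint chord
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    max_d = max(dists)
    if max_d <= tolerance:
        return [points[0], points[-1]]  # everything in between is dropped
    i = dists.index(max_d) + 1
    # Recurse on both halves, splitting at the farthest vertex
    left = douglas_peucker(points[:i + 1], tolerance)
    right = douglas_peucker(points[i:], tolerance)
    return left[:-1] + right  # avoid duplicating the split vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))  # → [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Raising the tolerance removes more vertices; once it exceeds every offset, the line collapses to just its two endpoints, the extreme case of generalization described above.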


Generalization in QGIS

Generalization of vector data in QGIS can be achieved via the Simplify Geometries tool which is found by selecting Vector -> Geometry Tools -> Simplify geometries. From the popup window, you can then select the line or polygon layer you want to simplify and set the tolerance.



GENERALIZE FUNCTION IN QGIS.


MapShaper

If you need to generalize a GIS dataset independent of a desktop GIS application, MapShaper is a free online tool that allows you to upload a shapefile, GeoJSON, or TopoJSON file. You can then select the simplification method and whether to repair intersections, auto-snap, or prevent shape removal. Once the process has been run, a download link is provided with the newly generalized GIS data.



MAPSHAPER GENERALIZATION TOOL.




Neocartography Can Be Beautiful, Cartography Can Be Fast



Nicolas Regnauld, Product Manager at 1Spatial, discusses the debate about neocartography versus traditional cartography and the evolution toward more intelligent automation tools for creating maps.
Like the neogeography buzzword that emerged a few years ago, neocartography is a more recent term that divides the discussion into two camps: traditional cartographers, who argue that ‘maps need humans to create them using psychology and aesthetics’, versus neocartographers, who hold that ‘you have to automate map generation in order to be responsive and accessible’.

The rise of neocartography has been recognized by the International Cartographic Association (ICA), which created a new commission on Neocartography in July of 2014. The creation of this commission at the ICA highlights the growth of this ‘non expert group’ of mappers (Ed Parsons)[1] and should promote knowledge exchange between the two groups: non-expert mappers and experienced cartographers. This view is supported by the chair of the commission, Steve Chilton, who said long before the start of the commission: “My contention is that cartographers need to embrace these neo-cartographers, and work with them in the way that they possibly didn’t with GIS providers/users, and to get out there and influence the way we look at the world”.[2]

One of the problems with neocartography, highlighted by R. Weibel and G. Dutton[3], is visualisation at different scales, which has traditionally relied on manual intervention to turn detailed, large-scale data into intelligently simplified maps following cartographic principles. Neocartography aims to produce maps from widely available data sources and tools, with little manual intervention or modification of the data. As a result, it has to rely on quite blunt techniques, such as excluding certain data layers at smaller scales, filtering vertices, and simply allowing the resulting features’ representations to overlap.

I have worked for many years on automating such a process, commonly referred to as generalization: simplifying detailed spatial data to make maps at smaller scales. First in academia to prove the concept, then at Ordnance Survey GB to develop bespoke generalization processes suitable for production. Now, at a commercial software company (1Spatial), the focus is on developing generic generalization solutions that can easily be deployed to a variety of organisations. These initiatives were driven by the need to provide National Mapping Agencies with faster and more cost-effective ways of deriving their maps from their large-scale data, without sacrificing cartographic quality. However, such techniques could also be valuable to the neocartographer: the software can help reduce the density of data in a sensible way, producing better-quality maps without sacrificing speed and agility.

In the blog post ‘The Pitfalls of Crowdsourced Cartography’, Alan McConchie (an author on the Mapping Mashups blog) wrote about the problem of ‘spatially varying map quality’: online maps can grow at different rates from area to area, and at different levels of accuracy and detail. McConchie goes on to say that ‘new techniques for rendering crowdsourced map data will need to be developed that can gracefully handle differences in the level of detail present in the database.’[4] Again, generalization tools are a plausible solution to this problem: when mapping an area captured at varying levels of detail, the data can all be generalized to a common level of detail, according to the needs of the target map.

My exploratory work generalizing crowdsourced data using 1Spatial’s 1Generalise product is a good example of such new techniques for automating map production while still applying cartographic principles. Crowdsourced data is by its nature a detailed, large-scale dataset, because contributors capture what they see on the ground: buildings, roads, rivers, etc. What is often missing is the intelligent aggregation, omission, or exaggeration of these features into more abstract concepts that are useful at smaller scales. For example, as the scale gets smaller (or in neocartography terms, when you zoom out), a cluster of several buildings could be shown as a couple of simplified rectangles, then as an urban block filling the enclosing roads, then amalgamated as part of a larger town area, then replaced by a town point.
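That progression from buildings to a town point is essentially a rule keyed on the map scale. A toy sketch of such a rule; the breakpoint scales here are illustrative assumptions, not 1Spatial's actual thresholds:

```python
# Map a scale denominator (e.g. 50_000 for a 1:50,000 map) to the
# representation a building cluster should get at that scale.
def building_representation(scale_denominator):
    if scale_denominator <= 10_000:
        return "individual buildings"
    if scale_denominator <= 50_000:
        return "simplified building rectangles"
    if scale_denominator <= 250_000:
        return "urban block filling the enclosing roads"
    if scale_denominator <= 1_000_000:
        return "amalgamated town area"
    return "town point"

print(building_representation(25_000))
# -> simplified building rectangles
```

In a real generalization pipeline the rule would of course also trigger the geometric work (simplification, amalgamation, collapse to a point), not just name it.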

1SPATIAL – OSM DATA STYLED FOR 1:50 DISPLAY


1SPATIAL – GENERALIZED DATA FOR 1:50 DISPLAY

Equally, value judgements may be needed when excluding roads from a network. Rather than suddenly excluding all minor roads at a certain scale, it is better to do this selectively and ask: how significant is this minor road? Is it an insignificant short dead-end? Or is it a short dead-end that leads to a hospital?

These sorts of decisions can be automated by rules-based generalization tools because the subjective decisions made by cartographers can be encoded as rules. This means that the effort required to apply cartographic modifications on a per-map basis can instead be invested in the definition of the set of rules that embody this knowledge and so the return on this investment happens every time the map data is processed by the rules. Once configured, this system can automatically produce high-quality cartography from raw data in a repeatable way every time the base data is refreshed.
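The hospital dead-end judgement above is exactly the kind of decision that can be written down as a rule. A toy rules-based selection pass; the `Road` record, the 200 m threshold, and the destination categories are illustrative assumptions, not 1Generalise's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Road:
    name: str
    road_class: str        # "major" or "minor"
    length_m: float
    is_dead_end: bool
    leads_to: set = field(default_factory=set)  # significant destinations

def keep_at_small_scale(road):
    """Encode the cartographer's judgement as explicit rules rather
    than a blunt 'drop all minor roads at this scale' filter."""
    if road.road_class == "major":
        return True
    # a short dead-end is normally omitted at small scales...
    if road.is_dead_end and road.length_m < 200:
        # ...unless it leads somewhere important, e.g. a hospital
        return bool(road.leads_to & {"hospital", "ferry_terminal"})
    return road.length_m >= 200

roads = [
    Road("High St", "major", 1200, False),
    Road("Mill Ln", "minor", 150, True),
    Road("Infirmary Cl", "minor", 120, True, {"hospital"}),
]
print([r.name for r in roads if keep_at_small_scale(r)])
# -> ['High St', 'Infirmary Cl']
```

The short dead-end serving the hospital survives while the equally short Mill Ln is dropped, and because the judgement lives in the rule rather than in a one-off manual edit, it is re-applied for free every time the base data is refreshed.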

In conclusion, whether the dataset is large or small, you can rapidly and automatically generate clear and usable maps using neocartography techniques, but to achieve this you need tools that can encode and automate traditional cartographic skills. Automatic generalization tools will give cartographers the speed they need, and give neocartographers better control over the content of their maps to increase quality. So which will come first: the cartographer becoming a neocartographer, or the neocartographer becoming a better cartographer? The race is on, but tomorrow’s cartographer will make good-quality maps from a wide range of data sources in little time.

References

[1] Ed Parsons http://www.edparsons.com/2011/03/and-now-there-is-neocartography/ (2011)

[2] Steve Chilton http://googleearthdesign.blogspot.co.uk/2007/08/steve-chilton-interview.html (2007)

[3] R. Weibel and G. Dutton, Generalising spatial data and dealing with multiple representations, http://www.geos.ed.ac.uk/~gisteac/gis_book_abridged/files/ch10.pdf (1997)

[4] The pitfalls of crowdsourced cartography http://mappingmashups.net/2010/11/03/the-pitfalls-of-crowdsourced-cartography/ (2010)


Topcon Releases ScanMaster v3.05 Software Upgrade

Topcon Positioning Group announces an upgrade to its ScanMaster laser scanner software package with enhanced performance features. ScanMaster v3.05 is designed to handle larger point clouds as commonly collected with the GLS-2000.

The latest version includes memory overflow improvements when the operator creates large point clouds and views wide images, which allows for an even smoother experience than before. The point density display settings have also been modified and optimized for easy adjustment depending on instances when faster performance or better quality is more desirable. The software continues to be backward compatible with earlier scanner models including the GLS-1000 and GLS-1500.

Additionally, the cloud-to-cloud registration functionality allows practitioners using third-party scanners to take advantage of the efficient workflow that Topcon GLS customers have long enjoyed.

Internet: www.topcon-positioning.eu


SuperGIS Server Extends Map Possibilities with Various Resources



This week, Supergeo releases the latest version of SuperGIS Server with plenty of new functions, including related tables, improved JavaScript print tool, and new JavaScript APIs supporting OGC layers. With these new functions, users are able to publish web maps featuring more information, and integrate rich OGC resources with their maps.



In the real world, data is often stored in separate tables that are nevertheless related to one another. When these tables are linked, users can understand more about a geographic phenomenon. For example, by relating the location data of weather stations to annual temperature values, users can track and visualize environmental change and better understand global warming.
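The weather-station example above is an ordinary table join. A minimal sketch using Python's built-in sqlite3 module; the station names and temperature values are made up for illustration, and SuperGIS Server's related-tables feature would of course manage this through its own interface rather than raw SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# one table of station locations, one table of annual mean temperatures
con.execute("CREATE TABLE stations (id INTEGER PRIMARY KEY,"
            " name TEXT, lat REAL, lon REAL)")
con.execute("CREATE TABLE temps (station_id INTEGER, year INTEGER,"
            " mean_c REAL)")
con.executemany("INSERT INTO stations VALUES (?, ?, ?, ?)",
                [(1, "Taipei", 25.03, 121.56),
                 (2, "Kaohsiung", 22.63, 120.30)])
con.executemany("INSERT INTO temps VALUES (?, ?, ?)",
                [(1, 2013, 23.5), (1, 2014, 23.8), (2, 2014, 25.1)])

# the join is what turns two separate tables into one geographic story
rows = con.execute("""
    SELECT s.name, t.year, t.mean_c
    FROM stations s JOIN temps t ON t.station_id = s.id
    ORDER BY s.name, t.year
""").fetchall()
print(rows)
# -> [('Kaohsiung', 2014, 25.1), ('Taipei', 2013, 23.5), ('Taipei', 2014, 23.8)]
```

Once the location table carries coordinates, each joined row can be symbolized on the map, which is what makes the related attribute values visible as a spatial pattern.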



In addition to related tables, the latest SuperGIS Server also supports the integration of OGC layers such as WMS, WMTS, and KML. In an era of data sharing, many resources are published online, so better integration of open resources helps users quickly create meaningful maps. The latest SuperGIS Server enables users to add these OGC layers to their web maps with the JavaScript APIs, effortlessly creating more professional maps.



Fancy a trial? Register and download SuperGIS Server for a one-month trial at: http://www.supergeotek.com/download_6_server.aspx.

New to SuperGIS Server? Watch the video to get started: https://www.youtube.com/watch?v=5VaX4pEO8zw.

Want to learn more about JavaScript APIs? Go to SGDN and learn by samples:

http://sgdn.supergeotek.com/supergis_server_api_JavaScript.aspx




About Supergeo


Supergeo Technologies Inc. is a leading global provider of GIS software and solutions. Since its establishment, Supergeo has been dedicated to providing state-of-the-art geospatial technologies and comprehensive services for customers around the world. It is our vision to help users utilize geospatial technologies to create a better world.

Supergeo software and applications are used around the world as a backbone of mapping and spatial analysis. Supergeo is a professional GIS vendor, providing GIS users with complete solutions for desktop, mobile, server, and Internet platforms.



Marketing Contact:

Patty Chen

Supergeo Technologies Inc.

5F, No. 71, Sec. 1, Zhouzi St., Taipei, 114, TAIWAN

TEL:+886-2-2659 1899

Website: http://www.supergeotek.com